From owner-freebsd-fs@FreeBSD.ORG Sun Apr 28 03:36:31 2013
From: 夏盛明 <cloundcoder@gmail.com>
Date: Sun, 28 Apr 2013 11:36:30 +0800
To: freebsd-fs@freebsd.org
Subject: comments modification

hi all,

I found a comment that needs to be changed in
/usr/src/sys/cddl/contrib/opensolaris/common/zfs/zfs_ioctl_compat.h

/* 46 ZFS_IOC_IHNERIT_PROP */

should be changed
to

/* 46 ZFS_IOC_INHERIT_PROP */

because
/usr/src/sys/cddl/contrib/opensolaris/uts/common/sys/fs/zfs.h
contains the following definition:

#define ZFS_IOC_INHERIT_PROP _IOWR('Z', 44, struct zfs_cmd)

Have fun!

clone.
2013.4.28

From owner-freebsd-fs@FreeBSD.ORG Sun Apr 28 07:56:39 2013
From: Martin Matuska <mm@FreeBSD.org>
Date: Sun, 28 Apr 2013 09:56:28 +0200
Subject: Re: comments modification

These comments no longer exist in HEAD and STABLE/9 and are going to be removed from STABLE/8 after 8.4-RELEASE.
Cheers,
mm

On 28.4.2013 5:36, 夏盛明 wrote:
> hi all,
>
> I found some comments need to be changed in
> /usr/src/sys/cddl/contrib/opensolaris/common/zfs/zfs_ioctl_compat.h
>
> /* 46 ZFS_IOC_IHNERIT_PROP */
>
> should be changed to
>
> /* 46 ZFS_IOC_INHERIT_PROP */
> [...]

--
Martin Matuska
FreeBSD committer
http://blog.vx.sk
From owner-freebsd-fs@FreeBSD.ORG Sun Apr 28 14:53:56 2013
From: Olav Grønås Gjerde <olavgg@gmail.com>
Date: Sun, 28 Apr 2013 16:53:53 +0200
To: Rick Macklem
Cc: freebsd-fs@freebsd.org
Subject: Re: nfsv3 vs nfsv4 ? advantages of moving to v4?

The main reason I moved to NFSv4 was that I could export multiple ZFS filesystems with just one export. With NFSv3 I could only export one ZFS filesystem per export.

From owner-freebsd-fs@FreeBSD.ORG Sun Apr 28 14:58:07 2013
From: Jeremy Chadwick <jdc@koitsu.org>
Date: Sun, 28 Apr 2013 07:58:05 -0700
To: Olav Grønås Gjerde
Subject: Re: nfsv3 vs nfsv4 ? advantages of moving to v4?
On Sun, Apr 28, 2013 at 04:53:53PM +0200, Olav Grønås Gjerde wrote:
> The main reason I moved to nfsv4 was that I could export multiple ZFS
> filesystem with just one export. With nsfv3 I could only export one ZFS
> filesystem per export.

When you say "one/per export", what exactly do you mean?

For exporting ZFS filesystems via NFS, I've always used /etc/exports. I've never used the "share" property per ZFS filesystem, because in my experience (at the time -- this was the early days of ZFS on FreeBSD) it just flat out didn't work. Using /etc/exports always worked for me.

I always liked having all my exported filesystems in one place (/etc/exports), versus UFS ones in /etc/exports + ZFS ones requiring me to use "zfs get ..." and so on.

Does it really bother you that much to have multiple lines in /etc/exports (using NFSv3)?
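To make the /etc/exports approach discussed here concrete, a minimal NFSv3 export of several ZFS filesystems to one client network might look like the fragment below. The paths and network are hypothetical, and the option syntax follows FreeBSD's exports(5):

```
# /etc/exports -- one line per exported filesystem (hypothetical example)
/tank        -network 192.168.1.0 -mask 255.255.255.0
/tank/backup -network 192.168.1.0 -mask 255.255.255.0
/tank/home   -network 192.168.1.0 -mask 255.255.255.0
```

Each filesystem needs its own line because an export cannot cross filesystem boundaries; after editing the file, mountd is typically signaled (SIGHUP) to reread it.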
--
| Jeremy Chadwick                                   jdc@koitsu.org |
| UNIX Systems Administrator                http://jdc.koitsu.org/ |
| Mountain View, CA, US                                            |
| Making life hard for others since 1977.             PGP 4BD6C0CB |

From owner-freebsd-fs@FreeBSD.ORG Sun Apr 28 15:12:45 2013
Subject: Re: nfsv3 vs nfsv4 ? advantages of moving to v4?
From: Hiroki Sato <hrs@FreeBSD.org>
Date: Mon, 29 Apr 2013 00:12:18 +0900 (JST)
To: rmacklem@uoguelph.ca
Cc: freebsd-fs@FreeBSD.org

Rick Macklem <rmacklem@uoguelph.ca> wrote
  in <1999055521.1124890.1366849234763.JavaMail.root@erie.cs.uoguelph.ca>:

rm> Jeremy Chadwick wrote:
rm> > On Wed, Apr 24, 2013 at 04:55:20PM -0700, Marc G. Fournier wrote:
rm> > >
rm> > > I found this from '11 on Linux:
rm> > > http://archive09.linux.com/feature/138453
rm> > >
rm> > > their summary is that there isn't any major advantage in moving to
rm> > > v4, but that was 2 years ago ... thoughts / opinions ?
rm> >
rm> > Start by reading nfsv4(4).
rm> >
rm> > There are also threads about people seeing immensely decreased
rm> > performance with NFSv4. Not sure if Rick has had the time to fully
rm> > rectify this (don't let the Subject line fool you):
rm> >
rm> > http://lists.freebsd.org/pipermail/freebsd-fs/2011-September/012381.html
rm> >
rm> At this point, you can generally assume switching to NFSv4 will be a performance
rm> hit (or performance neutral at best). If you happen to have a high end server
rm> (such as a Netapp one that is a cluster that knows how to do pNFS),
rm> the NFSv4.1 client in head *might* improve performance
rm> beyond what NFSv3 gets from the same server, but as Jeremy noted, ymmv.
rm> Delegations (and the experimental work in projects/nfsv4-packrats) may eventually
rm> change that for some environments, as well. (I haven't yet fixed the "more Lookups
rm> than NFSv3" problem recently identified.)
rm>
rm> The main new features that *might* be a reason for you to adopt NFSv4 at this time are (imho):
rm> - better support for byte range locking
rm> - NFSv4 ACLs
rm> A couple of others, like referrals and security labels are still some ways
rm> (maybe a long ways) down the road.

 I need more investigation, but I was trying to use NFSv4 for a while
 and noticed that my NFS server's CPU load became much higher and the
 performance was worse than NFSv3, though a simple microbenchmark showed
 not much difference in performance.  The degradation seems to depend
 on the workload.

-- Hiroki
From owner-freebsd-fs@FreeBSD.ORG Sun Apr 28 17:16:54 2013
From: Olav Grønås Gjerde <olavgg@gmail.com>
Date: Sun, 28 Apr 2013 19:10:03 +0200
To: Jeremy Chadwick
Subject: Re: nfsv3 vs nfsv4 ? advantages of moving to v4?

If you have three ZFS filesystems:

tank
tank/backup
tank/home

and you export /tank with NFSv3, you don't really export /tank/backup and /tank/home. You only export the directories, not their contents. I think this is because you cannot export mounted filesystems within one exported filesystem.

With NFSv4, a single export of /tank exports all three, including /tank/backup and /tank/home.

This was an issue 18 months ago; I cannot confirm whether it is still an issue.

On Sun, Apr 28, 2013 at 4:58 PM, Jeremy Chadwick <jdc@koitsu.org> wrote:
> When you say "one/per export", what exactly do you mean?
> [...]
> Does it really bother you that much to have multiple lines in
> /etc/exports (using NFSv3)?
From owner-freebsd-fs@FreeBSD.ORG Sun Apr 28 20:45:36 2013
From: Rick Macklem <rmacklem@uoguelph.ca>
Date: Sun, 28 Apr 2013 16:45:35 -0400 (EDT)
To: Olav Grønås Gjerde
Subject: Re: nfsv3 vs nfsv4 ? advantages of moving to v4?

Olav Grønås Gjerde wrote:
> If you have three ZFS filesystems:
>
> tank
> tank/backup
> tank/home
>
> And if you export /tank with nfsv3, you don't really export
> /tank/backup and /tank/home.
> [...]
> With nfsv4 you will with only one export of /tank, export all three,
> including /tank/backup and /tank/home
> [...]

For /etc/exports, you will still need the three lines for NFSv4. (I don't know anything about the ZFS-specific export stuff.)

For the client-side mount, you only need to mount /tank over NFSv4 in order to see all three (if they are all exported to the client). (You can still do them as 3 mounts, but the outcome is the same as one mount for NFSv4.)

rick
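The client-side difference described above can be illustrated with hypothetical mount commands; the server name and mount points are invented, and the nfsv4 option follows FreeBSD's mount_nfs(8):

```
# NFSv3: one mount per exported filesystem (hypothetical host/paths)
mount -t nfs server:/tank        /mnt/tank
mount -t nfs server:/tank/backup /mnt/tank/backup
mount -t nfs server:/tank/home   /mnt/tank/home

# NFSv4: a single mount of the parent suffices; the nested
# filesystems become visible under it on the client
mount -t nfs -o nfsv4 server:/tank /mnt/tank
```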
From owner-freebsd-fs@FreeBSD.ORG Sun Apr 28 20:58:56 2013
From: Rick Macklem <rmacklem@uoguelph.ca>
Date: Sun, 28 Apr 2013 16:58:48 -0400 (EDT)
To: Hiroki Sato
Subject: Re: nfsv3 vs nfsv4 ? advantages of moving to v4?

Hiroki Sato wrote:
> I need more investigation, but I was trying to use NFSv4 for a while
> and noticed that my NFS server's CPU load became much higher and the
> performance was worse than NFSv3, though a simple microbenchmark showed
> not much difference in performance. The degradation seems to depend
> on the workload.

Well, someone recently noticed that builds result in far more Lookup operations on the server for NFSv4. I plan on poking at that one to see if I can fix it. There will also be extra overheads for the Open/Close operations, which don't exist in NFSv3 and which maintain Windows-style oplocks. (The only way to avoid most of these is by enabling delegations.)

Beyond that, there has been recent work on reducing CPU overheads for the DRC (duplicate request cache). This isn't NFSv4 specific, but the patch at
http://people.freebsd.org/~rmacklem/drc4.patch
seems to have worked well for testing done by wollman@. Hopefully a refined version of this, using code written by ivoras@, can make it into head in the next few weeks. (I don't know if this patch will be relevant, but the DRC seemed to be the main CPU hog for Garrett Wollman's load and at least a couple of others.)

There is also the nfsd thread FH affinity patch that ken@ recently committed. Adding NFSv4 support for that is doable, but will take a while. This apparently affects read performance for servers using ZFS (I don't know if CPU usage goes up noticeably without it?).
rick

From owner-freebsd-fs@FreeBSD.ORG Sun Apr 28 21:04:47 2013
From: Olav Grønås Gjerde <olavgg@gmail.com>
Date: Sun, 28 Apr 2013 23:04:46 +0200
To: Rick Macklem
Subject: Re: nfsv3 vs nfsv4 ? advantages of moving to v4?

That's correct, Rick! I now remember that it was the NFSv4 client that would read the mounted filesystems inside an exported NFS share, not the server, which still needs all the filesystems listed in /etc/exports.

Sorry for the confusion.

On Sun, Apr 28, 2013 at 10:45 PM, Rick Macklem <rmacklem@uoguelph.ca> wrote:
> For /etc/exports, you will still need the three lines for NFSv4.
> [...]
> For the client side mount, you only need to mount /tank over NFSv4
> in order to see all three (if they are all exported to the client).
> [...]
qYs+T1yfZx2EyBCrkFcLoyFWOmz2cC/C/T7KwOpf9kywnskYHrCUR0+/yyy1QLakt1xa iAJQ== MIME-Version: 1.0 X-Received: by 10.224.147.83 with SMTP id k19mr31140666qav.72.1367187920481; Sun, 28 Apr 2013 15:25:20 -0700 (PDT) Received: by 10.49.1.44 with HTTP; Sun, 28 Apr 2013 15:25:20 -0700 (PDT) Received: by 10.49.1.44 with HTTP; Sun, 28 Apr 2013 15:25:20 -0700 (PDT) In-Reply-To: <20130428145805.GA81766@icarus.home.lan> References: <20130428145805.GA81766@icarus.home.lan> Date: Sun, 28 Apr 2013 15:25:20 -0700 Message-ID: Subject: Re: nfsv3 vs nfsv4 ? advantages of moving to v4? From: Freddie Cash To: Jeremy Chadwick Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.14 Cc: FreeBSD Filesystems , =?UTF-8?B?T2xhdiBHcsO4bsOlcyBHamVyZGU=?= X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 28 Apr 2013 22:25:27 -0000 cat /etc/zfs/exports Works the same for UFS and ZFS. :) At least on FreeBSD. Solaris-based OSes have more in-depth support for NFS-exported ZFS. On 2013-04-28 7:58 AM, "Jeremy Chadwick" wrote: > On Sun, Apr 28, 2013 at 04:53:53PM +0200, Olav Grns Gjerde wrote: > > The main reason I moved to nfsv4 was that I could export multiple ZFS > > filesystem with just one export. With nsfv3 I could only export one ZFS > > filesystem per export. > > When you say "one/per export", what exactly do you mean? > > For exporting ZFS filesystems via NFS, I've always used /etc/exports. > I've never used the "share" property per ZFS filesystem, because in my > experience (at the time -- this was early days of ZFS on FreeBSD) it > just flat out didn't work. Using /etc/exports always worked for me. > > I always liked having all my exported filesystems in one place > (/etc/exports), versus UFS ones in /etc/exports + ZFS ones requiring me > to use "zfs get ..." and so on. 
> > Does it really bother you that much to have multiple lines in > /etc/exports (using NFSv3)? > > -- > | Jeremy Chadwick jdc@koitsu.org | > | UNIX Systems Administrator http://jdc.koitsu.org/ | > | Mountain View, CA, US | > | Making life hard for others since 1977. PGP 4BD6C0CB | > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > From owner-freebsd-fs@FreeBSD.ORG Mon Apr 29 01:57:02 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 2695F32E for ; Mon, 29 Apr 2013 01:57:02 +0000 (UTC) (envelope-from jdc@koitsu.org) Received: from qmta01.emeryville.ca.mail.comcast.net (qmta01.emeryville.ca.mail.comcast.net [IPv6:2001:558:fe2d:43:76:96:30:16]) by mx1.freebsd.org (Postfix) with ESMTP id 0A1651EF4 for ; Mon, 29 Apr 2013 01:57:01 +0000 (UTC) Received: from omta13.emeryville.ca.mail.comcast.net ([76.96.30.52]) by qmta01.emeryville.ca.mail.comcast.net with comcast id VdSF1l00617UAYkA1dx1MU; Mon, 29 Apr 2013 01:57:01 +0000 Received: from koitsu.strangled.net ([67.180.84.87]) by omta13.emeryville.ca.mail.comcast.net with comcast id Vdx01l00A1t3BNj8Zdx0j7; Mon, 29 Apr 2013 01:57:00 +0000 Received: by icarus.home.lan (Postfix, from userid 1000) id 226C373A1B; Sun, 28 Apr 2013 18:57:00 -0700 (PDT) Date: Sun, 28 Apr 2013 18:57:00 -0700 From: Jeremy Chadwick To: Olav =?unknown-8bit?B?R3L4buVz?= Gjerde Subject: Re: nfsv3 vs nfsv4 ? advantages of moving to v4? 
Message-ID: <20130429015700.GA91179@icarus.home.lan> References: <20130428145805.GA81766@icarus.home.lan> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.5.21 (2010-09-15) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=comcast.net; s=q20121106; t=1367200621; bh=77/9X86b/ZlDpok7MTEWI0PSs3yHWYa3ycCT/Qq6y9Y=; h=Received:Received:Received:Date:From:To:Subject:Message-ID: MIME-Version:Content-Type; b=TeWimF4DJdaOEiZf/mg4nRCEP2KyxTLiS4kQ2Tm56r5jqKhdNwjxsVtZ6wChZBCDp Z969j4SVYMkES9kIwbu/NUmJkWEHQb/7ZBRQDs1nUeR0cvbdaScmsEaGVqi+oedpeG bnFKsCYTD/z+V11zTfUYlYsCAazVHPQZc9xc8+4Fec41axxEnZVdyZPRJiIYDBvV25 gRb3Aim41u8a3qaJ6Pvt9IwmWcbcfm5QAWQp50SiKRBSgzqTBPSGaTmT1V1lM6kQq4 G688tUgY3CRuGFDNrlD/eUgXhjsHjL0Gzjmn9qkXK5RidWD8Xe6NfZfDEAJsiKvigZ Tb23ZssX+L6tg== Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 29 Apr 2013 01:57:02 -0000 On Sun, Apr 28, 2013 at 07:10:03PM +0200, Olav Grønås Gjerde wrote: > If you have three ZFS filesystems: > tank > tank/backup > tank/home > > And if you export /tank with NFSv3, you don't really export /tank/backup > and /tank/home. > You only export the directories, but not their contents. > I think it is because you cannot export mounted filesystems within > one exported filesystem. > > With NFSv4, a single export of /tank exports all three, > including /tank/backup and /tank/home. > > This was an issue 18 months ago; I cannot confirm whether it's still an issue. Maybe I'm still misunderstanding, but it sounds like what you want (for NFSv3) is the -alldirs option, e.g.: /tank -alldirs 10.0.0.20 which would allow 10.0.0.20 to mount /tank, /tank/backup, /tank/home, or whatever else under /tank, with NFSv3. 
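To make the configurations being compared concrete, here is a sketch of the server-side /etc/exports lines under discussion. The 10.0.0.20 address and the tank datasets are just the example names from this thread; exports(5) on your release is authoritative, and the exact NFSv4 behaviour depends on the server version:

```
# NFSv3: each ZFS filesystem normally gets its own export line,
# or -alldirs lets the client mount any directory under the export:
/tank         -alldirs  10.0.0.20
/tank/backup            10.0.0.20
/tank/home              10.0.0.20

# NFSv4: the export lines above are still needed (as Rick notes),
# plus a V4: line naming the NFSv4 root; the client then mounts
# /tank once and can descend into the child filesystems:
V4: / 10.0.0.20
```

After editing /etc/exports, mountd must be told to re-read it (e.g. service mountd reload) before the new exports take effect.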
-- | Jeremy Chadwick jdc@koitsu.org | | UNIX Systems Administrator http://jdc.koitsu.org/ | | Mountain View, CA, US | | Making life hard for others since 1977. PGP 4BD6C0CB | From owner-freebsd-fs@FreeBSD.ORG Mon Apr 29 02:04:22 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 1B2F23D9 for ; Mon, 29 Apr 2013 02:04:22 +0000 (UTC) (envelope-from jdc@koitsu.org) Received: from qmta01.emeryville.ca.mail.comcast.net (qmta01.emeryville.ca.mail.comcast.net [IPv6:2001:558:fe2d:43:76:96:30:16]) by mx1.freebsd.org (Postfix) with ESMTP id F3A541F17 for ; Mon, 29 Apr 2013 02:04:21 +0000 (UTC) Received: from omta23.emeryville.ca.mail.comcast.net ([76.96.30.90]) by qmta01.emeryville.ca.mail.comcast.net with comcast id Vd1Y1l00M1wfjNsA1e4M3f; Mon, 29 Apr 2013 02:04:21 +0000 Received: from koitsu.strangled.net ([67.180.84.87]) by omta23.emeryville.ca.mail.comcast.net with comcast id Ve4L1l00x1t3BNj8je4M5K; Mon, 29 Apr 2013 02:04:21 +0000 Received: by icarus.home.lan (Postfix, from userid 1000) id CF1C073A1B; Sun, 28 Apr 2013 19:04:20 -0700 (PDT) Date: Sun, 28 Apr 2013 19:04:20 -0700 From: Jeremy Chadwick To: Freddie Cash Subject: Re: nfsv3 vs nfsv4 ? advantages of moving to v4? 
Message-ID: <20130429020420.GB91179@icarus.home.lan> References: <20130428145805.GA81766@icarus.home.lan> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.5.21 (2010-09-15) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=comcast.net; s=q20121106; t=1367201061; bh=dFkXObqyjpP8pJIS9FH82G8vgRUCaIy35875AxiPXRY=; h=Received:Received:Received:Date:From:To:Subject:Message-ID: MIME-Version:Content-Type; b=nOzlqYyRtUTld8LT/fa3Vk+E9KYjJcDyYbc782jV3+IM5E2+x5GDfRHqejGRU8FwJ gaPte664uRbM1G/jywPCekjcSn2XeCa0MgBh+sbciI4CK3x5PSvHNbayFLHo+PygnA 9i4f6VM9ScYrbV4wCRkI7yc1Ert2lsqHxW0s87LshryA4JLKpHNZkhi5DnK+9v5eI0 y3e0+ubtPBKOckH0mP0WS5/gU4egf+Q4BiR31p/0kuiY8kWT3C+U0jS6rEml04A2Cw /2uWUsCdfc9nCrJQMJGGU19ENINSucXbC6wpauzyrj9dEXmWZUvexmtt3a5jc0L402 Pq2X9xS893/lg== Cc: FreeBSD Filesystems , Olav =?unknown-8bit?B?R3LDuG7DpXM=?= Gjerde X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 29 Apr 2013 02:04:22 -0000 On Sun, Apr 28, 2013 at 03:25:20PM -0700, Freddie Cash wrote: > cat /etc/zfs/exports > > Works the same for UFS and ZFS. :) > > At least on FreeBSD. Solaris-based OSes have more in-depth support for > NFS-exported ZFS. That file is simply the contents of all the "share" properties per ZFS filesystem. Look for ZFS_EXPORTS_PATH in src/cddl and work backwards to the fsshare_main() function in fsshare.c. That didn't used to work properly, like I said in my mail (last line): "For exporting ZFS filesystems via NFS, I've always used /etc/exports. I've never used the "share" property per ZFS filesystem, because in my experience (at the time -- this was early days of ZFS on FreeBSD) ..." I'm glad to see whatever was broken has been fixed/addressed/whatever, but at one time -- and for a fairly lengthy duration -- it did not work. 
Not to mention, you're not supposed to mess with that file (see FILE_HEADER in fsshare.c) by hand. -- | Jeremy Chadwick jdc@koitsu.org | | UNIX Systems Administrator http://jdc.koitsu.org/ | | Mountain View, CA, US | | Making life hard for others since 1977. PGP 4BD6C0CB | From owner-freebsd-fs@FreeBSD.ORG Mon Apr 29 06:44:40 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 7034EA9E for ; Mon, 29 Apr 2013 06:44:40 +0000 (UTC) (envelope-from ajit.jain@cloudbyte.com) Received: from mail-ob0-x22d.google.com (mail-ob0-x22d.google.com [IPv6:2607:f8b0:4003:c01::22d]) by mx1.freebsd.org (Postfix) with ESMTP id 412981A48 for ; Mon, 29 Apr 2013 06:44:40 +0000 (UTC) Received: by mail-ob0-f173.google.com with SMTP id xn12so5194206obc.18 for ; Sun, 28 Apr 2013 23:44:39 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20120113; h=x-received:mime-version:from:date:message-id:subject:to :content-type:x-gm-message-state; bh=PxIU+q320BmilJlbgPZK4WI7F9O/7l99BvGrI+F5glo=; b=XocrqeY/9nkw6ssT4JmKNxrtaqV9nQydYRKRH+amaNf9GXFfNggMD+t+F/9CVOH9EM HGo0d5Ai9TBY+GMqI1jv0G81iVFb6GvUkEmuh5amUJmobl+636BU1mpQJ5lfDV1OIvh9 qPWNGrt3eHtw8EZbWpxmuvRZCjpyXriGixYlt9LeL0IpUmYay+kAHocHwZoGFgbASz10 6W430YoBnvWP8EbJgoTSrxZ8igqrbFpBa/By9qHu6jipyx1nn7XCPxn3hBo2GeGTof4f Vq9WG3qpehLSMdwHalVb12zH3MNw1WO1ys++EKnAtHTh9lewBQSzX+ESR9ORf3zdMilr UOow== X-Received: by 10.60.173.196 with SMTP id bm4mr12253459oec.108.1367217879786; Sun, 28 Apr 2013 23:44:39 -0700 (PDT) MIME-Version: 1.0 Received: by 10.76.142.106 with HTTP; Sun, 28 Apr 2013 23:44:19 -0700 (PDT) From: Ajit Jain Date: Mon, 29 Apr 2013 12:14:19 +0530 Message-ID: Subject: seeing data corruption with zfs trim functionality To: freebsd-fs@freebsd.org, Ajit Jain X-Gm-Message-State: ALoCoQnF1JXb0mGiO1l9eoCroGbLMmgMcLkdbC2q9CIPd1R4hPt1XCGY2O+qLBV4z91IV7+gqSuo Content-Type: text/plain; charset=ISO-8859-1 
X-Content-Filtered-By: Mailman/MimeDel 2.1.14 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 29 Apr 2013 06:44:40 -0000 Hi, I am running ZFS with TRIM functionality (ported from head). I am seeing data corruption when running iotest* with multiple threads (never with a single thread). The patches merged to add TRIM support are as follows: 1. 240868 (ZFS TRIM patch) 2. 230053 and 245252 (block device driver TRIM support) 3. 239655 (fix for an issue in patch 230053) I am "NOT" seeing data corruption in the following cases: 1. Running iotest with a single thread (TRIM enabled in the entire I/O stack). 2. TRIM enabled at the ZFS layer but disabled at the driver layer, i.e. the delete method set to NONE (even with multiple threads). Since patch 240868 alone was not working, I pulled in the additional ZFS TRIM patches 244155, 244187, 244188, 248572 (though I am not using a separate L2ARC device), 248573, 248574, 248575 and 248576. I am still seeing the same issue. Issue: after running for some time with multiple threads, the write system call sometimes returns EIO or error code 122 (checksum error). I looked at the GEOM code a bit; I think it already has TRIM (DELETE) command support. Still, I am not sure that I have pulled in all the required patches across the entire I/O stack. I am using an LSI SAS HBA card to connect to the SSD; the firmware seems to claim TRIM support. *iotest: a non-standard FreeBSD utility which creates files, does I/O on them, and can be invoked in single- or multi-threaded mode. Thanks. 
ajit From owner-freebsd-fs@FreeBSD.ORG Mon Apr 29 08:21:46 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 31E51E4C for ; Mon, 29 Apr 2013 08:21:46 +0000 (UTC) (envelope-from prvs=1831672f64=killing@multiplay.co.uk) Received: from mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) by mx1.freebsd.org (Postfix) with ESMTP id CC2851E1A for ; Mon, 29 Apr 2013 08:21:45 +0000 (UTC) Received: from r2d2 ([46.65.172.4]) by mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) (MDaemon PRO v10.0.4) with ESMTP id md50003533867.msg for ; Mon, 29 Apr 2013 09:21:36 +0100 X-Spam-Processed: mail1.multiplay.co.uk, Mon, 29 Apr 2013 09:21:36 +0100 (not processed: message from valid local sender) X-MDDKIM-Result: neutral (mail1.multiplay.co.uk) X-MDRemoteIP: 46.65.172.4 X-Return-Path: prvs=1831672f64=killing@multiplay.co.uk X-Envelope-From: killing@multiplay.co.uk X-MDaemon-Deliver-To: freebsd-fs@freebsd.org Message-ID: <60316751643743738AB83DABC6A5934B@multiplay.co.uk> From: "Steven Hartland" To: "Ajit Jain" , References: Subject: Re: seeing data corruption with zfs trim functionality Date: Mon, 29 Apr 2013 09:22:06 +0100 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="iso-8859-1"; reply-type=original Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 29 Apr 2013 08:21:46 -0000 ----- Original Message ----- From: "Ajit Jain" > I am running zfs with trim functionality (ported from head). 
Seeing data > corruption when running iotest* with multiple threads (never saw data > corruption with single thread). > > The patches merged to add trim support are as follows: > 1. 240868 (zfs trim patch) > 2. 230053 and 245252 (block device driver trim support) > 3. 239655 (fix an issue in patch 230053) > > I am "NOT" seeing data corruption in the following cases: > 1. Running iotest with single thread (Trim is enabled at entire io stack). > 2. Trim is enabled at zfs layer but disable at driver layer i.e. delete > method is set to NONE (even with multiple threads). > > > Since patch 240868 alone was not working as I pulled in additional zfs trim > patches 244155, 244187, 244188, 248572 (however I am not using separate > L2arc device), 248573, 248574, 248575 and 248576. Still I am seeing the > same issue. > > Issue: After some time running with multiple thread write system call > return sometimes with EIO or 122 (checksum error) error code. > > I looked at GEOM code a bit I think it already has the trim (DELETE) > command support. Still I am doubtful if I have pulled in all required > patches in the entire I/O stack. > > I am using a LSI SAS HBA card to connect to the SSD, firmware seems to > claim the support for trim. > > *iotest: non standard freebsd FreeBSD utility, which creates files and does > I/O on the files and can be invoked in single/multithread mode to do the > I/O. What version are you porting the changes to? What SSD are you using? What LSI controller are you using? Regards Steve ================================================ This e.mail is private and confidential between Multiplay (UK) Ltd. and the person or entity to whom it is addressed. In the event of misdirection, the recipient is prohibited from using, copying, printing or otherwise disseminating it or any information contained in it. In the event of misdirection, illegible or incomplete transmission please telephone +44 845 868 1337 or return the E.mail to postmaster@multiplay.co.uk. 
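An aside for anyone reproducing the report above: the knob names below are assumptions inferred from the revisions cited, not something stated in this thread. The r240868 ZFS TRIM patch added a global sysctl switch (renamed in the later r248572 series), and the da(4) driver changes in r230053 expose a per-device delete method, which is what "case 2" in the report toggles. Check the names against your merged source tree:

```
# ZFS-level TRIM switch (name varies by patch revision):
sysctl vfs.zfs.trim_disable       # or vfs.zfs.trim.enabled on newer code

# Per-device BIO_DELETE translation in the da(4) driver:
sysctl kern.cam.da.0.delete_method
sysctl kern.cam.da.0.delete_method=NONE   # disable TRIM at the driver layer,
                                          # the "case 2" that avoids corruption
```

Comparing these values between a corrupting and a non-corrupting configuration narrows down whether the ZFS layer or the driver layer is issuing the problematic deletes.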
From owner-freebsd-fs@FreeBSD.ORG Mon Apr 29 08:25:58 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id D32D1E8; Mon, 29 Apr 2013 08:25:58 +0000 (UTC) (envelope-from cloundcoder@gmail.com) Received: from mail-ve0-x22c.google.com (mail-ve0-x22c.google.com [IPv6:2607:f8b0:400c:c01::22c]) by mx1.freebsd.org (Postfix) with ESMTP id 85F811E6D; Mon, 29 Apr 2013 08:25:58 +0000 (UTC) Received: by mail-ve0-f172.google.com with SMTP id db10so2854536veb.31 for ; Mon, 29 Apr 2013 01:25:58 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:x-received:in-reply-to:references:date:message-id :subject:from:to:content-type; bh=k5pmziWo5ESdfUcMwZaj/4GQyNCIv+XCHUafm/18ZlA=; b=CXvwgX5rbor6hBZWh0x8cSFwoUSLunH+iUR6juWYotxTxhz3tsqjT/Gqh03H0SZY81 XMb//oGF/KF102iaXBJUoCvkv4N1kCGIBiBJHb84eMHTYMiqK8tNGFWP/r0neEGZd+pJ EoPxFM60tfypX7lZK40kN70xDbQ95WHuJWw2H1h71RvDejNhBVmfOua7BtlJAMNLR5NQ 3lFmBL7EzGtekugi8GX3csRC03DBel5WeqB/soJuekGcEmYDw5weWYK7zoWhIbcCPjet PqJDlja+QRuBoRADByO3grNRfzEl7ojgw2nTeV59gR3VL0I75efinbJX40RKm/bKLxCL 5XRg== MIME-Version: 1.0 X-Received: by 10.52.174.196 with SMTP id bu4mr27534953vdc.117.1367223958040; Mon, 29 Apr 2013 01:25:58 -0700 (PDT) Received: by 10.220.164.137 with HTTP; Mon, 29 Apr 2013 01:25:57 -0700 (PDT) In-Reply-To: References: <517CD603.9050501@FreeBSD.org> Date: Mon, 29 Apr 2013 16:25:57 +0800 Message-ID: Subject: Re: comments modification From: shengming xia To: Martin Matuska , freebsd-fs@freebsd.org Content-Type: text/plain; charset=GB2312 Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.14 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 29 Apr 2013 08:25:58 -0000 I check the HEAD version , they actually 
exist in r249319, /head/sys/cddl/contrib/opensolaris/common/zfs/zfs_ioctl_compat.h: line 251: 44, /* 46 ZFS_IOC_IHNERIT_PROP */ line 307: 46, /* 44 ZFS_IOC_IHNERIT_PROP */ > 2013/4/28 Martin Matuska > >> Thank you for your e-mail, >> >> these comments don't exist in HEAD and STABLE/9 anymore and are going to >> be removed from STABLE/8 after 8.4-RELEASE. >> >> Cheers, >> mm >> >> On 28.4.2013 5:36, Shengming Xia wrote: >> > hi all, >> > >> > I found some comments need to be changed in >> > /usr/src/sys/cddl/contrib/opensolaris/common/zfs/zfs_ioctl_compat.h >> > >> > /* 46 ZFS_IOC_IHNERIT_PROP */ >> > >> > should be changed to >> > >> > /* 46 ZFS_IOC_INHERIT_PROP */ >> > >> > >> > because somewhere in >> > /usr/src/sys/cddl/contrib/opensolaris/uts/common/sys/fs/zfs.h >> > exists the following definition, >> > >> > #define ZFS_IOC_INHERIT_PROP _IOWR('Z', 44, struct >> > zfs_cmd) >> > >> > >> > >> > Have fun! >> > >> > >> > clone. >> > 2013.4.28 >> > _______________________________________________ >> > freebsd-fs@freebsd.org mailing list >> > http://lists.freebsd.org/mailman/listinfo/freebsd-fs >> > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" >> >> >> -- >> Martin Matuska >> FreeBSD committer >> http://blog.vx.sk >> >> > From owner-freebsd-fs@FreeBSD.ORG Mon Apr 29 10:20:31 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id B9E7BC49 for ; Mon, 29 Apr 2013 10:20:31 +0000 (UTC) (envelope-from ajit.jain@cloudbyte.com) Received: from mail-oa0-f43.google.com (mail-oa0-f43.google.com [209.85.219.43]) by mx1.freebsd.org (Postfix) with ESMTP id 854151670 for ; Mon, 29 Apr 2013 10:20:31 +0000 (UTC) Received: by mail-oa0-f43.google.com with SMTP id k7so5862329oag.16 for ; Mon, 29 Apr 2013 03:20:30 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20120113; 
h=x-received:mime-version:in-reply-to:references:from:date:message-id :subject:to:cc:content-type:x-gm-message-state; bh=3kxPwJZA+4acCqMlMS1+sAERDT/+MZjlseiuo8CWgi0=; b=Xx9gHvHwjWLzW1mEuy3hdDG3xPJ8S5yllVzKhoZcq6V6HKftcV/TdqmjGw7V0Ax0tI oy/6PmcTOI+xWQiIuqrONwj+FCAqSq2P9psDzD2ZHwNteX7L7eS8K9rjToNiPGRz3Qhw H4vIMGCfGA4tg0W04RX5aq2Bx4cM8+v41qaTz4HGkBj9Ha480yicZGVtz5E2T09O89ZI jjqzdX/F+KDkFxOT+Y4LGM+hAlNf2evjVoyK+AHELzVVF9HkSwEIY81IYZOcE/avrr7s bmhMjJ40lPkFRpwTjs/4Rz2sKSWu6v97i34Rst577lLYTuehVmbo+2Km54b1eDs5ij7t OYwA== X-Received: by 10.60.65.68 with SMTP id v4mr23906886oes.13.1367230830595; Mon, 29 Apr 2013 03:20:30 -0700 (PDT) MIME-Version: 1.0 Received: by 10.76.142.106 with HTTP; Mon, 29 Apr 2013 03:20:10 -0700 (PDT) In-Reply-To: <60316751643743738AB83DABC6A5934B@multiplay.co.uk> References: <60316751643743738AB83DABC6A5934B@multiplay.co.uk> From: Ajit Jain Date: Mon, 29 Apr 2013 15:50:10 +0530 Message-ID: Subject: Re: seeing data corruption with zfs trim functionality To: Steven Hartland Content-Type: multipart/mixed; boundary=001a11c1cb68fc2f5e04db7d3bc3 X-Gm-Message-State: ALoCoQlOmiTs8HrTVKwx1HXwgqYEbCIhB8I+rlTYbjQweuQ05+YiiBxzWe+oxIXeqDlS0S4XQGPU X-Content-Filtered-By: Mailman/MimeDel 2.1.14 Cc: freebsd-fs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 29 Apr 2013 10:20:31 -0000 --001a11c1cb68fc2f5e04db7d3bc3 Content-Type: text/plain; charset=ISO-8859-1 Hi Steven, FreeBSD version: 9. SSD: Seagate SSD; the complete smartctl output is attached to this mail. I am not sure whether that provides the SSD information you were looking for; if not, could you please tell me the command (if any) to get it. 
LSI card: mpslsi0@pci0:2:0:0: class=0x010700 card=0x30801000 chip=0x00721000 rev=0x03 hdr=0x00 vendor = 'LSI Logic / Symbios Logic' device = 'SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon]' class = mass storage subclass = SAS Complete pciconf -lv output is attached with mail. thanks ajit On Mon, Apr 29, 2013 at 1:52 PM, Steven Hartland wrote: > ----- Original Message ----- From: "Ajit Jain" > > > I am running zfs with trim functionality (ported from head). Seeing data >> corruption when running iotest* with multiple threads (never saw data >> corruption with single thread). >> >> The patches merged to add trim support are as follows: >> 1. 240868 (zfs trim patch) >> 2. 230053 and 245252 (block device driver trim support) >> 3. 239655 (fix an issue in patch 230053) >> >> I am "NOT" seeing data corruption in the following cases: >> 1. Running iotest with single thread (Trim is enabled at entire io stack). >> 2. Trim is enabled at zfs layer but disable at driver layer i.e. delete >> method is set to NONE (even with multiple threads). >> >> >> Since patch 240868 alone was not working as I pulled in additional zfs >> trim >> patches 244155, 244187, 244188, 248572 (however I am not using separate >> L2arc device), 248573, 248574, 248575 and 248576. Still I am seeing the >> same issue. >> >> Issue: After some time running with multiple thread write system call >> return sometimes with EIO or 122 (checksum error) error code. >> >> I looked at GEOM code a bit I think it already has the trim (DELETE) >> command support. Still I am doubtful if I have pulled in all required >> patches in the entire I/O stack. >> >> I am using a LSI SAS HBA card to connect to the SSD, firmware seems to >> claim the support for trim. >> >> *iotest: non standard freebsd FreeBSD utility, which creates files and >> does >> I/O on the files and can be invoked in single/multithread mode to do the >> I/O. >> > > What version are you porting the changes to? > > What SSD are you using? 
> > What LSI controller are you using? > > Regards > Steve > > ==============================**================== > This e.mail is private and confidential between Multiplay (UK) Ltd. and > the person or entity to whom it is addressed. In the event of misdirection, > the recipient is prohibited from using, copying, printing or otherwise > disseminating it or any information contained in it. > In the event of misdirection, illegible or incomplete transmission please > telephone +44 845 868 1337 > or return the E.mail to postmaster@multiplay.co.uk. > > --001a11c1cb68fc2f5e04db7d3bc3 Content-Type: application/octet-stream; name=pciconf Content-Disposition: attachment; filename=pciconf Content-Transfer-Encoding: base64 X-Attachment-Id: f_hg3hwowz0 [base64 attachment body omitted: pciconf -lv output] --001a11c1cb68fc2f5e04db7d3bc3 Content-Type: application/octet-stream; name="smartclt.out" Content-Disposition: attachment; filename="smartclt.out" Content-Transfer-Encoding: base64 X-Attachment-Id: f_hg3hx03d1 [base64 attachment body omitted: smartctl output for the SEAGATE ST100FM0002 SSD; truncated in the archive]
ciBjb3VudDogICAgICAgIDAKTm8gc2VsZi10ZXN0cyBoYXZlIGJlZW4gbG9nZ2VkCkxvbmcgKGV4 dGVuZGVkKSBTZWxmIFRlc3QgZHVyYXRpb246IDMyNzY3IHNlY29uZHMgWzU0Ni4xIG1pbnV0ZXNd Cg== --001a11c1cb68fc2f5e04db7d3bc3-- From owner-freebsd-fs@FreeBSD.ORG Mon Apr 29 10:43:31 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id A8CC36A4 for ; Mon, 29 Apr 2013 10:43:31 +0000 (UTC) (envelope-from prvs=1831672f64=killing@multiplay.co.uk) Received: from mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) by mx1.freebsd.org (Postfix) with ESMTP id 15EE61779 for ; Mon, 29 Apr 2013 10:43:30 +0000 (UTC) Received: from r2d2 ([46.65.172.4]) by mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) (MDaemon PRO v10.0.4) with ESMTP id md50003534978.msg for ; Mon, 29 Apr 2013 11:43:26 +0100 X-Spam-Processed: mail1.multiplay.co.uk, Mon, 29 Apr 2013 11:43:26 +0100 (not processed: message from valid local sender) X-MDDKIM-Result: neutral (mail1.multiplay.co.uk) X-MDRemoteIP: 46.65.172.4 X-Return-Path: prvs=1831672f64=killing@multiplay.co.uk X-Envelope-From: killing@multiplay.co.uk X-MDaemon-Deliver-To: freebsd-fs@freebsd.org Message-ID: <3BFB7F45D8AF46879238F99E6CAEA348@multiplay.co.uk> From: "Steven Hartland" To: "Ajit Jain" References: <60316751643743738AB83DABC6A5934B@multiplay.co.uk> Subject: Re: seeing data corruption with zfs trim functionality Date: Mon, 29 Apr 2013 11:44:00 +0100 MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="----=_NextPart_000_0940_01CE44CE.D6F204E0" X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157 X-Content-Filtered-By: Mailman/MimeDel 2.1.14 Cc: freebsd-fs X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 
29 Apr 2013 10:43:31 -0000 This is a multi-part message in MIME format. ------=_NextPart_000_0940_01CE44CE.D6F204E0 Content-Type: text/plain; charset="iso-8859-1" Content-Transfer-Encoding: quoted-printable Ooo a SAS SSD. Can you: 1. Test with a SATA SSD after applying the attached patches. 2. Rerun from a current kernel and see if you still see corruption; this will eliminate the possibility of any missing patches. ----- Original Message ----- From: Ajit Jain To: Steven Hartland Cc: freebsd-fs Sent: Monday, April 29, 2013 11:20 AM Subject: Re: seeing data corruption with zfs trim functionality Hi Steven, FreeBSD version: 9 SSD: Seagate SSD; the complete smartctl output is attached to the mail. Not sure if I could provide the SSD information that you were looking for. If not, could you please tell me the command (if any) to get the information. LSI card: mpslsi0@pci0:2:0:0: class=0x010700 card=0x30801000 chip=0x00721000 rev=0x03 hdr=0x00 vendor = 'LSI Logic / Symbios Logic' device = 'SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon]' class = mass storage subclass = SAS The complete pciconf -lv output is attached to the mail. thanks ajit On Mon, Apr 29, 2013 at 1:52 PM, Steven Hartland wrote: ----- Original Message ----- From: "Ajit Jain" I am running zfs with trim functionality (ported from head). Seeing data corruption when running iotest* with multiple threads (never saw data corruption with a single thread). The patches merged to add trim support are as follows: 1. 240868 (zfs trim patch) 2. 230053 and 245252 (block device driver trim support) 3. 239655 (fix an issue in patch 230053) I am "NOT" seeing data corruption in the following cases: 1. Running iotest with a single thread (trim is enabled throughout the I/O stack). 2. Trim is enabled at the zfs layer but disabled at the driver layer, i.e. the delete method is set to NONE (even with multiple threads).
Since patch 240868 alone was not working, I pulled in the additional zfs trim patches 244155, 244187, 244188, 248572 (however I am not using a separate L2ARC device), 248573, 248574, 248575 and 248576. Still I am seeing the same issue. Issue: after some time running with multiple threads, the write system call sometimes returns EIO or 122 (checksum error). I looked at the GEOM code a bit and I think it already has trim (DELETE) command support. Still, I am doubtful whether I have pulled in all the required patches across the entire I/O stack. I am using an LSI SAS HBA card to connect to the SSD; the firmware seems to claim support for trim. *iotest: a non-standard FreeBSD utility which creates files, does I/O on them, and can be invoked in single- or multi-threaded mode. What version are you porting the changes to? What SSD are you using? What LSI controller are you using? Regards Steve ================================================ This e.mail is private and confidential between Multiplay (UK) Ltd. and the person or entity to whom it is addressed. In the event of misdirection, the recipient is prohibited from using, copying, printing or otherwise disseminating it or any information contained in it. In the event of misdirection, illegible or incomplete transmission please telephone +44 845 868 1337 or return the E.mail to postmaster@multiplay.co.uk. ------=_NextPart_000_0940_01CE44CE.D6F204E0 Content-Type: application/octet-stream; name="cam-scsi_da-reprobe.patch" Content-Transfer-Encoding: quoted-printable Content-Disposition: attachment; filename="cam-scsi_da-reprobe.patch"

Fix probe in progress check in dareprobe
--- sys/cam/scsi/scsi_da.c.orig2	2013-04-27 23:53:11.000000000 +0000
+++ sys/cam/scsi/scsi_da.c	2013-04-28 16:05:48.187091065 +0000
@@ -3208,7 +3244,7 @@
 	softc = (struct da_softc *)periph->softc;

 	/* Probe in progress; don't interfere. */
-	if ((softc->flags & DA_FLAG_PROBED) == 0)
+	if (softc->state != DA_STATE_NORMAL)
 		return;

 	status = cam_periph_acquire(periph);
------=_NextPart_000_0940_01CE44CE.D6F204E0 Content-Type: application/octet-stream; name="cam-scsi_da-enable-trim.patch" Content-Transfer-Encoding: quoted-printable Content-Disposition: attachment; filename="cam-scsi_da-enable-trim.patch"

Enable ATA TRIM support choice by autodetection and correct method names after
increasing the priority of ATA TRIM
Index: sys/cam/scsi/scsi_da.c
===================================================================
--- sys/cam/scsi/scsi_da.c	(revision 249941)
+++ sys/cam/scsi/scsi_da.c	(working copy)
@@ -133,14 +133,14 @@
 	DA_DELETE_WS16,
 	DA_DELETE_WS10,
 	DA_DELETE_ZERO,
-	DA_DELETE_MIN = DA_DELETE_UNMAP,
+	DA_DELETE_MIN = DA_DELETE_ATA_TRIM,
 	DA_DELETE_MAX = DA_DELETE_ZERO
 } da_delete_methods;

 static const char *da_delete_method_names[] =
-    { "NONE", "DISABLE", "UNMAP", "ATA_TRIM", "WS16", "WS10", "ZERO" };
+    { "NONE", "DISABLE", "ATA_TRIM", "UNMAP", "WS16", "WS10", "ZERO" };
 static const char *da_delete_method_desc[] =
-    { "NONE", "DISABLED", "UNMAP", "ATA TRIM", "WRITE SAME(16) with UNMAP",
+    { "NONE", "DISABLED", "ATA TRIM", "UNMAP", "WRITE SAME(16) with UNMAP",
       "WRITE SAME(10) with UNMAP", "ZERO" };

 /* Offsets into our private area for storing information */
------=_NextPart_000_0940_01CE44CE.D6F204E0 Content-Type: application/octet-stream; name="cam-scsi_da-probe-order.patch" Content-Transfer-Encoding: quoted-printable Content-Disposition: attachment; filename="cam-scsi_da-probe-order.patch"

Update probe flow so that devices with lbp can also disable disksort.

Ensure that delete_available is reset so re-probes after a media change,
to one with different delete characteristics, will result in the correct
methods being flagged as available.

Make all ccb state changes use a consistent flow:
* free()
* xpt_release_ccb()
* softc->state =
* xpt_schedule()
--- sys/cam/scsi/scsi_da.c.orig3	2013-04-28 17:26:38.778757822 +0000
+++ sys/cam/scsi/scsi_da.c	2013-04-28 18:24:20.917876197 +0000
@@ -2398,7 +2398,7 @@

 	if (!scsi_vpd_supported_page(periph, SVPD_BLOCK_LIMITS)) {
 		/* Not supported skip to next probe */
-		softc->state = DA_STATE_PROBE_ATA;
+		softc->state = DA_STATE_PROBE_BDC;
 		goto skipstate;
 	}

@@ -2745,9 +2745,9 @@
 	 * with the short version of the command.
 	 */
 	if (maxsector == 0xffffffff) {
-		softc->state = DA_STATE_PROBE_RC16;
 		free(rdcap, M_SCSIDA);
 		xpt_release_ccb(done_ccb);
+		softc->state = DA_STATE_PROBE_RC16;
 		xpt_schedule(periph, priority);
 		return;
 	}
@@ -2849,9 +2849,9 @@
 	    (error_code == SSD_CURRENT_ERROR) &&
 	    (sense_key == SSD_KEY_ILLEGAL_REQUEST)))) {
 		softc->flags &= ~DA_FLAG_CAN_RC16;
-		softc->state = DA_STATE_PROBE_RC;
 		free(rdcap, M_SCSIDA);
 		xpt_release_ccb(done_ccb);
+		softc->state = DA_STATE_PROBE_RC;
 		xpt_schedule(periph, priority);
 		return;
 	} else
@@ -2908,36 +2908,39 @@
 		    &softc->sysctl_task);
 		xpt_announce_periph(periph, announce_buf);

-		if (lbp) {
-			/*
-			 * Based on older SBC-3 spec revisions
-			 * any of the UNMAP methods "may" be
-			 * available via LBP given this flag so
-			 * we flag all of them as availble and
-			 * then remove those which further
-			 * probes confirm aren't available
-			 * later.
-			 *
-			 * We could also check readcap(16) p_type
-			 * flag to exclude one or more invalid
-			 * write same (X) types here
-			 */
-			dadeleteflag(softc, DA_DELETE_WS16, 1);
-			dadeleteflag(softc, DA_DELETE_WS10, 1);
-			dadeleteflag(softc, DA_DELETE_ZERO, 1);
-			dadeleteflag(softc, DA_DELETE_UNMAP, 1);
-
-			softc->state = DA_STATE_PROBE_LBP;
-			xpt_release_ccb(done_ccb);
-			xpt_schedule(periph, priority);
-			return;
-		}
 	} else {
 		xpt_print(periph->path, "fatal error, "
 		    "could not acquire reference count\n");
 	}
 }

+	/* Ensure re-probe doesn't see old delete. */
+	softc->delete_available = 0;
+	if (lbp) {
+		/*
+		 * Based on older SBC-3 spec revisions
+		 * any of the UNMAP methods "may" be
+		 * available via LBP given this flag so
+		 * we flag all of them as availble and
+		 * then remove those which further
+		 * probes confirm aren't available
+		 * later.
+		 *
+		 * We could also check readcap(16) p_type
+		 * flag to exclude one or more invalid
+		 * write same (X) types here
+		 */
+		dadeleteflag(softc, DA_DELETE_WS16, 1);
+		dadeleteflag(softc, DA_DELETE_WS10, 1);
+		dadeleteflag(softc, DA_DELETE_ZERO, 1);
+		dadeleteflag(softc, DA_DELETE_UNMAP, 1);
+
+		xpt_release_ccb(done_ccb);
+		softc->state = DA_STATE_PROBE_LBP;
+		xpt_schedule(periph, priority);
+		return;
+	}
+
 	xpt_release_ccb(done_ccb);
 	softc->state = DA_STATE_PROBE_BDC;
 	xpt_schedule(periph, priority);
@@ -2965,8 +2968,8 @@

 	if (lbp->flags & SVPD_LBP_UNMAP) {
 		free(lbp, M_SCSIDA);
-		softc->state = DA_STATE_PROBE_BLK_LIMITS;
 		xpt_release_ccb(done_ccb);
+		softc->state = DA_STATE_PROBE_BLK_LIMITS;
 		xpt_schedule(periph, priority);
 		return;
 	}
@@ -2995,7 +2998,7 @@

 	free(lbp, M_SCSIDA);
 	xpt_release_ccb(done_ccb);
-	softc->state = DA_STATE_PROBE_ATA;
+	softc->state = DA_STATE_PROBE_BDC;
 	xpt_schedule(periph, priority);
 	return;
 }
@@ -3058,7 +3061,7 @@

 	free(block_limits, M_SCSIDA);
 	xpt_release_ccb(done_ccb);
-	softc->state = DA_STATE_PROBE_ATA;
+	softc->state = DA_STATE_PROBE_BDC;
 	xpt_schedule(periph, priority);
 	return;
 }
@@ -3095,8 +3098,8 @@
 	}

 	free(bdc, M_SCSIDA);
-	softc->state = DA_STATE_PROBE_ATA;
 	xpt_release_ccb(done_ccb);
+	softc->state = DA_STATE_PROBE_ATA;
 	xpt_schedule(periph, priority);
 	return;
 }
------=_NextPart_000_0940_01CE44CE.D6F204E0-- From owner-freebsd-fs@FreeBSD.ORG Mon Apr 29 10:51:44
2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id BC7A4992 for ; Mon, 29 Apr 2013 10:51:44 +0000 (UTC) (envelope-from jdc@koitsu.org) Received: from qmta07.emeryville.ca.mail.comcast.net (qmta07.emeryville.ca.mail.comcast.net [IPv6:2001:558:fe2d:43:76:96:30:64]) by mx1.freebsd.org (Postfix) with ESMTP id A28ED17D0 for ; Mon, 29 Apr 2013 10:51:44 +0000 (UTC) Received: from omta04.emeryville.ca.mail.comcast.net ([76.96.30.35]) by qmta07.emeryville.ca.mail.comcast.net with comcast id Vmrk1l0010lTkoCA7mrkm8; Mon, 29 Apr 2013 10:51:44 +0000 Received: from koitsu.strangled.net ([67.180.84.87]) by omta04.emeryville.ca.mail.comcast.net with comcast id Vmrj1l0091t3BNj8QmrjcB; Mon, 29 Apr 2013 10:51:44 +0000 Received: by icarus.home.lan (Postfix, from userid 1000) id 4E46173A1B; Mon, 29 Apr 2013 03:51:43 -0700 (PDT) Date: Mon, 29 Apr 2013 03:51:43 -0700 From: Jeremy Chadwick To: Steven Hartland Subject: Re: seeing data corruption with zfs trim functionality Message-ID: <20130429105143.GA1492@icarus.home.lan> References: <60316751643743738AB83DABC6A5934B@multiplay.co.uk> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <60316751643743738AB83DABC6A5934B@multiplay.co.uk> User-Agent: Mutt/1.5.21 (2010-09-15) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=comcast.net; s=q20121106; t=1367232704; bh=xOS5dMGiCw68TkvanQU6+ecDhIC1l+jRqvg0LYeCaw8=; h=Received:Received:Received:Date:From:To:Subject:Message-ID: MIME-Version:Content-Type; b=tUsKocLCGKMIjzKINCp1+ChJq15VSfitk1n8f/lteK7oNdj8DC3Ak32esyDx/Hhpz OCCPgC8RGX7CL3Fqf1XYOWcehuro6FmiKhBFKgFl2FchRT57QYAhT1fd/2nmRTdo20 PqgM5uwBWyOJwP6BcAOczVqZzPLK3B7TqSaR5aHYtPjEYu52pP810KxSikCe4bxLUn vQOIY0SHU0DzYjISnFbsXYzEcEMNNSUTKvdgFTHuspQpdBR3oL0BB+mg45PyBTNC+9 RS62yvASGVi2eD8zScCyqkttmyA1qx3pn69ijefjjcoPRTxc7GsLdmzKwofUmZV2eQ SFu+hyQW9Eb7g== Cc: freebsd-fs@freebsd.org 
X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 29 Apr 2013 10:51:44 -0000 On Mon, Apr 29, 2013 at 09:22:06AM +0100, Steven Hartland wrote: > ----- Original Message ----- From: "Ajit Jain" > > > >I am running zfs with trim functionality (ported from head). Seeing data > >corruption when running iotest* with multiple threads (never saw data > >corruption with single thread). > > > >The patches merged to add trim support are as follows: > >1. 240868 (zfs trim patch) > >2. 230053 and 245252 (block device driver trim support) > >3. 239655 (fix an issue in patch 230053) > > > >I am "NOT" seeing data corruption in the following cases: > >1. Running iotest with single thread (Trim is enabled at entire io stack). > >2. Trim is enabled at zfs layer but disable at driver layer i.e. delete > >method is set to NONE (even with multiple threads). > > > > > >Since patch 240868 alone was not working as I pulled in additional zfs trim > >patches 244155, 244187, 244188, 248572 (however I am not using separate > >L2arc device), 248573, 248574, 248575 and 248576. Still I am seeing the > >same issue. > > > >Issue: After some time running with multiple thread write system call > >return sometimes with EIO or 122 (checksum error) error code. > > > >I looked at GEOM code a bit I think it already has the trim (DELETE) > >command support. Still I am doubtful if I have pulled in all required > >patches in the entire I/O stack. > > > >I am using a LSI SAS HBA card to connect to the SSD, firmware seems to > >claim the support for trim. > > > >*iotest: non standard freebsd FreeBSD utility, which creates files and does > >I/O on the files and can be invoked in single/multithread mode to do the > >I/O. > > What version are you porting the changes to? > > What SSD are you using? > > What LSI controller are you using? 
I'd also like to see "zpool status" (for every pool that involves this SSD) and "gpart show" against the disk itself. -- | Jeremy Chadwick jdc@koitsu.org | | UNIX Systems Administrator http://jdc.koitsu.org/ | | Mountain View, CA, US | | Making life hard for others since 1977. PGP 4BD6C0CB | From owner-freebsd-fs@FreeBSD.ORG Mon Apr 29 10:56:54 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 9E0B0CC4 for ; Mon, 29 Apr 2013 10:56:54 +0000 (UTC) (envelope-from prvs=1831672f64=killing@multiplay.co.uk) Received: from mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) by mx1.freebsd.org (Postfix) with ESMTP id 205311804 for ; Mon, 29 Apr 2013 10:56:53 +0000 (UTC) Received: from r2d2 ([46.65.172.4]) by mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) (MDaemon PRO v10.0.4) with ESMTP id md50003535210.msg for ; Mon, 29 Apr 2013 11:56:52 +0100 X-Spam-Processed: mail1.multiplay.co.uk, Mon, 29 Apr 2013 11:56:52 +0100 (not processed: message from valid local sender) X-MDDKIM-Result: neutral (mail1.multiplay.co.uk) X-MDRemoteIP: 46.65.172.4 X-Return-Path: prvs=1831672f64=killing@multiplay.co.uk X-Envelope-From: killing@multiplay.co.uk X-MDaemon-Deliver-To: freebsd-fs@freebsd.org Message-ID: <3AD1AB31003D49B2BF2EA7DD411B38A2@multiplay.co.uk> From: "Steven Hartland" To: "Ajit Jain" , References: <60316751643743738AB83DABC6A5934B@multiplay.co.uk> <20130429105143.GA1492@icarus.home.lan> Subject: Re: seeing data corruption with zfs trim functionality Date: Mon, 29 Apr 2013 11:57:27 +0100 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="iso-8859-1"; reply-type=original Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 
Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 29 Apr 2013 10:56:54 -0000 ----- Original Message ----- From: "Jeremy Chadwick" To: "Steven Hartland" Cc: "Ajit Jain" ; Sent: Monday, April 29, 2013 11:51 AM Subject: Re: seeing data corruption with zfs trim functionality > On Mon, Apr 29, 2013 at 09:22:06AM +0100, Steven Hartland wrote: >> ----- Original Message ----- From: "Ajit Jain" >> >> >> >I am running zfs with trim functionality (ported from head). Seeing data >> >corruption when running iotest* with multiple threads (never saw data >> >corruption with single thread). >> > >> >The patches merged to add trim support are as follows: >> >1. 240868 (zfs trim patch) >> >2. 230053 and 245252 (block device driver trim support) >> >3. 239655 (fix an issue in patch 230053) >> > >> >I am "NOT" seeing data corruption in the following cases: >> >1. Running iotest with single thread (Trim is enabled at entire io stack). >> >2. Trim is enabled at zfs layer but disable at driver layer i.e. delete >> >method is set to NONE (even with multiple threads). >> > >> > >> >Since patch 240868 alone was not working as I pulled in additional zfs trim >> >patches 244155, 244187, 244188, 248572 (however I am not using separate >> >L2arc device), 248573, 248574, 248575 and 248576. Still I am seeing the >> >same issue. >> > >> >Issue: After some time running with multiple thread write system call >> >return sometimes with EIO or 122 (checksum error) error code. >> > >> >I looked at GEOM code a bit I think it already has the trim (DELETE) >> >command support. Still I am doubtful if I have pulled in all required >> >patches in the entire I/O stack. >> > >> >I am using a LSI SAS HBA card to connect to the SSD, firmware seems to >> >claim the support for trim. 
>> > >> >*iotest: non standard freebsd FreeBSD utility, which creates files and does >> >I/O on the files and can be invoked in single/multithread mode to do the >> >I/O. >> >> What version are you porting the changes to? >> >> What SSD are you using? >> >> What LSI controller are you using? > > I'd also like to see "zpool status" (for every pool that involves this > SSD) and "gpart show" against the disk itself. Also: 1. What FW version is your LSI? You can get this from dmesg. 2. The exact command line you're running iotest with? Regards Steve ================================================ This e.mail is private and confidential between Multiplay (UK) Ltd. and the person or entity to whom it is addressed. In the event of misdirection, the recipient is prohibited from using, copying, printing or otherwise disseminating it or any information contained in it. In the event of misdirection, illegible or incomplete transmission please telephone +44 845 868 1337 or return the E.mail to postmaster@multiplay.co.uk.
From owner-freebsd-fs@FreeBSD.ORG Mon Apr 29 10:59:17 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id F20E2D9E for ; Mon, 29 Apr 2013 10:59:17 +0000 (UTC) (envelope-from jdc@koitsu.org) Received: from qmta12.emeryville.ca.mail.comcast.net (qmta12.emeryville.ca.mail.comcast.net [IPv6:2001:558:fe2d:44:76:96:27:227]) by mx1.freebsd.org (Postfix) with ESMTP id D693C1829 for ; Mon, 29 Apr 2013 10:59:17 +0000 (UTC) Received: from omta13.emeryville.ca.mail.comcast.net ([76.96.30.52]) by qmta12.emeryville.ca.mail.comcast.net with comcast id Vmh21l00417UAYkACmzHBR; Mon, 29 Apr 2013 10:59:17 +0000 Received: from koitsu.strangled.net ([67.180.84.87]) by omta13.emeryville.ca.mail.comcast.net with comcast id VmzG1l00R1t3BNj8ZmzGMj; Mon, 29 Apr 2013 10:59:17 +0000 Received: by icarus.home.lan (Postfix, from userid 1000) id 8EAFB73A1C; Mon, 29 Apr 2013 03:59:16 -0700 (PDT) Date: Mon, 29 Apr 2013 03:59:16 -0700 From: Jeremy Chadwick To: Steven Hartland Subject: Re: seeing data corruption with zfs trim functionality Message-ID: <20130429105916.GA1584@icarus.home.lan> References: <60316751643743738AB83DABC6A5934B@multiplay.co.uk> <20130429105143.GA1492@icarus.home.lan> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20130429105143.GA1492@icarus.home.lan> User-Agent: Mutt/1.5.21 (2010-09-15) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=comcast.net; s=q20121106; t=1367233157; bh=fx+MnpWjfiwQrKzuoXXo/7JM6U1Vvh8OHHrVaSQx7JE=; h=Received:Received:Received:Date:From:To:Subject:Message-ID: MIME-Version:Content-Type; b=XRRYy08/Q/Nk9JzqRkAUIMFWvvNcnsT6C6MDeo7fxSHhBR92rtqXwRLi4krydOjKM Nnjqp5ifbdjlkC1Fw9/3TQU7H+v+rS6PIRgf4ptML84+x9esGYg4PYoNR1P/ksvs9l QvdhOb94xwMVTjPdsVjaMbVKK86JONLyAqKxfS57vfEPH0f4LWhvnr0KJ99mHzv3oK DGHz6jvCN7dMyOVxZfqm6Xqm/kEC0P0mcSgNrcbK5EPyZztwfFY3zKKfK18g2dC7Mg 
Z78OKZUIAYMehZR+drX/+ew+eZIokHBz0uVd0DVFb/XrSNGCPH/3NZ4/agge6xAy97 KUDoFin70zHyA== Cc: freebsd-fs@freebsd.org, Alexander Motin X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 29 Apr 2013 10:59:18 -0000 On Mon, Apr 29, 2013 at 03:51:43AM -0700, Jeremy Chadwick wrote: > On Mon, Apr 29, 2013 at 09:22:06AM +0100, Steven Hartland wrote: > > ----- Original Message ----- From: "Ajit Jain" > > > > > > >I am running zfs with trim functionality (ported from head). Seeing data > > >corruption when running iotest* with multiple threads (never saw data > > >corruption with single thread). > > > > > >The patches merged to add trim support are as follows: > > >1. 240868 (zfs trim patch) > > >2. 230053 and 245252 (block device driver trim support) > > >3. 239655 (fix an issue in patch 230053) > > > > > >I am "NOT" seeing data corruption in the following cases: > > >1. Running iotest with single thread (Trim is enabled at entire io stack). > > >2. Trim is enabled at zfs layer but disable at driver layer i.e. delete > > >method is set to NONE (even with multiple threads). > > > > > > > > >Since patch 240868 alone was not working as I pulled in additional zfs trim > > >patches 244155, 244187, 244188, 248572 (however I am not using separate > > >L2arc device), 248573, 248574, 248575 and 248576. Still I am seeing the > > >same issue. > > > > > >Issue: After some time running with multiple thread write system call > > >return sometimes with EIO or 122 (checksum error) error code. > > > > > >I looked at GEOM code a bit I think it already has the trim (DELETE) > > >command support. Still I am doubtful if I have pulled in all required > > >patches in the entire I/O stack. > > > > > >I am using a LSI SAS HBA card to connect to the SSD, firmware seems to > > >claim the support for trim. 
> > > > > >*iotest: non standard freebsd FreeBSD utility, which creates files and does > > >I/O on the files and can be invoked in single/multithread mode to do the > > >I/O. > > > > What version are you porting the changes to? > > > > What SSD are you using? > > > > What LSI controller are you using? > > I'd also like to see "zpool status" (for every pool that involves this > SSD) and "gpart show" against the disk itself. Also, the controller involved is an mps(4) controller, which to the underlying subsystem is SCSI. TRIM (as it's called; the actual name per ATA standard is DATA SET MANAGEMENT) is purely an ATA specification thing. The SCSI equivalent is called UNMAP, or alternately WRITE SAME. (This is not the case here, but just mentioning it: even in the cases of SCSI controllers that have SATA disks attached, the OS ends up submitting UNMAP/WRITE SAME, which the controller has to convert into the relevant ATA DATA SET MANAGEMENT command. If the controller firmware screws this up, there's not much we can do about it.) References for FreeBSD: http://lists.freebsd.org/pipermail/freebsd-current/2011-December/030714.html PLEASE READ THE LAST PARAGRAPH OF THAT POST. This brings into question whether or not relevant subsystems (ranging from mps(4) to GEOM(4) to CAM(4)) actually have working UNMAP/WRITE SAME or not, or if the controller itself is doing something stupid with them. I'm CC'ing mav@ for what should be obvious reasons. -- | Jeremy Chadwick jdc@koitsu.org | | UNIX Systems Administrator http://jdc.koitsu.org/ | | Mountain View, CA, US | | Making life hard for others since 1977.
PGP 4BD6C0CB | From owner-freebsd-fs@FreeBSD.ORG Mon Apr 29 11:06:44 2013 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 14F7131F for ; Mon, 29 Apr 2013 11:06:44 +0000 (UTC) (envelope-from owner-bugmaster@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) by mx1.freebsd.org (Postfix) with ESMTP id EAC1D1919 for ; Mon, 29 Apr 2013 11:06:43 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.6/8.14.6) with ESMTP id r3TB6hPJ018108 for ; Mon, 29 Apr 2013 11:06:43 GMT (envelope-from owner-bugmaster@FreeBSD.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.6/8.14.6/Submit) id r3TB6hxG018106 for freebsd-fs@FreeBSD.org; Mon, 29 Apr 2013 11:06:43 GMT (envelope-from owner-bugmaster@FreeBSD.org) Date: Mon, 29 Apr 2013 11:06:43 GMT Message-Id: <201304291106.r3TB6hxG018106@freefall.freebsd.org> X-Authentication-Warning: freefall.freebsd.org: gnats set sender to owner-bugmaster@FreeBSD.org using -f From: FreeBSD bugmaster To: freebsd-fs@FreeBSD.org Subject: Current problem reports assigned to freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 29 Apr 2013 11:06:44 -0000 Note: to view an individual PR, use: http://www.freebsd.org/cgi/query-pr.cgi?pr=(number). The following is a listing of current problems submitted by FreeBSD users. These represent problem reports covering all versions including experimental development code and obsolete releases. S Tracker Resp. 
Description -------------------------------------------------------------------------------- o kern/177985 fs [zfs] disk usage problem when copying from one zfs dat o kern/177971 fs [nfs] FreeBSD 9.1 nfs client dirlist problem w/ nfsv3, o kern/177966 fs [zfs] resilver completes but subsequent scrub reports o kern/177658 fs [ufs] FreeBSD panics after get full filesystem with uf o kern/177536 fs [zfs] zfs livelock (deadlock) with high write-to-disk o kern/177445 fs [hast] HAST panic o kern/177240 fs [zfs] zpool import failed with state UNAVAIL but all d o kern/176978 fs [zfs] [panic] zfs send -D causes "panic: System call i o kern/176857 fs [softupdates] [panic] 9.1-RELEASE/amd64/GENERIC panic o bin/176253 fs zpool(8): zfs pool indentation is misleading/wrong o kern/176141 fs [zfs] sharesmb=on makes errors for sharenfs, and still o kern/175950 fs [zfs] Possible deadlock in zfs after long uptime o kern/175897 fs [zfs] operations on readonly zpool hang o kern/175179 fs [zfs] ZFS may attach wrong device on move o kern/175071 fs [ufs] [panic] softdep_deallocate_dependencies: unrecov o kern/174372 fs [zfs] Pagefault appears to be related to ZFS o kern/174315 fs [zfs] chflags uchg not supported o kern/174310 fs [zfs] root point mounting broken on CURRENT with multi o kern/174279 fs [ufs] UFS2-SU+J journal and filesystem corruption o kern/174060 fs [ext2fs] Ext2FS system crashes (buffer overflow?) 
o kern/173830 fs [zfs] Brain-dead simple change to ZFS error descriptio o kern/173718 fs [zfs] phantom directory in zraid2 pool f kern/173657 fs [nfs] strange UID map with nfsuserd o kern/173363 fs [zfs] [panic] Panic on 'zpool replace' on readonly poo o kern/173136 fs [unionfs] mounting above the NFS read-only share panic o kern/172942 fs [smbfs] Unmounting a smb mount when the server became o kern/172348 fs [unionfs] umount -f of filesystem in use with readonly o kern/172334 fs [unionfs] unionfs permits recursive union mounts; caus o kern/171626 fs [tmpfs] tmpfs should be noisier when the requested siz o kern/171415 fs [zfs] zfs recv fails with "cannot receive incremental o kern/170945 fs [gpt] disk layout not portable between direct connect o bin/170778 fs [zfs] [panic] FreeBSD panics randomly o kern/170680 fs [nfs] Multiple NFS Client bug in the FreeBSD 7.4-RELEA o kern/170497 fs [xfs][panic] kernel will panic whenever I ls a mounted o kern/169945 fs [zfs] [panic] Kernel panic while importing zpool (afte o kern/169480 fs [zfs] ZFS stalls on heavy I/O o kern/169398 fs [zfs] Can't remove file with permanent error o kern/169339 fs panic while " : > /etc/123" o kern/169319 fs [zfs] zfs resilver can't complete o kern/168947 fs [nfs] [zfs] .zfs/snapshot directory is messed up when o kern/168942 fs [nfs] [hang] nfsd hangs after being restarted (not -HU o kern/168158 fs [zfs] incorrect parsing of sharenfs options in zfs (fs o kern/167979 fs [ufs] DIOCGDINFO ioctl does not work on 8.2 file syste o kern/167977 fs [smbfs] mount_smbfs results are differ when utf-8 or U o kern/167688 fs [fusefs] Incorrect signal handling with direct_io o kern/167685 fs [zfs] ZFS on USB drive prevents shutdown / reboot o kern/167612 fs [portalfs] The portal file system gets stuck inside po o kern/167272 fs [zfs] ZFS Disks reordering causes ZFS to pick the wron o kern/167260 fs [msdosfs] msdosfs disk was mounted the second time whe o kern/167109 fs [zfs] [panic] zfs diff kernel panic Fatal 
trap 9: gene o kern/167105 fs [nfs] mount_nfs can not handle source exports wiht mor o kern/167067 fs [zfs] [panic] ZFS panics the server o kern/167065 fs [zfs] boot fails when a spare is the boot disk o kern/167048 fs [nfs] [patch] RELEASE-9 crash when using ZFS+NULLFS+NF o kern/166912 fs [ufs] [panic] Panic after converting Softupdates to jo o kern/166851 fs [zfs] [hang] Copying directory from the mounted UFS di o kern/166477 fs [nfs] NFS data corruption. o kern/165950 fs [ffs] SU+J and fsck problem o kern/165521 fs [zfs] [hang] livelock on 1 Gig of RAM with zfs when 31 o kern/165392 fs Multiple mkdir/rmdir fails with errno 31 o kern/165087 fs [unionfs] lock violation in unionfs o kern/164472 fs [ufs] fsck -B panics on particular data inconsistency o kern/164370 fs [zfs] zfs destroy for snapshot fails on i386 and sparc o kern/164261 fs [nullfs] [patch] fix panic with NFS served from NULLFS o kern/164256 fs [zfs] device entry for volume is not created after zfs o kern/164184 fs [ufs] [panic] Kernel panic with ufs_makeinode o kern/163801 fs [md] [request] allow mfsBSD legacy installed in 'swap' o kern/163770 fs [zfs] [hang] LOR between zfs&syncer + vnlru leading to o kern/163501 fs [nfs] NFS exporting a dir and a subdir in that dir to o kern/162944 fs [coda] Coda file system module looks broken in 9.0 o kern/162860 fs [zfs] Cannot share ZFS filesystem to hosts with a hyph o kern/162751 fs [zfs] [panic] kernel panics during file operations o kern/162591 fs [nullfs] cross-filesystem nullfs does not work as expe o kern/162519 fs [zfs] "zpool import" relies on buggy realpath() behavi o kern/161968 fs [zfs] [hang] renaming snapshot with -r including a zvo o kern/161864 fs [ufs] removing journaling from UFS partition fails on o bin/161807 fs [patch] add option for explicitly specifying metadata o kern/161579 fs [smbfs] FreeBSD sometimes panics when an smb share is o kern/161533 fs [zfs] [panic] zfs receive panic: system ioctl returnin o kern/161438 fs [zfs] [panic] 
recursed on non-recursive spa_namespace_ o kern/161424 fs [nullfs] __getcwd() calls fail when used on nullfs mou o kern/161280 fs [zfs] Stack overflow in gptzfsboot o kern/161205 fs [nfs] [pfsync] [regression] [build] Bug report freebsd o kern/161169 fs [zfs] [panic] ZFS causes kernel panic in dbuf_dirty o kern/161112 fs [ufs] [lor] filesystem LOR in FreeBSD 9.0-BETA3 o kern/160893 fs [zfs] [panic] 9.0-BETA2 kernel panic f kern/160860 fs [ufs] Random UFS root filesystem corruption with SU+J o kern/160801 fs [zfs] zfsboot on 8.2-RELEASE fails to boot from root-o o kern/160790 fs [fusefs] [panic] VPUTX: negative ref count with FUSE o kern/160777 fs [zfs] [hang] RAID-Z3 causes fatal hang upon scrub/impo o kern/160706 fs [zfs] zfs bootloader fails when a non-root vdev exists o kern/160591 fs [zfs] Fail to boot on zfs root with degraded raidz2 [r o kern/160410 fs [smbfs] [hang] smbfs hangs when transferring large fil o kern/160283 fs [zfs] [patch] 'zfs list' does abort in make_dataset_ha o kern/159930 fs [ufs] [panic] kernel core o kern/159402 fs [zfs][loader] symlinks cause I/O errors o kern/159357 fs [zfs] ZFS MAXNAMELEN macro has confusing name (off-by- o kern/159356 fs [zfs] [patch] ZFS NAME_ERR_DISKLIKE check is Solaris-s o kern/159351 fs [nfs] [patch] - divide by zero in mountnfs() o kern/159251 fs [zfs] [request]: add FLETCHER4 as DEDUP hash option o kern/159077 fs [zfs] Can't cd .. with latest zfs version o kern/159048 fs [smbfs] smb mount corrupts large files o kern/159045 fs [zfs] [hang] ZFS scrub freezes system o kern/158839 fs [zfs] ZFS Bootloader Fails if there is a Dead Disk o kern/158802 fs amd(8) ICMP storm and unkillable process. 
o kern/158231 fs [nullfs] panic on unmounting nullfs mounted over ufs o f kern/157929 fs [nfs] NFS slow read o kern/157399 fs [zfs] trouble with: mdconfig force delete && zfs strip o kern/157179 fs [zfs] zfs/dbuf.c: panic: solaris assert: arc_buf_remov o kern/156797 fs [zfs] [panic] Double panic with FreeBSD 9-CURRENT and o kern/156781 fs [zfs] zfs is losing the snapshot directory, p kern/156545 fs [ufs] mv could break UFS on SMP systems o kern/156193 fs [ufs] [hang] UFS snapshot hangs && deadlocks processes o kern/156039 fs [nullfs] [unionfs] nullfs + unionfs do not compose, re o kern/155615 fs [zfs] zfs v28 broken on sparc64 -current o kern/155587 fs [zfs] [panic] kernel panic with zfs p kern/155411 fs [regression] [8.2-release] [tmpfs]: mount: tmpfs : No o kern/155199 fs [ext2fs] ext3fs mounted as ext2fs gives I/O errors o bin/155104 fs [zfs][patch] use /dev prefix by default when importing o kern/154930 fs [zfs] cannot delete/unlink file from full volume -> EN o kern/154828 fs [msdosfs] Unable to create directories on external USB o kern/154491 fs [smbfs] smb_co_lock: recursive lock for object 1 p kern/154228 fs [md] md getting stuck in wdrain state o kern/153996 fs [zfs] zfs root mount error while kernel is not located o kern/153753 fs [zfs] ZFS v15 - grammatical error when attempting to u o kern/153716 fs [zfs] zpool scrub time remaining is incorrect o kern/153695 fs [patch] [zfs] Booting from zpool created on 4k-sector o kern/153680 fs [xfs] 8.1 failing to mount XFS partitions o kern/153418 fs [zfs] [panic] Kernel Panic occurred writing to zfs vol o kern/153351 fs [zfs] locking directories/files in ZFS o bin/153258 fs [patch][zfs] creating ZVOLs requires `refreservation' s kern/153173 fs [zfs] booting from a gzip-compressed dataset doesn't w o bin/153142 fs [zfs] ls -l outputs `ls: ./.zfs: Operation not support o kern/153126 fs [zfs] vdev failure, zpool=peegel type=vdev.too_small o kern/152022 fs [nfs] nfs service hangs with linux client [regression] o 
kern/151942 fs [zfs] panic during ls(1) zfs snapshot directory o kern/151905 fs [zfs] page fault under load in /sbin/zfs o bin/151713 fs [patch] Bug in growfs(8) with respect to 32-bit overfl o kern/151648 fs [zfs] disk wait bug o kern/151629 fs [fs] [patch] Skip empty directory entries during name o kern/151330 fs [zfs] will unshare all zfs filesystem after execute a o kern/151326 fs [nfs] nfs exports fail if netgroups contain duplicate o kern/151251 fs [ufs] Can not create files on filesystem with heavy us o kern/151226 fs [zfs] can't delete zfs snapshot o kern/150503 fs [zfs] ZFS disks are UNAVAIL and corrupted after reboot o kern/150501 fs [zfs] ZFS vdev failure vdev.bad_label on amd64 o kern/150390 fs [zfs] zfs deadlock when arcmsr reports drive faulted o kern/150336 fs [nfs] mountd/nfsd became confused; refused to reload n o kern/149208 fs mksnap_ffs(8) hang/deadlock o kern/149173 fs [patch] [zfs] make OpenSolaris installa o kern/149015 fs [zfs] [patch] misc fixes for ZFS code to build on Glib o kern/149014 fs [zfs] [patch] declarations in ZFS libraries/utilities o kern/149013 fs [zfs] [patch] make ZFS makefiles use the libraries fro o kern/148504 fs [zfs] ZFS' zpool does not allow replacing drives to be o kern/148490 fs [zfs]: zpool attach - resilver bidirectionally, and re o kern/148368 fs [zfs] ZFS hanging forever on 8.1-PRERELEASE o kern/148138 fs [zfs] zfs raidz pool commands freeze o kern/147903 fs [zfs] [panic] Kernel panics on faulty zfs device o kern/147881 fs [zfs] [patch] ZFS "sharenfs" doesn't allow different " o kern/147420 fs [ufs] [panic] ufs_dirbad, nullfs, jail panic (corrupt o kern/146941 fs [zfs] [panic] Kernel Double Fault - Happens constantly o kern/146786 fs [zfs] zpool import hangs with checksum errors o kern/146708 fs [ufs] [panic] Kernel panic in softdep_disk_write_compl o kern/146528 fs [zfs] Severe memory leak in ZFS on i386 o kern/146502 fs [nfs] FreeBSD 8 NFS Client Connection to Server s kern/145712 fs [zfs] cannot offline two 
drives in a raidz2 configurat o kern/145411 fs [xfs] [panic] Kernel panics shortly after mounting an f bin/145309 fs bsdlabel: Editing disk label invalidates the whole dev o kern/145272 fs [zfs] [panic] Panic during boot when accessing zfs on o kern/145246 fs [ufs] dirhash in 7.3 gratuitously frees hashes when it o kern/145238 fs [zfs] [panic] kernel panic on zpool clear tank o kern/145229 fs [zfs] Vast differences in ZFS ARC behavior between 8.0 o kern/145189 fs [nfs] nfsd performs abysmally under load o kern/144929 fs [ufs] [lor] vfs_bio.c + ufs_dirhash.c p kern/144447 fs [zfs] sharenfs fsunshare() & fsshare_main() non functi o kern/144416 fs [panic] Kernel panic on online filesystem optimization s kern/144415 fs [zfs] [panic] kernel panics on boot after zfs crash o kern/144234 fs [zfs] Cannot boot machine with recent gptzfsboot code o kern/143825 fs [nfs] [panic] Kernel panic on NFS client o bin/143572 fs [zfs] zpool(1): [patch] The verbose output from iostat o kern/143212 fs [nfs] NFSv4 client strange work ... o kern/143184 fs [zfs] [lor] zfs/bufwait LOR o kern/142878 fs [zfs] [vfs] lock order reversal o kern/142597 fs [ext2fs] ext2fs does not work on filesystems with real o kern/142489 fs [zfs] [lor] allproc/zfs LOR o kern/142466 fs Update 7.2 -> 8.0 on Raid 1 ends with screwed raid [re o kern/142306 fs [zfs] [panic] ZFS drive (from OSX Leopard) causes two o kern/142068 fs [ufs] BSD labels are got deleted spontaneously o kern/141897 fs [msdosfs] [panic] Kernel panic. 
msdofs: file name leng o kern/141463 fs [nfs] [panic] Frequent kernel panics after upgrade fro o kern/141305 fs [zfs] FreeBSD ZFS+sendfile severe performance issues ( o kern/141091 fs [patch] [nullfs] fix panics with DIAGNOSTIC enabled o kern/141086 fs [nfs] [panic] panic("nfs: bioread, not dir") on FreeBS o kern/141010 fs [zfs] "zfs scrub" fails when backed by files in UFS2 o kern/140888 fs [zfs] boot fail from zfs root while the pool resilveri o kern/140661 fs [zfs] [patch] /boot/loader fails to work on a GPT/ZFS- o kern/140640 fs [zfs] snapshot crash o kern/140068 fs [smbfs] [patch] smbfs does not allow semicolon in file o kern/139725 fs [zfs] zdb(1) dumps core on i386 when examining zpool c o kern/139715 fs [zfs] vfs.numvnodes leak on busy zfs p bin/139651 fs [nfs] mount(8): read-only remount of NFS volume does n o kern/139407 fs [smbfs] [panic] smb mount causes system crash if remot o kern/138662 fs [panic] ffs_blkfree: freeing free block o kern/138421 fs [ufs] [patch] remove UFS label limitations o kern/138202 fs mount_msdosfs(1) see only 2Gb o kern/136968 fs [ufs] [lor] ufs/bufwait/ufs (open) o kern/136945 fs [ufs] [lor] filedesc structure/ufs (poll) o kern/136944 fs [ffs] [lor] bufwait/snaplk (fsync) o kern/136873 fs [ntfs] Missing directories/files on NTFS volume o kern/136865 fs [nfs] [patch] NFS exports atomic and on-the-fly atomic p kern/136470 fs [nfs] Cannot mount / in read-only, over NFS o kern/135546 fs [zfs] zfs.ko module doesn't ignore zpool.cache filenam o kern/135469 fs [ufs] [panic] kernel crash on md operation in ufs_dirb o kern/135050 fs [zfs] ZFS clears/hides disk errors on reboot o kern/134491 fs [zfs] Hot spares are rather cold... 
o kern/133676 fs [smbfs] [panic] umount -f'ing a vnode-based memory dis p kern/133174 fs [msdosfs] [patch] msdosfs must support multibyte inter o kern/132960 fs [ufs] [panic] panic:ffs_blkfree: freeing free frag o kern/132397 fs reboot causes filesystem corruption (failure to sync b o kern/132331 fs [ufs] [lor] LOR ufs and syncer o kern/132237 fs [msdosfs] msdosfs has problems to read MSDOS Floppy o kern/132145 fs [panic] File System Hard Crashes o kern/131441 fs [unionfs] [nullfs] unionfs and/or nullfs not combineab o kern/131360 fs [nfs] poor scaling behavior of the NFS server under lo o kern/131342 fs [nfs] mounting/unmounting of disks causes NFS to fail o bin/131341 fs makefs: error "Bad file descriptor" on the mount poin o kern/130920 fs [msdosfs] cp(1) takes 100% CPU time while copying file o kern/130210 fs [nullfs] Error by check nullfs o kern/129760 fs [nfs] after 'umount -f' of a stale NFS share FreeBSD l o kern/129488 fs [smbfs] Kernel "bug" when using smbfs in smbfs_smb.c: o kern/129231 fs [ufs] [patch] New UFS mount (norandom) option - mostly o kern/129152 fs [panic] non-userfriendly panic when trying to mount(8) o kern/127787 fs [lor] [ufs] Three LORs: vfslock/devfs/vfslock, ufs/vfs o bin/127270 fs fsck_msdosfs(8) may crash if BytesPerSec is zero o kern/127029 fs [panic] mount(8): trying to mount a write protected zi o kern/126287 fs [ufs] [panic] Kernel panics while mounting an UFS file o kern/125895 fs [ffs] [panic] kernel: panic: ffs_blkfree: freeing free s kern/125738 fs [zfs] [request] SHA256 acceleration in ZFS o kern/123939 fs [msdosfs] corrupts new files o kern/122380 fs [ffs] ffs_valloc:dup alloc (Soekris 4801/7.0/USB Flash o bin/122172 fs [fs]: amd(8) automount daemon dies on 6.3-STABLE i386, o bin/121898 fs [nullfs] pwd(1)/getcwd(2) fails with Permission denied o bin/121072 fs [smbfs] mount_smbfs(8) cannot normally convert the cha o kern/120483 fs [ntfs] [patch] NTFS filesystem locking changes o kern/120482 fs [ntfs] [patch] Sync style 
changes between NetBSD and F o kern/118912 fs [2tb] disk sizing/geometry problem with large array o kern/118713 fs [minidump] [patch] Display media size required for a k o kern/118318 fs [nfs] NFS server hangs under special circumstances o bin/118249 fs [ufs] mv(1): moving a directory changes its mtime o kern/118126 fs [nfs] [patch] Poor NFS server write performance o kern/118107 fs [ntfs] [panic] Kernel panic when accessing a file at N o kern/117954 fs [ufs] dirhash on very large directories blocks the mac o bin/117315 fs [smbfs] mount_smbfs(8) and related options can't mount o kern/117158 fs [zfs] zpool scrub causes panic if geli vdevs detach on o bin/116980 fs [msdosfs] [patch] mount_msdosfs(8) resets some flags f o conf/116931 fs lack of fsck_cd9660 prevents mounting iso images with o kern/116583 fs [ffs] [hang] System freezes for short time when using o bin/115361 fs [zfs] mount(8) gets into a state where it won't set/un o kern/114955 fs [cd9660] [patch] [request] support for mask,dirmask,ui o kern/114847 fs [ntfs] [patch] [request] dirmask support for NTFS ala o kern/114676 fs [ufs] snapshot creation panics: snapacct_ufs2: bad blo o bin/114468 fs [patch] [request] add -d option to umount(8) to detach o kern/113852 fs [smbfs] smbfs does not properly implement DFS referral o bin/113838 fs [patch] [request] mount(8): add support for relative p o bin/113049 fs [patch] [request] make quot(8) use getopt(3) and show o kern/112658 fs [smbfs] [patch] smbfs and caching problems (resolves b o kern/111843 fs [msdosfs] Long Names of files are incorrectly created o kern/111782 fs [ufs] dump(8) fails horribly for large filesystems s bin/111146 fs [2tb] fsck(8) fails on 6T filesystem o bin/107829 fs [2TB] fdisk(8): invalid boundary checking in fdisk / w o kern/106107 fs [ufs] left-over fsck_snapshot after unfinished backgro o kern/104406 fs [ufs] Processes get stuck in "ufs" state under persist o kern/104133 fs [ext2fs] EXT2FS module corrupts EXT2/3 filesystems o kern/103035 
fs [ntfs] Directories in NTFS mounted disc images appear o kern/101324 fs [smbfs] smbfs sometimes not case sensitive when it's s o kern/99290 fs [ntfs] mount_ntfs ignorant of cluster sizes s bin/97498 fs [request] newfs(8) has no option to clear the first 12 o kern/97377 fs [ntfs] [patch] syntax cleanup for ntfs_ihash.c o kern/95222 fs [cd9660] File sections on ISO9660 level 3 CDs ignored o kern/94849 fs [ufs] rename on UFS filesystem is not atomic o bin/94810 fs fsck(8) incorrectly reports 'file system marked clean' o kern/94769 fs [ufs] Multiple file deletions on multi-snapshotted fil o kern/94733 fs [smbfs] smbfs may cause double unlock o kern/93942 fs [vfs] [patch] panic: ufs_dirbad: bad dir (patch from D o kern/92272 fs [ffs] [hang] Filling a filesystem while creating a sna o kern/91134 fs [smbfs] [patch] Preserve access and modification time a kern/90815 fs [smbfs] [patch] SMBFS with character conversions somet o kern/88657 fs [smbfs] windows client hang when browsing a samba shar o kern/88555 fs [panic] ffs_blkfree: freeing free frag on AMD 64 o bin/87966 fs [patch] newfs(8): introduce -A flag for newfs to enabl o kern/87859 fs [smbfs] System reboot while umount smbfs. o kern/86587 fs [msdosfs] rm -r /PATH fails with lots of small files o bin/85494 fs fsck_ffs: unchecked use of cg_inosused macro etc. 
o kern/80088 fs [smbfs] Incorrect file time setting on NTFS mounted vi o bin/74779 fs Background-fsck checks one filesystem twice and omits o kern/73484 fs [ntfs] Kernel panic when doing `ls` from the client si o bin/73019 fs [ufs] fsck_ufs(8) cannot alloc 607016868 bytes for ino o kern/71774 fs [ntfs] NTFS cannot "see" files on a WinXP filesystem o bin/70600 fs fsck(8) throws files away when it can't grow lost+foun o kern/68978 fs [panic] [ufs] crashes with failing hard disk, loose po o kern/65920 fs [nwfs] Mounted Netware filesystem behaves strange o kern/65901 fs [smbfs] [patch] smbfs fails fsx write/truncate-down/tr o kern/61503 fs [smbfs] mount_smbfs does not work as non-root o kern/55617 fs [smbfs] Accessing an nsmb-mounted drive via a smb expo o kern/51685 fs [hang] Unbounded inode allocation causes kernel to loc o kern/36566 fs [smbfs] System reboot with dead smb mount and umount o bin/27687 fs fsck(8) wrapper is not properly passing options to fsc o kern/18874 fs [2TB] 32bit NFS servers export wrong negative values t 308 problems total. 
From owner-freebsd-fs@FreeBSD.ORG Mon Apr 29 11:08:10 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 76056744; Mon, 29 Apr 2013 11:08:10 +0000 (UTC) (envelope-from prvs=1831672f64=killing@multiplay.co.uk) Received: from mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) by mx1.freebsd.org (Postfix) with ESMTP id E9CF91B1C; Mon, 29 Apr 2013 11:08:09 +0000 (UTC) Received: from r2d2 ([46.65.172.4]) by mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) (MDaemon PRO v10.0.4) with ESMTP id md50003535328.msg; Mon, 29 Apr 2013 12:08:07 +0100 X-Spam-Processed: mail1.multiplay.co.uk, Mon, 29 Apr 2013 12:08:07 +0100 (not processed: message from valid local sender) X-MDDKIM-Result: neutral (mail1.multiplay.co.uk) X-MDRemoteIP: 46.65.172.4 X-Return-Path: prvs=1831672f64=killing@multiplay.co.uk X-Envelope-From: killing@multiplay.co.uk Message-ID: <6240B204CEB04158968B0C7AAAA98248@multiplay.co.uk> From: "Steven Hartland" To: "Jeremy Chadwick" References: <60316751643743738AB83DABC6A5934B@multiplay.co.uk> <20130429105143.GA1492@icarus.home.lan> <20130429105916.GA1584@icarus.home.lan> Subject: Re: seeing data corruption with zfs trim functionality Date: Mon, 29 Apr 2013 12:08:39 +0100 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="iso-8859-1"; reply-type=original Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157 Cc: freebsd-fs@freebsd.org, Alexander Motin X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 29 Apr 2013 11:08:10 -0000 ----- Original Message ----- From: "Jeremy Chadwick" To: "Steven Hartland" Cc: ; "Alexander Motin" Sent: 
Monday, April 29, 2013 11:59 AM Subject: Re: seeing data corruption with zfs trim functionality > On Mon, Apr 29, 2013 at 03:51:43AM -0700, Jeremy Chadwick wrote: >> On Mon, Apr 29, 2013 at 09:22:06AM +0100, Steven Hartland wrote: >> > ----- Original Message ----- From: "Ajit Jain" >> > >> > >> > >I am running zfs with trim functionality (ported from head). Seeing data >> > >corruption when running iotest* with multiple threads (never saw data >> > >corruption with a single thread). >> > > >> > >The patches merged to add trim support are as follows: >> > >1. 240868 (zfs trim patch) >> > >2. 230053 and 245252 (block device driver trim support) >> > >3. 239655 (fix an issue in patch 230053) >> > > >> > >I am "NOT" seeing data corruption in the following cases: >> > >1. Running iotest with a single thread (TRIM is enabled in the entire I/O stack). >> > >2. TRIM is enabled at the zfs layer but disabled at the driver layer, i.e. the delete >> > >method is set to NONE (even with multiple threads). >> > > >> > > >> > >Since patch 240868 alone was not working, I pulled in the additional zfs trim >> > >patches 244155, 244187, 244188, 248572 (however I am not using a separate >> > >L2ARC device), 248573, 248574, 248575 and 248576. Still I am seeing the >> > >same issue. >> > > >> > >Issue: After some time running with multiple threads, the write system call >> > >sometimes returns EIO or error 122 (checksum error). >> > > >> > >I looked at the GEOM code a bit and I think it already has trim (DELETE) >> > >command support. Still I am doubtful whether I have pulled in all required >> > >patches in the entire I/O stack. >> > > >> > >I am using an LSI SAS HBA card to connect to the SSD; the firmware seems to >> > >claim support for TRIM. >> > > >> > >*iotest: non-standard FreeBSD utility, which creates files and does >> > >I/O on the files and can be invoked in single/multithread mode to do the >> > >I/O. >> > >> > What version are you porting the changes to? >> > >> > What SSD are you using?
>> > >> > What LSI controller are you using? >> >> I'd also like to see "zpool status" (for every pool that involves this >> SSD) and "gpart show" against the disk itself. > > Also, the controller involved is an mps(4) controller, which to the > underlying subsystem is SCSI. > > TRIM (as it's called; the actual name per ATA standard is DATA SET > MANAGEMENT) is purely an ATA specification thing. > > The SCSI equivalent is called UNMAP, or alternately WRITE SAME. > > (This is not the case here, but just mentioning it: even in the cases of > SCSI controllers that have SATA disks attached, the OS ends up > submitting UNMAP/WRITE SAME, which the controller has to convert into > the relevant ATA DATA SET MANAGEMENT command. If the controller > firmware screws this up, there's not much we can do about it.) > > References for FreeBSD: > > http://lists.freebsd.org/pipermail/freebsd-current/2011-December/030714.html > > PLEASE READ THE LAST PARAGRAPH OF THAT POST. > > This brings into question whether the relevant subsystems (ranging > from mps(4) to GEOM(4) to CAM(4)) actually have working UNMAP/WRITE SAME, > or if the controller itself is doing something stupid with them. > > I'm CC'ing mav@ for what should be obvious reasons. ZFS "TRIM" just uses BIO_DELETE, which is translated to the relevant supported delete_method. For SATA disks this can be UNMAP, TRIM, etc. even when connected to a SCSI controller; for SAS disks this can be UNMAP, WS16, etc. Regards Steve ================================================ This e.mail is private and confidential between Multiplay (UK) Ltd. and the person or entity to whom it is addressed. In the event of misdirection, the recipient is prohibited from using, copying, printing or otherwise disseminating it or any information contained in it. In the event of misdirection, illegible or incomplete transmission please telephone +44 845 868 1337 or return the E.mail to postmaster@multiplay.co.uk.
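[Editor's note: to make the delete_method mapping described above concrete, on FreeBSD builds carrying this CAM work the method da(4) negotiated for a disk is visible, and overridable, through a per-unit sysctl. The kern.cam.<driver>.<unit>.delete_method OID layout used below is an assumption; verify the exact name against your source tree before relying on it.]

```shell
#!/bin/sh
# Sketch: build the sysctl OID under which CAM's da(4) driver exposes the
# BIO_DELETE translation it negotiated (e.g. UNMAP, WS16, ATA_TRIM, NONE).
# The kern.cam.<driver>.<unit>.delete_method layout is assumed here, not
# confirmed; check it on your FreeBSD version.
delete_method_oid() {
    drv=${1%%[0-9]*}      # "da0" -> "da"
    unit=${1#"$drv"}      # "da0" -> "0"
    echo "kern.cam.$drv.$unit.delete_method"
}

delete_method_oid da0     # prints: kern.cam.da.0.delete_method
```

On a live system one would then run `sysctl "$(delete_method_oid da0)"` to read the current method, or set it to NONE to disable driver-level deletes, mirroring the poster's test in which the corruption disappeared.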
From owner-freebsd-fs@FreeBSD.ORG Mon Apr 29 13:34:40 2013 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 69E236EE; Mon, 29 Apr 2013 13:34:40 +0000 (UTC) (envelope-from brde@optusnet.com.au) Received: from mail28.syd.optusnet.com.au (mail28.syd.optusnet.com.au [211.29.133.169]) by mx1.freebsd.org (Postfix) with ESMTP id 079D01560; Mon, 29 Apr 2013 13:34:39 +0000 (UTC) Received: from c211-30-173-106.carlnfd1.nsw.optusnet.com.au (c211-30-173-106.carlnfd1.nsw.optusnet.com.au [211.30.173.106]) by mail28.syd.optusnet.com.au (8.13.1/8.13.1) with ESMTP id r3TDYUD3020661 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Mon, 29 Apr 2013 23:34:31 +1000 Date: Mon, 29 Apr 2013 23:34:30 +1000 (EST) From: Bruce Evans X-X-Sender: bde@besplex.bde.org To: "Kenneth D. Merry" Subject: Re: patches to add new stat(2) file flags In-Reply-To: <20130426221023.GA86767@nargothrond.kdm.org> Message-ID: <20130429231638.N1440@besplex.bde.org> References: <20130307000533.GA38950@nargothrond.kdm.org> <20130307222553.P981@besplex.bde.org> <20130308232155.GA47062@nargothrond.kdm.org> <20130310181127.D2309@besplex.bde.org> <20130409190838.GA60733@nargothrond.kdm.org> <20130418184951.GA18777@nargothrond.kdm.org> <20130419215624.L1262@besplex.bde.org> <20130426221023.GA86767@nargothrond.kdm.org> MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-Optus-CM-Score: 0 X-Optus-CM-Analysis: v=2.0 cv=Tre+H0rh c=1 sm=1 a=n2O7wv11oSwA:10 a=kj9zAlcOel0A:10 a=PO7r1zJSAAAA:8 a=JzwRw_2MAAAA:8 a=YOiZBDKP_E4A:10 a=wjh2EmcBLpGD4jLuB4EA:9 a=CjuIK1q_8ugA:10 a=TEtd8y5WR3g2ypngnwZWYw==:117 Cc: arch@freebsd.org, fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 29 Apr 2013 13:34:40 -0000 On Fri, 
26 Apr 2013, Kenneth D. Merry wrote: I haven't looked at this much. Just a quick reply since I will be away for a while. > On Fri, Apr 19, 2013 at 22:53:50 +1000, Bruce Evans wrote: >> On Thu, 18 Apr 2013, Kenneth D. Merry wrote: >> >>> On Tue, Apr 09, 2013 at 13:08:38 -0600, Kenneth D. Merry wrote: >>>> ... >>>> Okay, I think these issues should now be fixed. We now refuse to change >>>> attributes only on the root directory. And I updated deupdat() to do the >>>> same. >>>> >>>> When a directory is created or a file is added, the archive bit is not >>>> changed on the directory. Not sure if we need to do that or not. (Simply >>>> changing msdosfs_mkdir() to set ATTR_ARCHIVE was not enough to get the >>>> archive bit set on directory creation.) >>> >>> Bruce, any comment on this? >> >> I didn't get around to looking at it closely. Just had a quick look at >> the msdosfs parts. >> >> Apparently we are already doing the same as WinXP for ATTR_ARCHIVE on >> directories. Not the right thing, but: >> - don't set it on directory creation >> - don't set it on directory modification >> - allow setting and clearing it (with your changes). Further testing showed the same behaviour for ntfs under WinXP (you can manage all the attribute bits for directories, but they don't control anything for directories, at least using Cygwin utilities). About not setting the archive bit for modifications of directories in msdosfs: most settings of this bit are managed by the DETIMES() macro. It is set when the directory mtime is set (the denode is first marked for update of the mtime -- DE_UPDATE flag). But since modifications of directories don't change the mtime (we are bug for bug compatible with Win/DOS here), this never sets the archive bit for directories. The mtime can be changed for directories using utimes() in my version but not in -current, and using some Win/DOS syscall.
I'm setting the archive bit for this, but will change to be bug for bug compatible with Win/DOS by not setting it. Then only chflags will set it for directories. >> @ *** src/lib/libc/sys/chflags.2.orig >> @ --- src/lib/libc/sys/chflags.2 >> @ *************** >> @ *** 112,137 **** >> @ ... >> @ --- 112,170 ---- >> @ ... >> @ + .It Dv UF_IMMUTABLE >> @ + The file may not be changed. >> @ + Filesystems may use this flag to maintain compatibility with the DOS, Windows >> @ + and CIFS FILE_ATTRIBUTE_READONLY attribute. >> >> msdosfs doesn't use this yet. It uses ATTR_READONLY, and doesn't map this >> to or from UF_IMMUTABLE. I think I want ATTR_READONLY to be a flag and >> not affect the file permissions (just like immutable flags normally don't >> affect the file permissions). > > Okay, done. The permissions are now always 755, and writeability is > controlled by ATTR_READONLY. Should be 755 by default, but there is a mount option to change this. >> Does CIFS FILE_ATTRIBUTE_READONLY have exactly the same semantics as >> IMMUTABLE? That is, does it prevent all operations on the file and the >> ... > Okay. I added a new flag, UF_READONLY, that maps to ATTR_READONLY directly > instead of using an immutable flag. > ... > The other outstanding issue is the suggestion by Gordon Ross on the Illumos > developers list to make ZFS not enforce the readonly bit. It looks like it > has not yet gone into Illumos. We may not want to make the change in > FreeBSD since it hasn't gone in upstream yet. This shows that we want to not enforce the readonly bit (or other flags) even for msdosfs. msdosfs is a good place to test changing the policy since there aren't many critical file systems using it.
Bruce

From owner-freebsd-fs@FreeBSD.ORG Mon Apr 29 16:01:55 2013
From: Freddie Cash <fjwcash@gmail.com>
To: Daniel Kalchev
Cc: FreeBSD Filesystems <freebsd-fs@freebsd.org>, Andriy Gapon
Date: Mon, 29 Apr 2013 09:01:54 -0700
Subject: Re: Strange slowdown when cache devices enabled in ZFS
The following settings in /etc/sysctl.conf prevent the "stalls" completely, even when the L2ARC devices are 100% full and all RAM is wired into the ARC. Been running without issues for 5 days now:

vfs.zfs.l2arc_norw=0                 # Default is 1
vfs.zfs.l2arc_feed_again=0           # Default is 1
vfs.zfs.l2arc_noprefetch=0           # Default is 0
vfs.zfs.l2arc_feed_min_ms=1000       # Default is 200
vfs.zfs.l2arc_write_boost=320000000  # Default is 8 MBps
vfs.zfs.l2arc_write_max=160000000    # Default is 8 MBps

With these settings, I'm also able to expand the ARC to use the full 128 GB of RAM in the biggest box, and to use both L2ARC devices (60 GB in total). And, can set primarycache and secondarycache to all (the default) instead of just metadata.

The only two settings in sysctl.conf that I've changed since the "stalls" began are:

vfs.zfs.l2arc_norw
vfs.zfs.l2arc_feed_again

The other settings were already set when the boxes went live a few months back.

I've run the dtrace hotkernel script from -HEAD on each of the 9-STABLE boxes. Not really sure how to interpret the results. I ran the script (on the receiving box) while doing a full zfs send/recv with the norw set to 1 (default) and then again with it set to 0. Here's the top 20 entries from each run.
There doesn't seem to be much of a difference in the output:

==> hotkernel.norw0 <==
kernel`_sx_try_xlock            48996   0.1%
kernel`bzero                    50683   0.1%
kernel`_mtx_lock_sleep          71089   0.1%
kernel`hpet_get_timecount       81257   0.2%
kernel`atomic_add_long          97927   0.2%
kernel`_sx_xunlock             122009   0.2%
kernel`_sx_xlock               122743   0.2%
zfs.ko`l2arc_write_eligible    134717   0.3%
kernel`bcopy                   157756   0.3%
zfs.ko`buf_hash                184670   0.4%
zfs.ko`l2arc_feed_thread       295029   0.6%
zfs.ko`list_next               309683   0.6%
kernel`sched_idletd            316410   0.6%
kernel`spinlock_exit           337550   0.7%
zfs.ko`SHA256_Transform        427619   0.9%
zfs.ko`lzjb_compress           500649   1.0%
kernel`_sx_xlock_hard          537433   1.1%
kernel`cpu_idle_mwait         3214405   6.5%
kernel`acpi_cpu_c1           41039557  83.2%

==> hotkernel.norw1 <==
kernel`copyin                   49078   0.2%
kernel`_mtx_lock_sleep          54643   0.2%
kernel`_rw_wlock_hard           60637   0.2%
kernel`hpet_get_timecount       64973   0.2%
kernel`atomic_add_long          83166   0.3%
zfs.ko`l2arc_write_eligible    104574   0.3%
kernel`_sx_xunlock             108864   0.4%
kernel`_sx_xlock               112804   0.4%
zfs.ko`buf_hash                151566   0.5%
kernel`bcopy                   161269   0.5%
zfs.ko`l2arc_feed_thread       240827   0.8%
zfs.ko`list_next               252949   0.8%
kernel`spinlock_exit           361583   1.2%
kernel`sched_idletd            407961   1.3%
zfs.ko`SHA256_Transform        546927   1.8%
kernel`_sx_xlock_hard          629281   2.0%
zfs.ko`lzjb_compress           634820   2.1%
kernel`cpu_idle_mwait         3359336  10.9%
kernel`acpi_cpu_c1           22177795  71.9%

--
Freddie Cash
fjwcash@gmail.com

From owner-freebsd-fs@FreeBSD.ORG Mon Apr 29 16:59:33 2013
From: "Marc G. Fournier" <scrappy@hub.org>
To: Rick Macklem
Cc: freebsd-fs@freebsd.org
Date: Mon, 29 Apr 2013 09:59:25 -0700
Subject: Re: Initial NFS Test: Linux vs FreeBSD (769% slower)

On 2013-04-26, at 16:56 , Rick Macklem wrote:

> If you didn't unmount/remount between writing jboss to the server and
> timing the startup of it, please try it again after doing a dismount/mount.
> (Doing the dismount/mount on the Linux client resulted in the same # of
> reads as FreeBSD for a quick test I did, instead of none without the
> dismount/remount.)

'k, this one was tried on Friday, and even a full server reboot didn't make any difference in performance, whether the first run or subsequent ones … it's just plain fast …

> A few other things to do:
> - Time multiple startups after doing a mount, to see if it only the
> first one that is slow.
Tried … all are equally slow … best time so far has been ~230s … yup, after several start ups, it's pretty consistently around the 240s mark …

> - Capture the RPC counts for both clients by doing "nfsstat -c" before
> and after the startup.

FreeBSD:

Before:

Client Info:
Rpc Counts:
  Getattr   Setattr    Lookup  Readlink      Read     Write    Create    Remove
  2745853    821481    973901        18   2230947   2098303    160726      4954
   Rename      Link   Symlink     Mkdir     Rmdir   Readdir  RdirPlus    Access
     1862         0         0     14724       950     16272         0    329756
    Mknod    Fsstat    Fsinfo  PathConf    Commit
       12     30873         5         0         0
Rpc Info:
 TimedOut   Invalid X Replies   Retries  Requests
        0         0         0         0   9430761
Cache Info:
Attr Hits    Misses Lkup Hits    Misses BioR Hits    Misses BioW Hits    Misses
 26322016   2745853  20537972    973869   2373488   2225801   2618800   2097243
BioRLHits    Misses BioD Hits    Misses DirE Hits    Misses Accs Hits    Misses
     1262        18     46863     15678     29941         0  22513185    329759

After:

Client Info:
Rpc Counts:
  Getattr   Setattr    Lookup  Readlink      Read     Write    Create    Remove
  2745919    821481    973912        18   2230947   2098303    160726      4954
   Rename      Link   Symlink     Mkdir     Rmdir   Readdir  RdirPlus    Access
     1862         0         0     14724       950     16272         0    329767
    Mknod    Fsstat    Fsinfo  PathConf    Commit
       12     30873         5         0         0
Rpc Info:
 TimedOut   Invalid X Replies   Retries  Requests
        0         0         0         0   9430849
Cache Info:
Attr Hits    Misses Lkup Hits    Misses BioR Hits    Misses BioW Hits    Misses
 26323022   2745919  20538207    973880   2374208   2225801   2618800   2097243
BioRLHits    Misses BioD Hits    Misses DirE Hits    Misses Accs Hits    Misses
     1262        18     46863     15678     29941         0  22513489    329770

Okay, if I'm reading the above right … there doesn't look to be *a lot* of difference between the Before n After … it doesn't look like it's doing a whole lot of NFS ops … am I reading wrong?

> If the above doesn't give you any good hints w.r.t. why it is slow,
> you can capture packets during the startup for both clients and look
> at them in wireshark, to try and figure out what the difference between
> the Linux and FreeBSD clients are for this case.
If the above nfsstat output indicates this is warranted, then please provide more information on what I should run …

From owner-freebsd-fs@FreeBSD.ORG Mon Apr 29 21:33:20 2013
Date: Mon, 29 Apr 2013 17:33:12 -0400 (EDT)
From: Rick Macklem <rmacklem@uoguelph.ca>
To: "Marc G. Fournier"
Cc: freebsd-fs@freebsd.org
Subject: Re: Initial NFS Test: Linux vs FreeBSD (769% slower)

Marc G.
Fournier wrote:
> On 2013-04-26, at 16:56 , Rick Macklem < rmacklem@uoguelph.ca > wrote:
>
> If you didn't unmount/remount between writing jboss to the server and
> timing the startup of it, please try it again after doing a
> dismount/mount.
> (Doing the dismount/mount on the Linux client resulted in the same #
> of reads as FreeBSD for a quick test I did, instead of none without the
> dismount/remount.)
>
> 'k, this one was tried on Friday, and even a full server reboot didn't
> make any difference in performance, whether the first run or
> subsequent ones … it's just plain fast …
>
> A few other things to do:
> - Time multiple startups after doing a mount, to see if it only the
> first one that is slow.
>
> Tried … all are equally slow … best time so far has been ~230s … yup,
> after several start ups, it's pretty consistently around the 240s mark …
>
> - Capture the RPC counts for both clients by doing "nfsstat -c" before
> and after the startup.
> FreeBSD:
>
> Before:
>
> Client Info:
> Rpc Counts:
> Getattr Setattr Lookup Readlink Read Write Create Remove
> 2745853 821481 973901 18 2230947 2098303 160726 4954
> Rename Link Symlink Mkdir Rmdir Readdir RdirPlus Access
> 1862 0 0 14724 950 16272 0 329756
> Mknod Fsstat Fsinfo PathConf Commit
> 12 30873 5 0 0
> Rpc Info:
> TimedOut Invalid X Replies Retries Requests
> 0 0 0 0 9430761
> Cache Info:
> Attr Hits Misses Lkup Hits Misses BioR Hits Misses BioW Hits Misses
> 26322016 2745853 20537972 973869 2373488 2225801 2618800 2097243
> BioRLHits Misses BioD Hits Misses DirE Hits Misses Accs Hits Misses
> 1262 18 46863 15678 29941 0 22513185 329759
>
> After:
>
> Client Info:
> Rpc Counts:
> Getattr Setattr Lookup Readlink Read Write Create Remove
> 2745919 821481 973912 18 2230947 2098303 160726 4954
> Rename Link Symlink Mkdir Rmdir Readdir RdirPlus Access
> 1862 0 0 14724 950 16272 0 329767
> Mknod Fsstat Fsinfo PathConf Commit
> 12 30873 5 0 0
> Rpc Info:
> TimedOut Invalid X Replies Retries Requests
> 0 0 0 0 9430849
> Cache Info:
> Attr Hits Misses Lkup Hits Misses BioR Hits Misses BioW Hits Misses
> 26323022 2745919 20538207 973880 2374208 2225801 2618800 2097243
> BioRLHits Misses BioD Hits Misses DirE Hits Misses Accs Hits Misses
> 1262 18 46863 15678 29941 0 22513489 329770
>
> Okay, if I'm reading the above right … there doesn't look to be *a lot*
> of difference between the Before n After … it doesn't look like it's
> doing a whole lot of NFS ops … am I reading wrong?

Yep. Taking the difference between before and after I see:
Getattr 66, Lookup 11, Access 11 for a total of 88 RPCs

That is "no load" on an NFS server.

> If the above doesn't give you any good hints w.r.t. why it is slow,
> you can capture packets during the startup for both clients and look
> at them in wireshark, to try and figure out what the difference
> between the Linux and FreeBSD clients are for this case.
>
> If the above nfsstat output indicates this is warranted, then please
> provide more information on what I should run …

Well, you can capture packets for the above. Run on the client:

# tcpdump -s 0 -w startup.pcap host

- start it before doing the startup and kill it after the startup has been completed. You can then look at startup.pcap in wireshark.

What would I be looking for? Strange stuff like TCP retries/reconnects or large time delays between requests and replies. If all you see are 88 RPCs with replies that arrive shortly after they are sent, then I have no idea why it is slow, but it isn't the NFS protocol. (Maybe "jboss" doesn't like some attribute returned when stat'ing a file and then does "who knows what" that takes a longgg time?)

If you want, you can email me startup.pcap as an attachment and I'll take a look, but wireshark is pretty good at spotting TCP retransmits, etc.

The only other thing I can suggest is taking the "soft,intr" options off your mount and see if that has any effect. Maybe some syscall is returning EINTR and confusing jboss?
Good luck with it, rick

From owner-freebsd-fs@FreeBSD.ORG Mon Apr 29 21:44:31 2013
Date: Mon, 29 Apr 2013 17:44:30 -0400 (EDT)
From: Rick Macklem <rmacklem@uoguelph.ca>
To: Jeremy Chadwick
Subject: Re: nfsv3 vs nfsv4 ? advantages of moving to v4?
Cc: freebsd-fs@freebsd.org, Olav Grønås Gjerde

Jeremy Chadwick wrote:
> On Sun, Apr 28, 2013 at 07:10:03PM +0200, Olav Grønås Gjerde wrote:
> > If you have three ZFS filesystems:
> > tank
> > tank/backup
> > tank/home
> >
> > And if you export /tank with nfsv3, you don't really export /tank/backup
> > and /tank/home.
> > You only export the folders, but not their content.
> > I think it has to do with the fact that you cannot export mounted
> > filesystems within one exported filesystem.
> >
> > With nfsv4 you will, with only one export of /tank, export all three,
> > including /tank/backup and /tank/home.
> >
> > This was an issue 18 months ago; I cannot confirm if it's still an issue.
>
> Maybe I'm still misunderstanding, but it sounds like what you want (for
> NFSv3) is the -alldirs option, e.g.:
>
> /tank -alldirs 10.0.0.20
>
> Which would allow 10.0.0.20 to mount /tank, /tank/backup, /tank/home,
> or whatever else under /tank, with NFSv3.

I think he was actually referring to what mounts the client has to do and not what the exports need to be. Assuming there are 3 different file systems:
- NFSv3 must do 3 mounts
- NFSv4 just needs to mount /tank

I don't see this as much of an issue for 3 file systems, but I suppose that it can become inconvenient to mount each one if there are 300 or 3000 file systems under tank.

rick

> --
> | Jeremy Chadwick jdc@koitsu.org |
> | UNIX Systems Administrator http://jdc.koitsu.org/ |
> | Mountain View, CA, US |
> | Making life hard for others since 1977.
PGP 4BD6C0CB |

From owner-freebsd-fs@FreeBSD.ORG Mon Apr 29 22:35:20 2013
From: "Marc G. Fournier" <scrappy@hub.org>
To: Rick Macklem
Cc: freebsd-fs@freebsd.org
Date: Mon, 29 Apr 2013 15:35:17 -0700
Subject: Re: Initial NFS Test: Linux vs FreeBSD (769% slower)

On 2013-04-29, at 14:33 , Rick Macklem wrote:
>> Okay, if I'm reading the above right … there doesn't look to be *a lot*
>> of difference between the Before n After … it doesn't look like it's
>> doing a whole lot of NFS ops … am I reading wrong?
>
> Yep.
> Taking the difference between before and after I see:
> Getattr 66, Lookup 11, Access 11 for a total of 88 RPCs
>
> That is "no load" on an NFS server.

That agrees then with what I got from the NetApp tech also … his expectation when we started looking into this (I looked at NetApp as being the problem first) was that he'd see high GetAttr calls, but we didn't …

Let me play with the other things next … Thanks ...

From owner-freebsd-fs@FreeBSD.ORG Mon Apr 29 23:45:11 2013
Subject: Re: Initial NFS Test: Linux vs FreeBSD (769% slower)
From: "Marc G.
Fournier" <scrappy@hub.org>
Date: Mon, 29 Apr 2013 16:45:05 -0700
To: Rick Macklem
Cc: freebsd-fs@freebsd.org

> If you want, you can email me startup.pcap as an attachment and I'll take
> a look, but wireshark is pretty good at spotting TCP retransmits, etc.

'k, at 4.x Gig in size, doubt your mail server will handle me sending this to you :) Even compressed, it was going over 400M … I can if you want it though …

Barring that, if you want to give me pointers as to what I should be looking for? I have Wireshark installed and the startup.pcap data loaded … the only thing that is jumping out at me is a bunch of lines where 'Length' is 32982 while most are 218 … highlighted in 'black background, red font' … for example:

i am doing a bzip2 compressed file right now, and will make it available via HTTP, if you are interested …

> The only other thing I can suggest is taking the "soft,intr" options off
> your mount and see if that has any effect. Maybe some syscall is returning
> EINTR and confusing jboss?
Tried removing soft,intr … no change, still around 240s …

From owner-freebsd-fs@FreeBSD.ORG Tue Apr 30 12:29:29 2013
Date: Tue, 30 Apr 2013 08:29:20 -0400 (EDT)
From: Rick Macklem <rmacklem@uoguelph.ca>
To: "Marc G.
Fournier"
Cc: freebsd-fs@freebsd.org
Subject: Re: Initial NFS Test: Linux vs FreeBSD (769% slower)

Marc G. Fournier wrote:
> If you want, you can email me startup.pcap as an attachment and I'll
> take a look, but wireshark is pretty good at spotting TCP retransmits, etc.
>
> 'k, at 4.x Gig in size, doubt your mail server will handle me sending
> this to you :) Even compressed, it was going over 400M … I can if you
> want it though …
>
> Barring that, if you want to give me pointers as to what I should be
> looking for? I have Wireshark installed and the startup.pcap data
> loaded … the only thing that is jumping out at me is a bunch of lines
> where 'Length' is 32982 while most are 218 … highlighted in 'black
> background, red font' … for example:

The big one is a write RPC and it will be a little more than 32768, if you've set wsize=32768.

This can't be a capture for the "nfsstat" numbers you emailed the last time. (For one thing, that one didn't have any write RPCs counted.) Try and get a capture for the case where there are few NFS RPCs. (Did you capture for the first time doing the startup after doing a mount vs do an "nfsstat" for a subsequent startup?)
Or, is this client doing something else on the network while the startup is happening?

I may take a look at it, to see if I can spot anything weird, but a capture when it only does 88 RPCs is going to be much easier to look at.

rick

> i am doing a bzip2 compressed file right now, and will make it
> available via HTTP, if you are interested …
>
> The only other thing I can suggest is taking the "soft,intr" options
> off your mount and see if that has any effect. Maybe some syscall is
> returning EINTR and confusing jboss?
>
> Tried removing soft,intr … no change, still around 240s …

From owner-freebsd-fs@FreeBSD.ORG Tue Apr 30 17:08:48 2013
Subject: Re: Initial NFS Test: Linux vs FreeBSD (769% slower)
From: "Marc G.
Fournier" <scrappy@hub.org>
Date: Tue, 30 Apr 2013 10:08:43 -0700
To: Rick Macklem
Cc: freebsd-fs@freebsd.org

On 2013-04-30, at 05:29 , Rick Macklem wrote:
> Marc G. Fournier wrote:
>> If you want, you can email me startup.pcap as an attachment and I'll
>> take a look, but wireshark is pretty good at spotting TCP retransmits, etc.
>>
>> 'k, at 4.x Gig in size, doubt your mail server will handle me sending
>> this to you :) Even compressed, it was going over 400M … I can if you
>> want it though …
>>
>> Barring that, if you want to give me pointers as to what I should be
>> looking for? I have Wireshark installed and the startup.pcap data
>> loaded … the only thing that is jumping out at me is a bunch of lines
>> where 'Length' is 32982 while most are 218 … highlighted in 'black
>> background, red font' … for example:
>
> The big one is a write RPC and it will be a little more than 32768, if
> you've set wsize=32768.
>
> This can't be a capture for the "nfsstat" numbers you emailed the last time.
> (For one thing, that one didn't have any write RPCs counted.) Try and get
> a capture for the case where there are few NFS RPCs. (Did you capture for
> the first time doing the startup after doing a mount vs do an "nfsstat"
> for a subsequent startup?)
> Or, is this client doing something else on the network while the
> startup is happening?
>
> I may take a look at it, to see if I can spot anything weird, but a capture
> when it only does 88 RPCs is going to be much easier to look at.

'k, here is what I just ran now … igb1 is the private IP network where the NFS mount is running from … host 192.168.1.5 is the host IP of the server that I'm using for testing … and the results again have WRITE calls in it …

note that when I run the tcpdump command, I'm in /tmp on the local file system, so I'm not writing that log to the NFS server … and there is nothing else running on the NFS mount, since all that is on it is /usr/local/jboss … the rest of /usr/local is on the local drives also … the WRITEs are to /usr/local/jboss/standalone/logs … so not sure why RPC Counts for Writes is showing 0 change …

I'm building a new bz2 file right now, but it looks pretty similar to the one I already sent you a URL for ...
root@server04:/tmp # nfsstat -c; tcpdump -i igb1 -s 0 -w startup.pcap host 192.168.1.5
Client Info:
Rpc Counts:
  Getattr   Setattr    Lookup  Readlink      Read     Write    Create    Remove
  2746536    821481    974263        18   2230948   2098303    160726      4954
   Rename      Link   Symlink     Mkdir     Rmdir   Readdir  RdirPlus    Access
     1862         0         0     14724       950     16272         0    330261
    Mknod    Fsstat    Fsinfo  PathConf    Commit
       12     30926         6         0         0
Rpc Info:
 TimedOut   Invalid X Replies   Retries  Requests
        0         0         0         0   9432366
Cache Info:
Attr Hits    Misses Lkup Hits    Misses BioR Hits    Misses BioW Hits    Misses
 26331524   2746535  20540473    974231   2380075   2225802   2618800   2097243
BioRLHits    Misses BioD Hits    Misses DirE Hits    Misses Accs Hits    Misses
     1262        18     46863     15678     29941         0  22516391    330264
tcpdump: listening on igb1, link-type EN10MB (Ethernet), capture size 65535 bytes
^C5454815 packets captured
5458144 packets received by filter
0 packets dropped by kernel
root@server04:/tmp # nfsstat -c
Client Info:
Rpc Counts:
  Getattr   Setattr    Lookup  Readlink      Read     Write    Create    Remove
  2746603    821481    974276        18   2230953   2098303    160726      4954
   Rename      Link   Symlink     Mkdir     Rmdir   Readdir  RdirPlus    Access
     1862         0         0     14724       950     16272         0    330275
    Mknod    Fsstat    Fsinfo  PathConf    Commit
       12     30926         6         0         0
Rpc Info:
 TimedOut   Invalid X Replies   Retries  Requests
        0         0         0         0   9432465
Cache Info:
Attr Hits    Misses Lkup Hits    Misses BioR Hits    Misses BioW Hits    Misses
 26332545   2746602  20540712    974244   2380801   2225807   2618800   2097243
BioRLHits    Misses BioD Hits    Misses DirE Hits    Misses Accs Hits    Misses
     1262        18     46863     15678     29941         0  22516700    330278

> rick
>
>> i am doing a bzip2 compressed file right now, and will make it
>> available via HTTP, if you are interested …
>>
>> The only other thing I can suggest is taking the "soft,intr" options
>> off your mount and see if that has any effect. Maybe some syscall is
>> returning EINTR and confusing jboss?
>
>> Tried removing soft,intr … no change, still around 240s …

From owner-freebsd-fs@FreeBSD.ORG Wed May 1 09:03:28 2013
Date: Wed, 1 May 2013 12:03:22 +0300
From: Mikolaj Golub
To: freebsd-fs@freebsd.org
Subject: bsnmpd(1) module for HAST
Message-ID: <20130501090320.GA5964@gmail.com>
Cc: Hartmut Brandt

Hi,

I would like to commit this bsnmpd module for HAST, if there are no
objections:

http://people.freebsd.org/~trociny/snmp_hast.1.patch

Example:

kopusha:~% snmpwalk -c geheim -v2c localhost:11111 BEGEMOT-HAST-MIB::begemotHast
BEGEMOT-HAST-MIB::hastConfigFile.0 = STRING: "/tmp/hast.conf.101"
BEGEMOT-HAST-MIB::hastResourceIndex.0 = INTEGER: 0
BEGEMOT-HAST-MIB::hastResourceName.0 = STRING: "test"
BEGEMOT-HAST-MIB::hastResourceRole.0 = INTEGER: primary(2)
BEGEMOT-HAST-MIB::hastResourceProvName.0 = STRING: "test"
BEGEMOT-HAST-MIB::hastResourceLocalPath.0 = STRING: "/dev/md101"
BEGEMOT-HAST-MIB::hastResourceExtentSize.0 = INTEGER: 2097152
BEGEMOT-HAST-MIB::hastResourceKeepDirty.0 = INTEGER: 64
BEGEMOT-HAST-MIB::hastResourceRemoteAddr.0 = STRING: "kopusha:7772"
BEGEMOT-HAST-MIB::hastResourceSourceAddr.0 = ""
BEGEMOT-HAST-MIB::hastResourceReplication.0 = INTEGER: memsync(1)
BEGEMOT-HAST-MIB::hastResourceStatus.0 = INTEGER: complete(0)
BEGEMOT-HAST-MIB::hastResourceDirty.0 = Counter64: 0
BEGEMOT-HAST-MIB::hastResourceReads.0 = Counter64: 101
BEGEMOT-HAST-MIB::hastResourceWrites.0 = Counter64: 554
BEGEMOT-HAST-MIB::hastResourceDeletes.0 = Counter64: 0
BEGEMOT-HAST-MIB::hastResourceFlushes.0 = Counter64: 0
BEGEMOT-HAST-MIB::hastResourceActivemapUpdates.0 = Counter64: 31
BEGEMOT-HAST-MIB::hastResourceReadErrors.0 = Counter64: 0
BEGEMOT-HAST-MIB::hastResourceWriteErrors.0 = Counter64: 0
BEGEMOT-HAST-MIB::hastResourceDeleteErrors.0 = Counter64: 0
BEGEMOT-HAST-MIB::hastResourceFlushErrors.0 = Counter64: 0

kopusha:~% snmpset -c geheim -v2c localhost:11111 BEGEMOT-HAST-MIB::hastResourceRole.0 = 1
BEGEMOT-HAST-MIB::hastResourceRole.0 = INTEGER: init(1)

--
Mikolaj Golub

From owner-freebsd-fs@FreeBSD.ORG Wed May 1 17:51:47 2013
Date: Wed, 1 May 2013 19:54:07 +0200
From: Pawel Jakub Dawidek
To: Mikolaj Golub
Subject: Re: bsnmpd(1) module for HAST
Message-ID: <20130501175407.GC1374@garage.freebsd.pl>
In-Reply-To: <20130501090320.GA5964@gmail.com>
Cc: freebsd-fs@freebsd.org, Hartmut Brandt

On Wed, May 01, 2013 at 12:03:22PM +0300, Mikolaj Golub wrote:
> Hi,
>
> I would like to commit this bsnmpd module for HAST, if there are no
> objections:
>
> http://people.freebsd.org/~trociny/snmp_hast.1.patch
>
> Example:
>
> kopusha:~% snmpwalk -c geheim -v2c localhost:11111 BEGEMOT-HAST-MIB::begemotHast
>
> kopusha:~% snmpset -c geheim -v2c localhost:11111 BEGEMOT-HAST-MIB::hastResourceRole.0 = 1
> BEGEMOT-HAST-MIB::hastResourceRole.0 = INTEGER: init(1)

LGTM:)

--
Pawel Jakub Dawidek                       http://www.wheelsystems.com
FreeBSD committer                         http://www.FreeBSD.org
Am I Evil? Yes, I Am!
http://mobter.com

From owner-freebsd-fs@FreeBSD.ORG Thu May 2 01:19:31 2013
Date: Wed, 1 May 2013 21:19:28 -0400 (EDT)
From: Rick Macklem
To: "Marc G. Fournier"
Message-ID: <1032981589.58481.1367457568966.JavaMail.root@erie.cs.uoguelph.ca>
In-Reply-To: <531F8BBE-1476-4591-BABD-EA6B230ADB44@hub.org>
Subject: Re: Initial NFS Test: Linux vs FreeBSD (769% slower)
Cc: freebsd-fs@freebsd.org

Marc G. Fournier wrote:
> On 2013-04-30, at 05:29 , Rick Macklem <rmacklem@uoguelph.ca> wrote:
>
> Marc G. Fournier wrote:
>
> If you want, you can email me startup.pcap as an attachment and I'll
> take a look, but wireshark is pretty good at spotting TCP retransmits, etc.
>
> 'k, at 4.x Gig in size, doubt your mail server will handle me sending
> this to you :) Even compressed, it was going over 400M … I can if you
> want it though …
>
> Barring that, if you want to give me pointers as to what I should be
> looking for? I have Wireshark installed / and the startup.pcap data
> loaded … the only thing that is jumping out at me is a bunch of lines
> where 'Length' is 32982 while most are 218 … highlighted in 'black
> background, red font' … for example:
>
> The big one is a write RPC and it will be a little more than 32768, if
> you've set wsize=32768.
>
> This can't be a capture for the "nfsstat" numbers you emailed the last
> time. (For one thing, that one didn't have any write RPCs counted.)
> Try and get a capture for the case where there are few NFS RPCs.
> (Did you capture for the first time doing the startup after doing a
> mount vs do an "nfsstat" for a subsequent startup?)
> Or, is this client doing something else on the network while the
> startup is happening?
>
> I may take a look at it, to see if I can spot anything weird, but a
> capture when it only does 88 RPCs is going to be much easier to look at.
>
> 'k, here is what I just ran now … igb1 is the private IP network where
> the NFS mount is running from … host 192.168.1.5 is the host IP of the
> server that I'm using for testing … and the results again have WRITE
> calls in it …
>
> note that when I run the tcpdump command, I'm in /tmp on the local
> file system, so I'm not writing that log to the NFS server … and there
> is nothing else running on the NFS mount, since all that is on it is
> /usr/local/jboss … the rest of /usr/local is on the local drives also
> … the WRITEs are to /usr/local/jboss/standalone/logs … so not sure why
> RPC Counts for Writes is showing 0 change …
>
> I'm building a new bz2 file right now, but it looks pretty similar to
> the one I already sent you a URL for ...
> root@server04:/tmp # nfsstat -c; tcpdump -i igb1 -s 0 -w startup.pcap host 192.168.1.5
> Client Info:
> Rpc Counts:
> Getattr Setattr Lookup Readlink Read Write Create Remove
> 2746536 821481 974263 18 2230948 2098303 160726 4954
> Rename Link Symlink Mkdir Rmdir Readdir RdirPlus Access
> 1862 0 0 14724 950 16272 0 330261
> Mknod Fsstat Fsinfo PathConf Commit
> 12 30926 6 0 0
> Rpc Info:
> TimedOut Invalid X Replies Retries Requests
> 0 0 0 0 9432366
> Cache Info:
> Attr Hits Misses Lkup Hits Misses BioR Hits Misses BioW Hits Misses
> 26331524 2746535 20540473 974231 2380075 2225802 2618800 2097243
> BioRLHits Misses BioD Hits Misses DirE Hits Misses Accs Hits Misses
> 1262 18 46863 15678 29941 0 22516391 330264
> tcpdump: listening on igb1, link-type EN10MB (Ethernet), capture size 65535 bytes
> ^C5454815 packets captured
> 5458144 packets received by filter
> 0 packets dropped by kernel
>
> root@server04:/tmp # nfsstat -c
> Client Info:
> Rpc Counts:
> Getattr Setattr Lookup Readlink Read Write Create Remove
> 2746603 821481 974276 18 2230953 2098303 160726 4954
> Rename Link Symlink Mkdir Rmdir Readdir RdirPlus Access
> 1862 0 0 14724 950 16272 0 330275
> Mknod Fsstat Fsinfo PathConf Commit
> 12 30926 6 0 0
> Rpc Info:
> TimedOut Invalid X Replies Retries Requests
> 0 0 0 0 9432465
> Cache Info:
> Attr Hits Misses Lkup Hits Misses BioR Hits Misses BioW Hits Misses
> 26332545 2746602 20540712 974244 2380801 2225807 2618800 2097243
> BioRLHits Misses BioD Hits Misses DirE Hits Misses Accs Hits Misses
> 1262 18 46863 15678 29941 0 22516700 330278
>
> rick
>
> i am doing a bzip2 compressed file right now, and will make it
> available via HTTP, if you are interested …
>
> The only other thing I can suggest is taking the "soft,intr" options
> off your mount and see if that has any effect.
> Maybe some syscall is returning EINTR and confusing jboss?
>
> Tried removing soft,intr … no change, still around 240s …

Well, I looked at the packet capture and, for some reason, it
repeatedly does a write of 1 byte to a file, followed by a read of
that file, over and over and ... again. I have no idea why the
app. does that.

I also don't know why the "nfsstat -c" is bogus. Are you using "-t oldnfs"
for the mount for these runs by any chance? If so, you need to use
"nfsstat -o -c".

rick
ps: To be honest, I doubt that anything can be done to speed this up,
except to move it to a local disk.

From owner-freebsd-fs@FreeBSD.ORG Thu May 2 01:29:46 2013
Date: Wed, 1 May 2013 21:29:44 -0400 (EDT)
From: Rick Macklem
To: "Marc G. Fournier"
Message-ID: <971747745.58619.1367458184334.JavaMail.root@erie.cs.uoguelph.ca>
In-Reply-To: <1032981589.58481.1367457568966.JavaMail.root@erie.cs.uoguelph.ca>
Subject: Re: Initial NFS Test: Linux vs FreeBSD (769% slower)
Cc: freebsd-fs@freebsd.org

I wrote:
> Marc G. Fournier wrote:
> > Tried removing soft,intr … no change, still around 240s …
>
> Well, I looked at the packet capture and, for some reason, it
> repeatedly does a write of 1 byte to a file, followed by a read of
> that file, over and over and ... again. I have no idea why the
> app. does that.

Oh, one more mount option you could try is "nocto". If the app. is
repeatedly closing/opening the file, that might explain the repeated
"write 1 byte; read some of the file"?

rick

> I also don't know why the "nfsstat -c" is bogus. Are you using "-t
> oldnfs" for the mount for these runs by any chance? If so, you need
> to use "nfsstat -o -c".
>
> rick
> ps: To be honest, I doubt that anything can be done to speed this up,
> except to move it to a local disk.
>
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"

From owner-freebsd-fs@FreeBSD.ORG Thu May 2 16:38:26 2013
Received: from [10.5.250.150] (remote.ilcs.sd63.bc.ca [142.31.148.2]) by hub.org (Postfix) with ESMTPA id 996531F8AE2A;
Thu, 2 May 2013 13:38:16 -0300 (ADT)
Subject: Re: Initial NFS Test: Linux vs FreeBSD (769% slower)
From: "Marc G. Fournier"
In-Reply-To: <971747745.58619.1367458184334.JavaMail.root@erie.cs.uoguelph.ca>
Date: Thu, 2 May 2013 09:38:15 -0700
Message-Id: <8477F3D2-886C-4E0C-B44B-490C214FDFEE@hub.org>
To: Rick Macklem
Cc: freebsd-fs@freebsd.org

On 2013-05-01, at 18:29 , Rick Macklem wrote:
>> rick
>> ps: To be honest, I doubt that anything can be done to speed this up,
>> except to move it to a local disk.

Well, that definitely improves things … so FreeBSD on a local file
system is ~5s faster than Linux on NFS …

From owner-freebsd-fs@FreeBSD.ORG Thu May 2 17:43:30 2013
Received: from [10.5.250.150] (remote.ilcs.sd63.bc.ca [142.31.148.2]) by hub.org (Postfix) with ESMTPA id
750381F8AE2A; Thu, 2 May 2013 14:43:28 -0300 (ADT)
Subject: Re: Initial NFS Test: Linux vs FreeBSD (769% slower)
From: "Marc G. Fournier"
In-Reply-To: <971747745.58619.1367458184334.JavaMail.root@erie.cs.uoguelph.ca>
Date: Thu, 2 May 2013 10:43:25 -0700
Message-Id: <44FC8563-AF8D-47F9-A9A8-A4FE57FFC444@hub.org>
To: Rick Macklem
Cc: freebsd-fs@freebsd.org

On 2013-05-01, at 18:29 , Rick Macklem wrote:
> Oh, one more mount option you could try is "nocto". If the app. is
> repeatedly closing/opening the file, that might explain the repeated
> "write 1 byte; read some of the file"?
cto vs nocto made no difference … but, am doing some compares between
oldnfs and nfs … it looks like oldnfs cuts off about 60s from the
start time, but want to do a few runs …

Of note, I reformatted the Linux box with OpenBSD (god, what a
nightmare its ports system is) and OpenBSD startup times are ~180s …
I'd like to know what Linux is doing to get 'near local drive' start
up times though, and what risk is associated with it …

From owner-freebsd-fs@FreeBSD.ORG Thu May 2 18:46:45 2013
Subject: oldnfs vs nfs ( Was: Re: Initial NFS Test: Linux vs FreeBSD (769% slower) )
From: "Marc G. Fournier"
In-Reply-To: <971747745.58619.1367458184334.JavaMail.root@erie.cs.uoguelph.ca>
Date: Thu, 2 May 2013 11:46:40 -0700
To: Rick Macklem
Cc: freebsd-fs@freebsd.org

Okay, I think I can make the pitch against Linux based on concerns as to
what exactly it is doing to get 'near local drive' performance on start
up of jboss … it has to be short-circuiting something in favour of
speed … and this project (to us) is such that any reduction in risk of
data loss is desirable … we have a lot of naysayers against us …

But, that said, the newnfs code does appear to be slower than the
oldnfs … just over 60s slower to start up … is that to be expected?
OLDNFS: JBoss AS 7.1.1.Final "Brontes" started in 246381ms

root@server04:/usr/local/jboss-as-7.1.1.Final # nfsstat -o -c
Client Info:
Rpc Counts:
 Getattr  Setattr   Lookup Readlink     Read    Write   Create   Remove
  237199        5    17224        0   233935   234240     3743        1
  Rename     Link  Symlink    Mkdir    Rmdir  Readdir RdirPlus   Access
       0        0        0      307        0       71        0     8420
   Mknod   Fsstat   Fsinfo PathConf   Commit
       0      467        0        0        0
Rpc Info:
 TimedOut  Invalid X Replies  Retries Requests
        0        0         0        0   735611
Cache Info:
Attr Hits   Misses Lkup Hits   Misses BioR Hits   Misses BioW Hits   Misses
   712242   237197    527641    17224   -100768   233774     13164   234240
BioRLHits   Misses BioD Hits   Misses DirE Hits   Misses Accs Hits   Misses
        0        0       934       71       467        0    544719     8420

NEWNFS: JBoss AS 7.1.1.Final "Brontes" started in 305919ms

root@server04:~ # nfsstat -c
Client Info:
Rpc Counts:
 Getattr  Setattr   Lookup Readlink     Read    Write   Create   Remove
  236644        5    17306        0   230891   231140     3743        1
  Rename     Link  Symlink    Mkdir    Rmdir  Readdir RdirPlus   Access
       0        0        0      307        0       71        0     8481
   Mknod   Fsstat   Fsinfo PathConf   Commit
       0      531        0        0        0
Rpc Info:
 TimedOut  Invalid X Replies  Retries Requests
        0        0         0        0   729116
Cache Info:
Attr Hits   Misses Lkup Hits   Misses BioR Hits   Misses BioW Hits   Misses
   717990   236647    530738    17306   -101086   230812     13164   231140
BioRLHits   Misses BioD Hits   Misses DirE Hits   Misses Accs Hits   Misses
        0        0      1087       55       531        0    548059     8481

Both using same mount options *other than* oldnfs vs nfs …

From owner-freebsd-fs@FreeBSD.ORG Thu May 2 20:53:31 2013
Date: Thu, 2 May 2013 16:52:21 -0400 (EDT)
From: "Lawrence K. Chen, P.Eng."
To: Rainer Duffner
Message-ID: <834305228.13772274.1367527941142.JavaMail.root@k-state.edu>
In-Reply-To: <44CB04ED-33CF-40A6-A344-25CD7F5CCC32@ultra-secure.de>
Subject: Re: NFS Performance issue against NetApp
Cc: freebsd-fs@freebsd.org

Yeah, I didn't have any problems with FreeBSD 9.0 on G7. The boss didn't
like the lack of passthru and having to configure a bunch of RAID 0 LUNs
for each disk with the SmartArray P410i... so he was going through
everything putting in the LSI SAS 2008s, and decided while he was at it
to switch to all Intel EXPI9402PT cards... it might be because of the
G7's that are doing SmartOS. He swapped out all the memory....

Joked that he was replacing everything except for the case....

----- Original Message -----
> On 24.04.2013, at 23:29, "Lawrence K. Chen, P.Eng." wrote:
>
> > Hmmm, I guess all our Gen8's have been for the new vCloud project.
> > But, a few months ago boss had gone to putting LSI SAS 2008 and
> > Intel EXPI9402PT cards into our other Proliants (DL380 G7's and
> > DL180 G6's). Currently the only in-production FreeBSD server
> > (9.1) is on a DL180 G6.
I was working on a DL380 G7, but I lost > > that hardware to a different project. > > > > > G6 and G7 is no problem. At least DL360 + DL380, which we use > (almost) exclusively. > The onboard-NICs are supposed to be swappable for something else - > but there aren't any useful modules yet (a 10G module is available). > > > > From owner-freebsd-fs@FreeBSD.ORG Thu May 2 21:05:45 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 0286F5A7 for ; Thu, 2 May 2013 21:05:45 +0000 (UTC) (envelope-from scrappy@hub.org) Received: from hub.org (hub.org [200.46.208.146]) by mx1.freebsd.org (Postfix) with ESMTP id C444C15CD for ; Thu, 2 May 2013 21:05:44 +0000 (UTC) Received: from maia.hub.org (unknown [200.46.151.188]) by hub.org (Postfix) with ESMTP id 7911B1F8AE2B; Thu, 2 May 2013 18:05:43 -0300 (ADT) Received: from hub.org ([200.46.208.146]) by maia.hub.org (mx1.hub.org [200.46.151.188]) (amavisd-maia, port 10024) with ESMTP id 30340-10; Thu, 2 May 2013 21:05:41 +0000 (UTC) Received: from [10.5.250.150] (remote.ilcs.sd63.bc.ca [142.31.148.2]) by hub.org (Postfix) with ESMTPA id A097F1F8AE2A; Thu, 2 May 2013 18:05:40 -0300 (ADT) Content-Type: text/plain; charset=windows-1252 Mime-Version: 1.0 (Mac OS X Mail 6.3 \(1503\)) Subject: Re: NFS Performance issue against NetApp From: "Marc G. Fournier" In-Reply-To: <834305228.13772274.1367527941142.JavaMail.root@k-state.edu> Date: Thu, 2 May 2013 14:05:38 -0700 Content-Transfer-Encoding: quoted-printable Message-Id: <75CB6F1E-385D-4E51-876E-7BB8D7140263@hub.org> References: <834305228.13772274.1367527941142.JavaMail.root@k-state.edu> To: "Lawrence K. Chen, P.Eng." 
X-Mailer: Apple Mail (2.1503) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 02 May 2013 21:05:45 -0000

On 2013-05-02, at 13:52 , "Lawrence K. Chen, P.Eng." wrote:

> Yeah, I didn't have any problems with FreeBSD 9.0 on G7, the boss didn't like the lack of passthru and having to configure a bunch of raid 0 luns for each disk with the SmartArray P410i...so he was going through everything putting in the LSI SAS 2008s, and decided while he was at it to switch to all Intel EXPI9402PT cards....it might be because of the G7's that are doing SmartOS. He swapped out all the memory….

I tried Intel vs Broadcom, and didn't notice any difference … New NFS is slower than Old NFS, but that's just a difference of a 5m start up vs a 4m start up … even OpenBSD is faster by ~25% "out of the box" …

The thing is, I'm not convinced it is an NFS related issue … there are *so* many other variables involved … it could be something with the network stack … it could be something with the scheduler … it could be … hell, it could be like the guy states in that blog posting (http://antibsd.wordpress.com/) and be the compiler changes …

I found this in my searches that talks about how much CPU on the NetApp side is used when using a FreeBSD client over Linux: http://www.makingitscale.com/2012/freebsd-linux-nfs-and-the-attribute-cache.html

My big question is why is Linux so much less aggressive than FreeBSD in this guy's tests? Is the Linux implementation "skipping" something in their processing? Are we doing something that is "optional", but for completeness, we've implemented it while they've chosen to leave it out?

There has to be something to explain such dramatic differences … :(

>
> Joked that he was replacing everything except for the case....
>
> ----- Original Message -----
>>
>> Am 24.04.2013 um 23:29 schrieb "Lawrence K. Chen, P.Eng." >> :
>>
>>> Hmmm, I guess all our Gen8's have been for the new vCloud project.
>>> But, a few months ago boss had gone to putting LSI SAS 2008 and
>>> Intel EXPI9402PT cards into our other Proliants (DL380 G7's and
>>> DL180 G6's). Currently the only in production FreeBSD server
>>> (9.1) is on a DL180 G6. I was working on a DL380 G7, but I lost
>>> that hardware to a different project.
>>>
>>
>> G6 and G7 is no problem. At least DL360 + DL380, which we use
>> (almost) exclusively.
>> The onboard-NICs are supposed to be swappable for something else -
>> but there aren't any useful modules yet (a 10G module is available).

From owner-freebsd-fs@FreeBSD.ORG Thu May 2 22:11:58 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 30355FE8 for ; Thu, 2 May 2013 22:11:58 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) Received: from esa-jnhn.mail.uoguelph.ca (esa-jnhn.mail.uoguelph.ca [131.104.91.44]) by mx1.freebsd.org (Postfix) with ESMTP id EFB3F1817 for ; Thu, 2 May 2013 22:11:57 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: AqIEAE7jglGDaFvO/2dsb2JhbABSgz6DN7tygRV0gh8BAQEDAQEBASArIAsFBw8YAgINGQIpAQkmBggHBAEcBIdlBgywMJEEgSOMUX4BMweCQIETA5RpgkKBJpAMgykgMoEENQ X-IronPort-AV: E=Sophos;i="4.87,599,1363147200"; d="scan'208";a="28203171" Received: from erie.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.206]) by esa-jnhn.mail.uoguelph.ca with ESMTP; 02 May 2013 18:11:56 -0400 Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id F2E12B3F23; Thu, 2 May 2013 18:11:56 -0400 (EDT) Date: Thu, 2 May 2013 18:11:56 -0400 (EDT) From: Rick Macklem To: "Marc G.
Fournier" Message-ID: <1911462019.88361.1367532716981.JavaMail.root@erie.cs.uoguelph.ca> In-Reply-To: <75CB6F1E-385D-4E51-876E-7BB8D7140263@hub.org> Subject: Re: NFS Performance issue against NetApp MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: quoted-printable X-Originating-IP: [172.17.91.201] X-Mailer: Zimbra 6.0.10_GA_2692 (ZimbraWebClient - IE7 (Win)/6.0.10_GA_2692) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 02 May 2013 22:11:58 -0000

Marc G. Fournier wrote:
> On 2013-05-02, at 13:52 , "Lawrence K. Chen, P.Eng."
> wrote:
>
> > Yeah, I didn't have any problems with FreeBSD 9.0 on G7, the boss
> > didn't like the lack of passthru and having to configure a bunch of
> > raid 0 luns for each disk with the SmartArray P410i...so he was
> > going through everything putting in the LSI SAS 2008s, and decided
> > while he was at it to switch to all Intel EXPI9402PT cards....it
> > might be because of the G7's that are doing SmartOS. He swapped out
> > all the memory….
>
> I tried Intel vs Broadcom, and didn't notice any difference … New NFS
> is slower than Old NFS, but that's just a difference of a 5m start up
> vs a 4m start up … even OpenBSD is faster by ~25% "out of the box" …
>
> The thing is, I'm not convinced it is an NFS related issue … there are
> *so* many other variables involved … it could be something with the
> network stack … it could be something with the scheduler … it could be
> … hell, it could be like the guy states in that blog posting
> (http://antibsd.wordpress.com/) and be the compiler changes …
>
> I found this in my searches that talks about how much CPU on the
> NetApp side is used when using a FreeBSD client over Linux:
>
> http://www.makingitscale.com/2012/freebsd-linux-nfs-and-the-attribute-cache.html
>
A little off topic, but this guy reports the client as doing Access RPCs.
There is a sysctl called vfs.nfs.prime_access_cache. If you set that to 0,
the client will use Getattr RPCs instead of Access RPCs.
This was put in specifically for Netapp Filers, since their server implementation
for Access results in much higher overheads than Getattr.
(An Access reply includes attributes and access stuff that can be used
to prime both caches, so it makes sense to do Access instead of Getattr
when the server overheads are about the same for both.)

rick

> My big question is why is Linux so much less aggressive than FreeBSD
> in this guy's tests? Is the Linux implementation "skipping" something
> in their processing? Are we doing something that is "optional", but
> for completeness, we've implemented it while they've chosen to leave
> it out?
>
> There has to be something to explain such dramatic differences … :(
>
> > Joked that he was replacing everything except for the case....
> >
> > ----- Original Message -----
> >>
> >> Am 24.04.2013 um 23:29 schrieb "Lawrence K. Chen, P.Eng."
> >> : > >> > >>> Hmmm, I guess all our Gen8's have been for the new vCloud project. > >>> But, a few months ago boss had gone to putting LSI SAS 2008 and > >>> Intel EXPI9402PT cards into our other Proliants (DL380 G7's and > >>> DL180 G6's). Currently the only in production FreeBSD server > >>> (9.1) is on a DL180 G6. I was working on a DL380 G7, but I lost > >>> that hardware to a different project. > >>> > >> > >> > >> G6 and G7 is no problem. At least DL360 + DL380, which we use > >> (almost) exclusively. > >> The onboard-NICs are supposed to be swappable for something else - > >> but there aren't any useful modules yet (a 10G module is > >> available). > >> > >> > >> > >> >=20 > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Thu May 2 22:39:21 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id AF84DA44 for ; Thu, 2 May 2013 22:39:21 +0000 (UTC) (envelope-from allan@physics.umn.edu) Received: from mail.physics.umn.edu (smtp.spa.umn.edu [128.101.220.4]) by mx1.freebsd.org (Postfix) with ESMTP id 90AEF1A62 for ; Thu, 2 May 2013 22:39:21 +0000 (UTC) Received: from peevish.spa.umn.edu ([128.101.220.230]) by mail.physics.umn.edu with esmtp (Exim 4.77 (FreeBSD)) (envelope-from ) id 1UY1q5-0004tM-EX; Thu, 02 May 2013 17:18:57 -0500 Received: by peevish.spa.umn.edu (Postfix, from userid 5000) id 6131D639; Thu, 2 May 2013 17:18:57 -0500 (CDT) Date: Thu, 2 May 2013 17:18:57 -0500 From: Graham Allan To: "Marc G. 
Fournier" Subject: Re: NFS Performance issue against NetApp Message-ID: <20130502221857.GJ32659@physics.umn.edu> References: <834305228.13772274.1367527941142.JavaMail.root@k-state.edu> <75CB6F1E-385D-4E51-876E-7BB8D7140263@hub.org> MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Disposition: inline In-Reply-To: <75CB6F1E-385D-4E51-876E-7BB8D7140263@hub.org> User-Agent: Mutt/1.5.20 (2009-12-10) Content-Transfer-Encoding: quoted-printable Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 02 May 2013 22:39:21 -0000

On Thu, May 02, 2013 at 02:05:38PM -0700, Marc G. Fournier wrote:
>
> The thing is, I'm not convinced it is an NFS related issue … there are *so* many other variables involved … it could be something with the network stack … it could be something with the scheduler … it could be … hell, it could be like the guy states in that blog posting (http://antibsd.wordpress.com/) and be the compiler changes …

I'm just watching interestedly from the sidelines, and I hesitate to ask because it seems too obvious - maybe I missed something - but have you run both tests (Linux and FreeBSD) purely with local disk, to get a baseline independent of NFS?
Graham From owner-freebsd-fs@FreeBSD.ORG Thu May 2 23:08:20 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id F0631DD2 for ; Thu, 2 May 2013 23:08:20 +0000 (UTC) (envelope-from gpalmer@freebsd.org) Received: from noop.in-addr.com (mail.in-addr.com [IPv6:2001:470:8:162::1]) by mx1.freebsd.org (Postfix) with ESMTP id C70A51B71 for ; Thu, 2 May 2013 23:08:20 +0000 (UTC) Received: from gjp by noop.in-addr.com with local (Exim 4.80.1 (FreeBSD)) (envelope-from ) id 1UY2bq-0005g8-5C; Thu, 02 May 2013 19:08:18 -0400 Date: Thu, 2 May 2013 19:08:17 -0400 From: Gary Palmer To: Rick Macklem Subject: Re: Initial NFS Test: Linux vs FreeBSD (769% slower) Message-ID: <20130502230817.GA10891@in-addr.com> References: <531F8BBE-1476-4591-BABD-EA6B230ADB44@hub.org> <1032981589.58481.1367457568966.JavaMail.root@erie.cs.uoguelph.ca> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <1032981589.58481.1367457568966.JavaMail.root@erie.cs.uoguelph.ca> X-SA-Exim-Connect-IP: X-SA-Exim-Mail-From: gpalmer@freebsd.org X-SA-Exim-Scanned: No (on noop.in-addr.com); SAEximRunCond expanded to false Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 02 May 2013 23:08:21 -0000 On Wed, May 01, 2013 at 09:19:28PM -0400, Rick Macklem wrote: > Well, I looked at the packet capture and, for some reason, it > repeatedly does a write of 1 byte to a file, followed by a read of > that file, over and over and ... again. I have no idea why the > app. does that. It might be worth running the Linux Java under FreeBSD Linux emulation to see if it is the app that is doing that or something in Java that doesn't like the FreeBSD version for some reason. 
Note that I have no idea how Java ports to different platforms work, but the suggestion that it is doing something whacky makes me wonder. Gary From owner-freebsd-fs@FreeBSD.ORG Thu May 2 23:43:23 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 4D74E1E1 for ; Thu, 2 May 2013 23:43:23 +0000 (UTC) (envelope-from scrappy@hub.org) Received: from hub.org (hub.org [200.46.208.146]) by mx1.freebsd.org (Postfix) with ESMTP id 1BE3D1C4C for ; Thu, 2 May 2013 23:43:22 +0000 (UTC) Received: from maia.hub.org (unknown [200.46.151.189]) by hub.org (Postfix) with ESMTP id 1269E1F8AE2B; Thu, 2 May 2013 20:43:21 -0300 (ADT) Received: from hub.org ([200.46.208.146]) by maia.hub.org (mx1.hub.org [200.46.151.189]) (amavisd-maia, port 10024) with ESMTP id 10003-02; Thu, 2 May 2013 23:43:20 +0000 (UTC) Received: from [10.5.250.150] (remote.ilcs.sd63.bc.ca [142.31.148.2]) by hub.org (Postfix) with ESMTPA id 295D31F8AE2A; Thu, 2 May 2013 20:43:19 -0300 (ADT) Content-Type: text/plain; charset=windows-1252 Mime-Version: 1.0 (Mac OS X Mail 6.3 \(1503\)) Subject: Re: NFS Performance issue against NetApp From: "Marc G. Fournier" In-Reply-To: <20130502221857.GJ32659@physics.umn.edu> Date: Thu, 2 May 2013 16:43:17 -0700 Content-Transfer-Encoding: quoted-printable Message-Id: <420165EE-BBBF-4E97-B476-58FFE55A52AA@hub.org> References: <834305228.13772274.1367527941142.JavaMail.root@k-state.edu> <75CB6F1E-385D-4E51-876E-7BB8D7140263@hub.org> <20130502221857.GJ32659@physics.umn.edu> To: Graham Allan X-Mailer: Apple Mail (2.1503) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 02 May 2013 23:43:23 -0000 On 2013-05-02, at 15:18 , Graham Allan wrote: > On Thu, May 02, 2013 at 02:05:38PM -0700, Marc G. 
Fournier wrote:
>>
>> The thing is, I'm not convinced it is an NFS related issue … there are *so* many other variables involved … it could be something with the network stack … it could be something with the scheduler … it could be … hell, it could be like the guy states in that blog posting (http://antibsd.wordpress.com/) and be the compiler changes …
>
> I'm just watching interestedly from the sidelines, and I hesitate to ask
> because it seems too obvious - maybe I missed something - but have you
> run both tests (Linux and FreeBSD) purely with local disk, to get a
> baseline independent of NFS?

Hadn't thought to do so with Linux, but …

Linux ……. 20732ms, 20117ms, 20935ms, 20130ms, 20560ms
FreeBSD .. 28996ms, 24794ms, 24702ms, 23311ms, 24153ms

In the case of the following, I umount the file system, change the settings, mount and then run two runs:

FreeBSD, nfs, vfs.nfs.prime_access_cache=1 … 279207ms, 273970ms
FreeBSD, nfs, vfs.nfs.prime_access_cache=0 … 279254ms, 274667ms
FreeBSD, oldnfs, vfs.nfs.prime_access_cache=0 … 244955ms, 243280ms
FreeBSD, oldnfs, vfs.nfs.prime_access_cache=1 … 242014ms, 242393ms

Default for vfs.nfs.prime_access_cache appears to be 0 …

From owner-freebsd-fs@FreeBSD.ORG Thu May 2 23:51:07 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 53FB8297; Thu, 2 May 2013 23:51:07 +0000 (UTC) (envelope-from scrappy@hub.org) Received: from hub.org (hub.org [200.46.208.146]) by mx1.freebsd.org (Postfix) with ESMTP id 217DF1CA5; Thu, 2 May 2013 23:51:06 +0000 (UTC) Received: from maia.hub.org (unknown [200.46.151.188]) by hub.org (Postfix) with ESMTP id 5151D1F8AE2B; Thu, 2 May 2013 20:51:06 -0300 (ADT) Received: from hub.org ([200.46.208.146]) by maia.hub.org (mx1.hub.org [200.46.151.188]) (amavisd-maia, port 10024) with ESMTP id 11756-04; Thu, 2 May 2013 23:51:05 +0000 (UTC)
Received: from [10.5.250.150] (remote.ilcs.sd63.bc.ca [142.31.148.2]) by hub.org (Postfix) with ESMTPA id 273C91F8AE2A; Thu, 2 May 2013 20:51:04 -0300 (ADT) Content-Type: text/plain; charset=windows-1252 Mime-Version: 1.0 (Mac OS X Mail 6.3 \(1503\)) Subject: Re: Initial NFS Test: Linux vs FreeBSD (769% slower) From: "Marc G. Fournier" In-Reply-To: <20130502230817.GA10891@in-addr.com> Date: Thu, 2 May 2013 16:51:03 -0700 Content-Transfer-Encoding: quoted-printable Message-Id: References: <531F8BBE-1476-4591-BABD-EA6B230ADB44@hub.org> <1032981589.58481.1367457568966.JavaMail.root@erie.cs.uoguelph.ca> <20130502230817.GA10891@in-addr.com> To: Gary Palmer X-Mailer: Apple Mail (2.1503) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 02 May 2013 23:51:07 -0000

will try and report back tomorrow … have to rebuild my kernel as I don't even have FREEBSD32 support in right now …

On 2013-05-02, at 16:08 , Gary Palmer wrote:

> On Wed, May 01, 2013 at 09:19:28PM -0400, Rick Macklem wrote:
>> Well, I looked at the packet capture and, for some reason, it
>> repeatedly does a write of 1 byte to a file, followed by a read of
>> that file, over and over and ... again. I have no idea why the
>> app. does that.
>
> It might be worth running the Linux Java under FreeBSD Linux emulation
> to see if it is the app that is doing that or something in Java that
> doesn't like the FreeBSD version for some reason.
>
> Note that I have no idea how Java ports to different platforms work,
> but the suggestion that it is doing something whacky makes me wonder.
>=20 > Gary From owner-freebsd-fs@FreeBSD.ORG Fri May 3 00:48:16 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 0BC42B75 for ; Fri, 3 May 2013 00:48:16 +0000 (UTC) (envelope-from mcdouga9@egr.msu.edu) Received: from mail.egr.msu.edu (dauterive.egr.msu.edu [35.9.37.168]) by mx1.freebsd.org (Postfix) with ESMTP id DA2741EB1 for ; Fri, 3 May 2013 00:48:15 +0000 (UTC) Received: from dauterive (localhost [127.0.0.1]) by mail.egr.msu.edu (Postfix) with ESMTP id 50F93427F2 for ; Thu, 2 May 2013 20:39:25 -0400 (EDT) X-Virus-Scanned: amavisd-new at egr.msu.edu Received: from mail.egr.msu.edu ([127.0.0.1]) by dauterive (dauterive.egr.msu.edu [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id zcbaUielxrKN for ; Thu, 2 May 2013 20:39:25 -0400 (EDT) Received: from EGR authenticated sender Message-ID: <5183074B.5090004@egr.msu.edu> Date: Thu, 02 May 2013 14:39:39 -1000 From: Adam McDougall User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:17.0) Gecko/20130328 Thunderbird/17.0.5 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: Re: NFS Performance issue against NetApp References: <834305228.13772274.1367527941142.JavaMail.root@k-state.edu> <75CB6F1E-385D-4E51-876E-7BB8D7140263@hub.org> <20130502221857.GJ32659@physics.umn.edu> <420165EE-BBBF-4E97-B476-58FFE55A52AA@hub.org> In-Reply-To: <420165EE-BBBF-4E97-B476-58FFE55A52AA@hub.org> Content-Type: text/plain; charset=windows-1252; format=flowed Content-Transfer-Encoding: 8bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 03 May 2013 00:48:16 -0000 On 5/2/2013 1:43 PM, Marc G. Fournier wrote: > On 2013-05-02, at 15:18 , Graham Allan wrote: > >> On Thu, May 02, 2013 at 02:05:38PM -0700, Marc G. 
Fournier wrote:
>>> The thing is, I'm not convinced it is an NFS related issue … there are *so* many other variables involved … it could be something with the network stack … it could be something with the scheduler … it could be … hell, it could be like the guy states in that blog posting (http://antibsd.wordpress.com/) and be the compiler changes …
>> I'm just watching interestedly from the sidelines, and I hesitate to ask
>> because it seems too obvious - maybe I missed something - but have you
>> run both tests (Linux and FreeBSD) purely with local disk, to get a
>> baseline independent of NFS?
> Hadn't thought to do so with Linux, but …
>
> Linux ……. 20732ms, 20117ms, 20935ms, 20130ms, 20560ms
> FreeBSD .. 28996ms, 24794ms, 24702ms, 23311ms, 24153ms
>
> In the case of the following, I umount the file system, change the settings, mount and then run two runs:
>
> FreeBSD, nfs, vfs.nfs.prime_access_cache=1 … 279207ms, 273970ms
> FreeBSD, nfs, vfs.nfs.prime_access_cache=0 … 279254ms, 274667ms
> FreeBSD, oldnfs, vfs.nfs.prime_access_cache=0 … 244955ms, 243280ms
> FreeBSD, oldnfs, vfs.nfs.prime_access_cache=1 … 242014ms, 242393ms
>
> Default for vfs.nfs.prime_access_cache appears to be 0 …
>
My understanding of jboss is it unpacks your war files (or whatever) to a temp deploy dir but essentially tries to run everything from memory. If you replaced a war file, it would usually undeploy and redeploy. Is your jboss extracting the archives to an NFS dir or can you reconfigure or symlink it to extract to a local temp dir when starting up? I can't imagine offhand why it might be useful to store the temp dir on NFS. I would think most of the writes at startup would be to temp files that would be of no use after the jboss java process is stopped.
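[As an editorial aside: the relative slowdowns implied by the startup timings posted in this thread can be checked with a quick awk one-liner. This is only a sketch over the posted numbers; pooling the four newnfs runs (both prime_access_cache settings) and the four oldnfs runs into single averages is my own grouping, not something from the thread.]

```shell
# Average the startup timings posted in the thread (values in ms) and
# compare them; awk does the floating-point arithmetic.
awk 'BEGIN {
    linux  = (20732+20117+20935+20130+20560)/5;  # local disk, Linux
    fbsd   = (28996+24794+24702+23311+24153)/5;  # local disk, FreeBSD
    newnfs = (279207+273970+279254+274667)/4;    # newnfs runs, both sysctl settings
    oldnfs = (244955+243280+242014+242393)/4;    # oldnfs runs, both sysctl settings
    printf "local disk, FreeBSD vs Linux: %.0f%% slower\n", (fbsd/linux - 1)*100
    printf "newnfs vs oldnfs:             %.0f%% slower\n", (newnfs/oldnfs - 1)*100
    printf "newnfs vs local FreeBSD disk: %.1fx\n", newnfs/fbsd
}'
```

The local-disk gap works out to roughly 23%, the newnfs-vs-oldnfs gap to roughly 14%, while the NFS runs are about 11x slower than the local FreeBSD baseline, so most of the difference sits on the NFS path rather than in local I/O.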
From owner-freebsd-fs@FreeBSD.ORG Fri May 3 02:53:19 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id BAD8CAE4 for ; Fri, 3 May 2013 02:53:19 +0000 (UTC) (envelope-from break19@gmail.com) Received: from mail-wg0-x22a.google.com (mail-wg0-x22a.google.com [IPv6:2a00:1450:400c:c00::22a]) by mx1.freebsd.org (Postfix) with ESMTP id 528671188 for ; Fri, 3 May 2013 02:53:19 +0000 (UTC) Received: by mail-wg0-f42.google.com with SMTP id j13so291858wgh.5 for ; Thu, 02 May 2013 19:53:18 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:x-received:in-reply-to:references:date:message-id :subject:from:cc:content-type; bh=gSlBF5Su55YhYZoxT2+xE6iG6OIvFPL0gTTayPdsXiU=; b=r6JOp1rcDamPHgLr8uUAh3fENSVq4QaWefYiIJ2TvkWbt0k11I6tJhIKXvli7HNno+ 0PO4PjNBKAHnK/g7Ws12kUEYVz5OrFOtxiTfpEVAbZ7kx0fP7xarGkGFwKcWe6WxtMq2 upToqqUdK/HCzjrrpP1MmBEW3+cLcNt/lHTqAYd8G5zuKsKdzxCxJvoE8ZKEJ7vcPRvp kPwSZFeBbrEfcYkCXaPw9Ixmv+8WCd1ZR2LaUOHe0H7zi3G7FX8LL3pvO3poc0X7GWeE hBF9EM7y0tP2IJA3OQDBub/Eb+izdeq+qmgWc0kGqf0QXuBqC0A8wuLOqzOqIpmaiArY E+Fg== MIME-Version: 1.0 X-Received: by 10.180.39.207 with SMTP id r15mr10883351wik.16.1367549598464; Thu, 02 May 2013 19:53:18 -0700 (PDT) Received: by 10.227.103.138 with HTTP; Thu, 2 May 2013 19:53:18 -0700 (PDT) Received: by 10.227.103.138 with HTTP; Thu, 2 May 2013 19:53:18 -0700 (PDT) In-Reply-To: <5183074B.5090004@egr.msu.edu> References: <834305228.13772274.1367527941142.JavaMail.root@k-state.edu> <75CB6F1E-385D-4E51-876E-7BB8D7140263@hub.org> <20130502221857.GJ32659@physics.umn.edu> <420165EE-BBBF-4E97-B476-58FFE55A52AA@hub.org> <5183074B.5090004@egr.msu.edu> Date: Thu, 2 May 2013 21:53:18 -0500 Message-ID: Subject: Re: NFS Performance issue against NetApp From: Chuck Burns Cc: freebsd-fs@freebsd.org Content-Type: text/plain; charset=windows-1252 Content-Transfer-Encoding: 
quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.14 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 03 May 2013 02:53:19 -0000

On May 2, 2013 7:48 PM, "Adam McDougall" wrote:
>
> On 5/2/2013 1:43 PM, Marc G. Fournier wrote:
>>
>> On 2013-05-02, at 15:18 , Graham Allan wrote:
>>> On Thu, May 02, 2013 at 02:05:38PM -0700, Marc G. Fournier wrote:
>>>>
>>>> The thing is, I'm not convinced it is an NFS related issue … there are *so* many other variables involved … it could be something with the network stack … it could be something with the scheduler … it could be … hell, it could be like the guy states in that blog posting (http://antibsd.wordpress.com/) and be the compiler changes …
>>>
>>> I'm just watching interestedly from the sidelines, and I hesitate to ask
>>> because it seems too obvious - maybe I missed something - but have you
>>> run both tests (Linux and FreeBSD) purely with local disk, to get a
>>> baseline independent of NFS?
>>
>> Hadn't thought to do so with Linux, but …
>>
>> Linux ……. 20732ms, 20117ms, 20935ms, 20130ms, 20560ms
>> FreeBSD .. 28996ms, 24794ms, 24702ms, 23311ms, 24153ms
>>
>> In the case of the following, I umount the file system, change the settings, mount and then run two runs:
>>
>> FreeBSD, nfs, vfs.nfs.prime_access_cache=1 … 279207ms, 273970ms
>> FreeBSD, nfs, vfs.nfs.prime_access_cache=0 … 279254ms, 274667ms
>> FreeBSD, oldnfs, vfs.nfs.prime_access_cache=0 … 244955ms, 243280ms
>> FreeBSD, oldnfs, vfs.nfs.prime_access_cache=1 … 242014ms, 242393ms
>>
>> Default for vfs.nfs.prime_access_cache appears to be 0 …
>>
> My understanding of jboss is it unpacks your war files (or whatever) to a temp deploy dir but essentially tries to run everything from memory. If you replaced a war file, it would usually undeploy and redeploy.
Is your jboss extracting the archives to an NFS dir or can you reconfigure or symlink it to extract to a local temp dir when starting up? I can't imagine offhand why it might be useful to store the temp dir on NFS. I would think most of the writes at startup would be to temp files that would be of no use after the jboss java process is stopped. > > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" Here is another possibility. Most linux distros put /tmp on tmpfs, whereas FreeBSD by default uses actual disk space. From owner-freebsd-fs@FreeBSD.ORG Fri May 3 11:50:12 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 1D12E1E5 for ; Fri, 3 May 2013 11:50:12 +0000 (UTC) (envelope-from feld@feld.me) Received: from new1-smtp.messagingengine.com (new1-smtp.messagingengine.com [66.111.4.221]) by mx1.freebsd.org (Postfix) with ESMTP id E56221B4B for ; Fri, 3 May 2013 11:50:11 +0000 (UTC) Received: from compute3.internal (compute3.nyi.mail.srv.osa [10.202.2.43]) by gateway1.nyi.mail.srv.osa (Postfix) with ESMTP id 865861E6A for ; Fri, 3 May 2013 07:50:10 -0400 (EDT) Received: from frontend1.nyi.mail.srv.osa ([10.202.2.160]) by compute3.internal (MEProxy); Fri, 03 May 2013 07:50:10 -0400 DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; d=feld.me; h= content-type:to:subject:references:date:mime-version :content-transfer-encoding:from:message-id:in-reply-to; s= mesmtp; bh=t5VArlgu1miJca2tgfjDYeAzhB4=; b=dpAyqGc96hf5mmj32ukTd FqMmUsVQ7blxKOe/9oaD9edDJa6wgtrm5bkAnkZTGAFMstGpB0XYUpe3Tu6hgM2P ZG0zw4NGvHotG5wG5x4wP78evpiotL4y6bpsDHrZVt71FOn5sCHZR/a1xZoV9h+D Upa7mSQqs7fTVBJN5ubkA4= DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; d= messagingengine.com; h=content-type:to:subject:references:date 
:mime-version:content-transfer-encoding:from:message-id :in-reply-to; s=smtpout; bh=t5VArlgu1miJca2tgfjDYeAzhB4=; b=MRrc uqcs3K4quef9lxlQXBcH1MpWYfeSvSTAW/8uoHum+Mq8ML7rCY1fTehH/L138Ol+ NgkmFzArRE6iUYfdbSN/tv9HCgFDw+lmBYZ3Sddpfal6gNyH4v0eqEmHVzxwxvDj tD5kjt131Y2dOW2w7BGHFqmlbTD4cgYODCUGpX8= X-Sasl-enc: R61X+sXwttI06Btf1FpF8gN5/uAIt01Z+g6VI17IWNfu 1367581810 Received: from tech304.office.supranet.net (unknown [66.170.8.18]) by mail.messagingengine.com (Postfix) with ESMTPA id EEBB9C8000D for ; Fri, 3 May 2013 07:50:09 -0400 (EDT) Content-Type: text/plain; charset=utf-8; format=flowed; delsp=yes To: freebsd-fs@freebsd.org Subject: Re: NFS Performance issue against NetApp References: <834305228.13772274.1367527941142.JavaMail.root@k-state.edu> <75CB6F1E-385D-4E51-876E-7BB8D7140263@hub.org> <20130502221857.GJ32659@physics.umn.edu> <420165EE-BBBF-4E97-B476-58FFE55A52AA@hub.org> Date: Fri, 03 May 2013 06:50:09 -0500 MIME-Version: 1.0 Content-Transfer-Encoding: Quoted-Printable From: "Mark Felder" Message-ID: In-Reply-To: <420165EE-BBBF-4E97-B476-58FFE55A52AA@hub.org> User-Agent: Opera Mail/12.14 (FreeBSD) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 03 May 2013 11:50:12 -0000

On Thu, 02 May 2013 18:43:17 -0500, Marc G. Fournier wrote:

> Hadn't thought to do so with Linux, but …
> Linux ……. 20732ms, 20117ms, 20935ms, 20130ms, 20560ms
> FreeBSD .. 28996ms, 24794ms, 24702ms, 23311ms, 24153ms

Please make sure both platforms are using similar atime settings. I think most distros use ext4 with diratime by default. I'd just do noatime on both platforms to be safe.
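[For anyone wanting to try that suggestion: atime can be turned off on an already-mounted filesystem without editing fstab. A minimal sketch; the mount point /srv/jboss is a placeholder for wherever the test tree actually lives.]

```shell
# Placeholder mount point -- substitute the filesystem the JBoss test uses.

# Linux (ext4): remount the existing mount with noatime
mount -o remount,noatime /srv/jboss

# FreeBSD (UFS): update the existing mount in place with noatime
mount -u -o noatime /srv/jboss

# Verify the flag took effect on either system
mount | grep /srv/jboss
```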
From owner-freebsd-fs@FreeBSD.ORG Fri May 3 18:16:11 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id E6A1914E for ; Fri, 3 May 2013 18:16:11 +0000 (UTC) (envelope-from mike@bayphoto.com) Received: from mx.got.net (mx6.mx3.got.net [207.111.237.45]) by mx1.freebsd.org (Postfix) with ESMTP id B16681915 for ; Fri, 3 May 2013 18:16:10 +0000 (UTC) Received: from [10.250.12.27] (unknown [207.111.246.196]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mx.got.net (mx3.mx3.got.net) with ESMTP id C5B7223A2B9 for ; Fri, 3 May 2013 10:43:21 -0700 (PDT) Message-ID: <5183F739.2040908@bayphoto.com> Date: Fri, 03 May 2013 10:43:21 -0700 From: Mike Carlson User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:17.0) Gecko/20130328 Thunderbird/17.0.5 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: zfs issue - disappearing data Content-Type: multipart/signed; protocol="application/pkcs7-signature"; micalg=sha1; boundary="------------ms080903080802050802010504" X-Content-Filtered-By: Mailman/MimeDel 2.1.14 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list Reply-To: mike@bayphoto.com List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 03 May 2013 18:16:12 -0000 This is a cryptographically signed message in MIME format. 
We had a critical issue with a zfs server that exports shares via samba
(3.5) last night.

system info:

    # uname -a
    FreeBSD zfs-1.discdrive.bayphoto.com 9.1-RELEASE FreeBSD 9.1-RELEASE #0
    r243825: Tue Dec  4 09:23:10 UTC 2012
    root@farrell.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC  amd64

zpool history:

    History for 'data':
    2013-02-25.17:11:37 zpool create data raidz /dev/gpt/disk1.nop /dev/gpt/disk2.nop /dev/gpt/disk3.nop /dev/gpt/disk4.nop
    2013-02-25.17:11:41 zpool add data raidz /dev/gpt/disk5.nop /dev/gpt/disk6.nop /dev/gpt/disk7.nop /dev/gpt/disk8.nop
    2013-02-25.17:11:47 zpool add data raidz /dev/gpt/disk9.nop /dev/gpt/disk10.nop /dev/gpt/disk11.nop /dev/gpt/disk12.nop
    2013-02-25.17:11:53 zpool add data raidz /dev/gpt/disk13.nop /dev/gpt/disk14.nop /dev/gpt/disk15.nop /dev/gpt/disk16.nop
    2013-02-25.17:11:57 zpool add data raidz /dev/gpt/disk17.nop /dev/gpt/disk18.nop /dev/gpt/disk19.nop /dev/gpt/disk20.nop
    2013-02-25.17:12:02 zpool add data raidz /dev/gpt/disk21.nop /dev/gpt/disk22.nop /dev/gpt/disk23.nop /dev/gpt/disk24.nop
    2013-02-25.17:12:08 zpool add data spare /dev/gpt/disk25.nop /dev/gpt/disk26.nop
    2013-02-25.17:12:15 zpool add data log /dev/gpt/log.nop
    2013-02-25.17:12:19 zfs set checksum=fletcher4 data
    2013-02-25.17:12:22 zfs set compression=lzjb data
    2013-02-25.17:12:25 zfs set aclmode=passthrough data
    2013-02-25.17:12:30 zfs set aclinherit=passthrough data
    2013-02-25.17:13:25 zpool export data
    2013-02-25.17:15:33 zpool import -d /dev/gpt data
    2013-03-01.12:31:58 zpool add data cache /dev/gpt/cache.nop
    2013-03-15.12:22:22 zfs create data/XML_WORKFLOW
    2013-03-27.12:05:42 zfs create data/IMAGEQUIX
    2013-03-27.13:32:54 zfs create data/ROES_ORDERS
    2013-03-27.13:32:59 zfs create data/ROES_PRINTABLES
    2013-03-27.13:33:21 zfs destroy data/ROES_PRINTABLES
    2013-03-27.13:33:26 zfs create data/ROES_PRINTABLE

We had a
file structure drop off:

    /data/XML_WORKFLOW/XML_ORDERS/

around 5/2/2013 @ 17:00. In that directory, there were a few thousand
directories (containing images and a couple of metadata text/xml files).

What is odd is that doing a du -h in the parent XML_WORKFLOW directory
only reports ~150MB:

    # find . -type f | wc -l
    86
    # du -sh .
    130M    .

however, df reports 1.5GB:

    # df -h .
    Filesystem           Size    Used   Avail  Capacity  Mounted on
    data/XML_WORKFLOW     28T    1.5G     28T        0%  /data/XML_WORKFLOW

zdb -d shows:

    # zdb -d data/XML_WORKFLOW
    Dataset data/XML_WORKFLOW [ZPL], ID 139, cr_txg 339633, 1.53G, 212812 objects

Digging further into zdb, the path is missing for most of those objects:

    # zdb -ddddd data/XML_WORKFLOW 635248
    Dataset data/XML_WORKFLOW [ZPL], ID 139, cr_txg 339633, 1.53G, 212812 objects,
    rootbp DVA[0]=<5:b274264000:2000> DVA[1]=<0:b4d81a8000:2000> [L0 DMU objset]
    fletcher4 lzjb LE contiguous unique double size=800L/200P
    birth=1202311L/1202311P fill=212812
    cksum=16d24fb5aa:6c2e0aff6bc:129af90fe2eff:2612f938c5292b

        Object  lvl   iblk   dblk  dsize  lsize   %full  type
        635248    1    16K    512  6.00K    512  100.00  ZFS plain file
                                          168   bonus  System attributes
        dnode flags: USED_BYTES USERUSED_ACCOUNTED
        dnode maxblkid: 0
        path    ???
        uid     11258
        gid     10513
        atime   Thu May  2 17:31:26 2013
        mtime   Thu May  2 17:31:26 2013
        ctime   Thu May  2 17:31:26 2013
        crtime  Thu May  2 17:13:58 2013
        gen     1197180
        mode    100600
        size    52
        parent  635247
        links   1
        pflags  40800000005
    Indirect blocks:
         0 L0 3:a9da05a000:2000 200L/200P F=1 B=1197391/1197391
             segment [0000000000000000, 0000000000000200) size 512

The application that writes to this volume runs on a windows client; so
far, it has exhibited identical behavior across two zfs servers, but not
on a generic windows server 2003 network share.

The question is, what is happening to the data? Is it a samba issue? Is
it ZFS? I've enabled the samba full_audit module to track file
deletions, so I should have more information on that side.
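For anyone curious what that auditing setup can look like, a minimal
full_audit share stanza might resemble the following — illustrative only,
since the share name and audited operation names here are assumptions, and
the option syntax varies between Samba versions (check vfs_full_audit(8)
for the 3.5 branch):

```ini
[XML_WORKFLOW]
   path = /data/XML_WORKFLOW
   vfs objects = full_audit
   ; tag each audited call with user, client IP and share name
   full_audit:prefix = %u|%I|%S
   ; record successful deletes and renames; ignore failed attempts
   full_audit:success = unlink rmdir rename
   full_audit:failure = none
```

The audit records go to syslog, so the next time the directory tree
disappears the log should show which client issued the unlinks — or show
nothing, which would point the finger back at ZFS.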
If anyone has seen similar behavior, please let me know.

Mike C
From owner-freebsd-fs@FreeBSD.ORG Fri May 3 18:36:04 2013
Date: Fri, 3 May 2013 11:36:02 -0700
From: Jeremy Chadwick
To: Mike Carlson
Cc: freebsd-fs@freebsd.org
Subject: Re: zfs issue - disappearing data
Message-ID: <20130503183602.GA46512@icarus.home.lan>
In-Reply-To: <5183F739.2040908@bayphoto.com>

On Fri, May 03, 2013 at 10:43:21AM -0700, Mike Carlson wrote:
> {snipping parts I have no knowledge of}
>
> History for 'data':
> {snip}
> 2013-02-25.17:12:22 zfs set compression=lzjb data
>
> We had a file structure drop off:
>
> /data/XML_WORKFLOW/XML_ORDERS/
>
> around 5/2/2012 @ 17:00
>
> In that directory, there were a few thousand directories (containing
> images and a couple metadata text/xml files)
>
> What is odd, is doing a du -h in the parent XML_WORKFLOW directory,
> only reports ~150MB:
>
> # find . -type f | wc -l
> 86
> # du -sh .
> 130M .
>
> however, df reports 1.5GB:
>
> # df -h .
> Filesystem           Size  Used  Avail  Capacity  Mounted on
> data/XML_WORKFLOW     28T  1.5G    28T        0%  /data/XML_WORKFLOW

This is one of the side effects of ZFS compression. Google "zfs
compression df du freebsd". You'll find lots of chat about this. To be
clear: it is not a FreeBSD-specific thing.

You may also find the -A flag to du(1) useful.

-- 
| Jeremy Chadwick                                   jdc@koitsu.org |
| UNIX Systems Administrator                http://jdc.koitsu.org/ |
| Mountain View, CA, US                                            |
| Making life hard for others since 1977.             PGP 4BD6C0CB |
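Jeremy's point is that plain du reports blocks actually allocated, while
du -A (and df-style dataset accounting) sees logical bytes. The same
distinction can be demonstrated without ZFS using an ordinary sparse
file — a sketch using GNU stat flags, so on FreeBSD substitute
stat -f '%z %b' for the two calls:

```shell
# A 10 MiB sparse file: large logical size, almost nothing allocated.
f=$(mktemp)
truncate -s 10M "$f"

apparent=$(stat -c %s "$f")                 # logical bytes (what du -A reports)
allocated=$(( $(stat -c %b "$f") * 512 ))   # bytes actually allocated (plain du)

echo "apparent=$apparent allocated=$allocated"
rm -f "$f"
```

With lzjb compression enabled the numbers diverge the same way: du counts
the compressed on-disk blocks, so it can report far less than the logical
data that was written.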
From owner-freebsd-fs@FreeBSD.ORG Fri May 3 20:29:23 2013
Date: Fri, 3 May 2013 13:29:13 -0700
From: "Marc G. Fournier"
To: Adam McDougall
Cc: freebsd-fs@freebsd.org
Subject: Re: NFS Performance issue against NetApp
In-Reply-To: <5183074B.5090004@egr.msu.edu>

On 2013-05-02, at 17:39 , Adam McDougall wrote:

> My understanding of jboss is it unpacks your war files (or whatever) to
> a temp deploy dir but essentially tries to run everything from memory.
> If you replaced a war file, it would usually undeploy and redeploy. Is
> your jboss extracting the archives to an NFS dir or can you reconfigure
> or symlink it to extract to a local temp dir when starting up? I can't
> imagine offhand why it might be useful to store the temp dir on NFS. I
> would think most of the writes at startup would be to temp files that
> would be of no use after the jboss java process is stopped.

Unless I've missed something, jboss extracts the war when you do the
deploy, so subsequent restarts just use the extracted files and
shouldn't be slowed down by those writes … there are no other temp files
that I'm aware of … but, in this case, the problem is that we're running
jboss within a jail'd environment, and the jail is sitting on the NFS
server, so moving pieces of it to local drives isn't particularly
feasible …

From owner-freebsd-fs@FreeBSD.ORG Fri May 3 20:30:47 2013
Subject: Re: NFS Performance issue against NetApp
From: "Marc G. 
Fournier"
Date: Fri, 3 May 2013 13:30:44 -0700
Message-Id: <897E0179-A848-4103-9273-5F7257CFC50A@hub.org>
To: Chuck Burns
Cc: freebsd-fs@freebsd.org

On 2013-05-02, at 19:53 , Chuck Burns wrote:

> Here is another possibility. Most linux distros put /tmp on tmpfs,
> whereas FreeBSD by default uses actual disk space.

From what I can tell, nothing is actually being written to /tmp … but, I
just added mounting /tmp using tmpfs to my /etc/fstab file, *just in
case* … made no difference …

From owner-freebsd-fs@FreeBSD.ORG Fri May 3 20:44:28 2013
Subject: Re: Initial NFS Test: Linux vs FreeBSD (769% slower)
From: "Marc G. Fournier"
In-Reply-To: <20130502230817.GA10891@in-addr.com>
Date: Fri, 3 May 2013 13:44:24 -0700
Message-Id: <0E35DD34-B822-423E-B2F2-31A237A3D0CB@hub.org>
To: Gary Palmer
Cc: freebsd-fs@freebsd.org

On 2013-05-02, at 16:08 , Gary Palmer wrote:

> On Wed, May 01, 2013 at 09:19:28PM -0400, Rick Macklem wrote:
>> Well, I looked at the packet capture and, for some reason, it
>> repeatedly does a write of 1 byte to a file, followed by a read of
>> that file, over and over and ... again. I have no idea why the
>> app. does that.
>
> It might be worth running the Linux Java under FreeBSD Linux emulation
> to see if it is the app that is doing that or something in Java that
> doesn't like the FreeBSD version for some reason.
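The pattern Rick describes — a 1-byte write followed by a read of the
file, over and over — can be imitated with a toy loop like the one below.
This is purely a local sketch; the point is that over NFS each append/read
pair can become synchronous wire traffic, which is where the time goes:

```shell
# Pathological I/O pattern: append one byte, read the whole file back, repeat.
f=$(mktemp)
i=0
while [ "$i" -lt 200 ]; do
    printf 'x' >> "$f"          # 1-byte write
    cat "$f" > /dev/null        # read the file back
    i=$((i + 1))
done
size=$(wc -c < "$f")
echo "final size: $size bytes"
rm -f "$f"
```

Timing the same loop on a local disk versus inside the NFS-mounted jboss
directory should make the per-operation round-trip cost obvious.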
Good thought … same results … :(

From owner-freebsd-fs@FreeBSD.ORG Fri May 3 21:01:46 2013
Date: Fri, 3 May 2013 16:01:44 -0500
Subject: Re: NFS Performance issue against NetApp
From: Chuck Burns
To: "Marc G. 
Fournier"
Cc: freebsd-fs@freebsd.org

So, wait.. you're comparing a jail-over-nfs to.. what? Linux doesn't
have jails, so you aren't really making a fair comparison here.

On Fri, May 3, 2013 at 3:29 PM, Marc G. Fournier wrote:
>
> On 2013-05-02, at 17:39 , Adam McDougall wrote:
>
> > My understanding of jboss is it unpacks your war files (or whatever)
> > to a temp deploy dir but essentially tries to run everything from
> > memory. If you replaced a war file, it would usually undeploy and
> > redeploy. Is your jboss extracting the archives to an NFS dir or can
> > you reconfigure or symlink it to extract to a local temp dir when
> > starting up? I can't imagine offhand why it might be useful to store
> > the temp dir on NFS. I would think most of the writes at startup
> > would be to temp files that would be of no use after the jboss java
> > process is stopped.
>
> Unless I've missed something, jboss extracts the war when you do the
> deploy, so subsequent restarts just use the extracted files and
> shouldn't be slowed down by those writes … there are no other temp
> files that I'm aware of … but, in this case, the problem is that we're
> running jboss within a jail'd environment, and the jail is sitting on
> the NFS server, so moving pieces of it to local drives isn't
> particularly feasible …
>
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"

From owner-freebsd-fs@FreeBSD.ORG Fri May 3 21:11:39 2013
Date: Fri, 3 May 2013 14:11:34 -0700
Subject: Re: NFS Performance issue against NetApp
From: "Marc G. 
Fournier"
To: Chuck Burns
Cc: freebsd-fs@freebsd.org

On 2013-05-03, at 14:01 , Chuck Burns wrote:

> So, wait.. you're comparing a jail-over-nfs to.. what? linux doesnt
> have jails, so you aren't really making a fair comparison here.

Sorry, shouldn't have mentioned jails … that is the end goal, but I am
not using it for this testing … beyond the OS, I have these as close to
exactly the same as I can get it …

Linux:

    Filesystem                       1K-blocks      Used Available Use% Mounted on
    /dev/mapper/vg_server03-lv_root   51606140   1467992  47516708   3% /
    tmpfs                              8145884         0   8145884   0% /dev/shm
    /dev/sda1                           495844     52897    417347  12% /boot
    /dev/mapper/vg_server03-lv_home  228138292    191696 216357784   1% /home
    192.168.1.1:/vol/linux_jboss      31876736    328256  31548480   2% /usr/local/jboss-as-7.1.1.Final

FreeBSD:

    Filesystem                       1K-blocks      Used     Avail Capacity Mounted on
    /dev/da0p2                       279300632  19730076 237226508       8% /
    devfs                                    1         1         0     100% /dev
    192.168.1.1:/vol/freebsd_jboss    31876712   3570808  28305904      11% /usr/local/jboss-as-7.1.1.Final
    tmpfs                             19282976         4  19282972       0% /tmp

The only thing running off of NFS is the jboss directory, and both NFS
shares are the same size (32G) … I even try and avoid running tests
against each at the same time, so that neither are competing for network
or netapp resources … not "real world", 
but I am aiming to minimize any external influences where possible ...

From owner-freebsd-fs@FreeBSD.ORG Fri May 3 22:50:10 2013
Received: by platinum.linux.pl (Postfix, from userid 87) id 
5D79E47E1A; Sat, 4 May 2013 00:41:24 +0200 (CEST)
Message-ID: <51843D0E.2020907@platinum.linux.pl>
Date: Sat, 04 May 2013 00:41:18 +0200
From: Adam Nowacki
To: mike@bayphoto.com
Cc: freebsd-fs@freebsd.org
Subject: Re: zfs issue - disappearing data
In-Reply-To: <5183F739.2040908@bayphoto.com>

Looks like we have a leak with extended attributes:

    # zfs create -o mountpoint=/test root/test
    # touch /test/file1
    # setextattr user test abc /test/file1
    # zdb root/test
        Object  lvl   iblk   dblk  dsize  lsize   %full  type
             8    1    16K    512      0    512    0.00  ZFS plain file
             9    1    16K    512     1K    512  100.00  ZFS directory
            10    1    16K    512    512    512  100.00  ZFS plain file

object 8 - the file, object 9 - the extended attribute directory, object
10 - the value of the 'test' extended attribute

    # rm /test/file1
    # zdb root/test
        Object  lvl   iblk   dblk  dsize  lsize   %full  type
            10    1    16K    512    512    512  100.00  ZFS plain file

objects 8 and 9 are deleted, object 10 is still there (leaked).
On 2013-05-03 19:43, Mike Carlson wrote: > We had a critical issue with a zfs server that exports shares via samba > (3.5) last night > > system info: > uname -a > > FreeBSD zfs-1.discdrive.bayphoto.com 9.1-RELEASE FreeBSD 9.1-RELEASE > #0 r243825: Tue Dec 4 09:23:10 UTC 2012 > root@farrell.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC amd64 > > zpool history: > > History for 'data': > 2013-02-25.17:11:37 zpool create data raidz /dev/gpt/disk1.nop > /dev/gpt/disk2.nop /dev/gpt/disk3.nop /dev/gpt/disk4.nop > 2013-02-25.17:11:41 zpool add data raidz /dev/gpt/disk5.nop > /dev/gpt/disk6.nop /dev/gpt/disk7.nop /dev/gpt/disk8.nop > 2013-02-25.17:11:47 zpool add data raidz /dev/gpt/disk9.nop > /dev/gpt/disk10.nop /dev/gpt/disk11.nop /dev/gpt/disk12.nop > 2013-02-25.17:11:53 zpool add data raidz /dev/gpt/disk13.nop > /dev/gpt/disk14.nop /dev/gpt/disk15.nop /dev/gpt/disk16.nop > 2013-02-25.17:11:57 zpool add data raidz /dev/gpt/disk17.nop > /dev/gpt/disk18.nop /dev/gpt/disk19.nop /dev/gpt/disk20.nop > 2013-02-25.17:12:02 zpool add data raidz /dev/gpt/disk21.nop > /dev/gpt/disk22.nop /dev/gpt/disk23.nop /dev/gpt/disk24.nop > 2013-02-25.17:12:08 zpool add data spare /dev/gpt/disk25.nop > /dev/gpt/disk26.nop > 2013-02-25.17:12:15 zpool add data log /dev/gpt/log.nop > 2013-02-25.17:12:19 zfs set checksum=fletcher4 data > 2013-02-25.17:12:22 zfs set compression=lzjb data > 2013-02-25.17:12:25 zfs set aclmode=passthrough data > 2013-02-25.17:12:30 zfs set aclinherit=passthrough data > 2013-02-25.17:13:25 zpool export data > 2013-02-25.17:15:33 zpool import -d /dev/gpt data > 2013-03-01.12:31:58 zpool add data cache /dev/gpt/cache.nop > 2013-03-15.12:22:22 zfs create data/XML_WORKFLOW > 2013-03-27.12:05:42 zfs create data/IMAGEQUIX > 2013-03-27.13:32:54 zfs create data/ROES_ORDERS > 2013-03-27.13:32:59 zfs create data/ROES_PRINTABLES > 2013-03-27.13:33:21 zfs destroy data/ROES_PRINTABLES > 2013-03-27.13:33:26 zfs create data/ROES_PRINTABLE > > We had a file structure drop off: > > 
/data/XML_WORKFLOW/XML_ORDERS/ > > around 5/2/2012 @ 17:00 > > In that directory, there were a few thousand directories (containing > images and a couple metadata text/xml files) > > What is odd, is doing a du -h in the parent XML_WORKFLOW directory, only > reports ~150MB: > > # find . -type f |wc -l > 86 > # du -sh . > 130M . > > > however, df reports 1.5GB: > > # df -h . > Filesystem Size Used Avail Capacity Mounted on > data/XML_WORKFLOW 28T 1.5G 28T 0% /data/XML_WORKFLOW > > zdb -d shows: > > # zdb -d data/XML_WORKFLOW > Dataset data/XML_WORKFLOW [ZPL], ID 139, cr_txg 339633, 1.53G, > 212812 objects > > Digging further into zdb, the path is missing for most of those objects: > > # zdb -ddddd data/XML_WORKFLOW 635248 > Dataset data/XML_WORKFLOW [ZPL], ID 139, cr_txg 339633, 1.53G, > 212812 objects, rootbp DVA[0]=<5:b274264000:2000> > DVA[1]=<0:b4d81a8000:2000> [L0 DMU objset] fletcher4 lzjb LE > contiguous unique double size=800L/200P birth=1202311L/1202311P > fill=212812 cksum=16d24fb5aa:6c2e0aff6bc:129af90fe2eff:2612f938c5292b > > Object lvl iblk dblk dsize lsize %full type > 635248 1 16K 512 6.00K 512 100.00 ZFS plain file > 168 bonus System attributes > dnode flags: USED_BYTES USERUSED_ACCOUNTED > dnode maxblkid: 0 > path ??? > uid 11258 > gid 10513 > atime Thu May 2 17:31:26 2013 > mtime Thu May 2 17:31:26 2013 > ctime Thu May 2 17:31:26 2013 > crtime Thu May 2 17:13:58 2013 > gen 1197180 > mode 100600 > size 52 > parent 635247 > links 1 > pflags 40800000005 > Indirect blocks: > 0 L0 3:a9da05a000:2000 200L/200P F=1 B=1197391/1197391 > > segment [0000000000000000, 0000000000000200) size 512 > > The application that writes to this volume runs on a windows client, so > far, it has exhibited identical behavior across two zfs servers, but not > on a generic windows server 2003 network share. > > The question is, what is happening to the data. Is it a samba issue? Is > it ZFS? 
I've enabled the samba full_audit module to track file > deletions, so I should have more information on that side. > > If anyone has seen similar behavior please let me know > > Mike C From owner-freebsd-fs@FreeBSD.ORG Sat May 4 00:03:34 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 06117A85 for ; Sat, 4 May 2013 00:03:34 +0000 (UTC) (envelope-from mike@bayphoto.com) Received: from mx.got.net (mx6.mx3.got.net [207.111.237.45]) by mx1.freebsd.org (Postfix) with ESMTP id E47111C45 for ; Sat, 4 May 2013 00:03:33 +0000 (UTC) Received: from [10.250.12.27] (unknown [207.111.246.196]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mx.got.net (mx3.mx3.got.net) with ESMTP id 8699523A2BA; Fri, 3 May 2013 17:03:32 -0700 (PDT) Message-ID: <51845054.3020302@bayphoto.com> Date: Fri, 03 May 2013 17:03:32 -0700 From: Mike Carlson User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:17.0) Gecko/20130328 Thunderbird/17.0.5 MIME-Version: 1.0 To: Adam Nowacki Subject: Re: zfs issue - disappearing data References: <5183F739.2040908@bayphoto.com> <51843D0E.2020907@platinum.linux.pl> In-Reply-To: <51843D0E.2020907@platinum.linux.pl> Content-Type: multipart/signed; protocol="application/pkcs7-signature"; micalg=sha1; boundary="------------ms050303070303060602090004" Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list Reply-To: mike@bayphoto.com List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 04 May 2013 00:03:34 -0000 This is a cryptographically signed message in MIME format. --------------ms050303070303060602090004 Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: quoted-printable Interesting. Is that why zdb shows so many objects? 
Is this a configuration mistake, and would it lead to data loss? Can I provide any additional information?

Mike C

On 5/3/2013 3:41 PM, Adam Nowacki wrote:
> Looks like we have a leak with extended attributes:
>
> # zfs create -o mountpoint=/test root/test
> # touch /test/file1
> # setextattr user test abc /test/file1
> # zdb root/test
> Object lvl iblk dblk dsize lsize %full type
> 8 1 16K 512 0 512 0.00 ZFS plain file
> 9 1 16K 512 1K 512 100.00 ZFS directory
> 10 1 16K 512 512 512 100.00 ZFS plain file
>
> object 8 - the file,
> object 9 - extended attributes directory,
> object 10 - value of the 'test' extended attribute
>
> # rm /test/file1
> # zdb root/test
>
> Object lvl iblk dblk dsize lsize %full type
> 10 1 16K 512 512 512 100.00 ZFS plain file
>
> objects 8 and 9 are deleted, object 10 is still there (leaked).
>
> On 2013-05-03 19:43, Mike Carlson wrote:
>> We had a critical issue with a zfs server that exports shares via samba
>> (3.5) last night
>>
>> system info:
>> uname -a
>>
>> FreeBSD zfs-1.discdrive.bayphoto.com 9.1-RELEASE FreeBSD 9.1-RELEASE
>> #0 r243825: Tue Dec 4 09:23:10 UTC 2012
>> root@farrell.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC amd64
>>
>> zpool history:
>>
>> History for 'data':
>> 2013-02-25.17:11:37 zpool create data raidz /dev/gpt/disk1.nop
>> /dev/gpt/disk2.nop /dev/gpt/disk3.nop /dev/gpt/disk4.nop
>> 2013-02-25.17:11:41 zpool add data raidz /dev/gpt/disk5.nop
>> /dev/gpt/disk6.nop /dev/gpt/disk7.nop /dev/gpt/disk8.nop
>> 2013-02-25.17:11:47 zpool add data raidz /dev/gpt/disk9.nop
>> /dev/gpt/disk10.nop /dev/gpt/disk11.nop /dev/gpt/disk12.nop
>> 2013-02-25.17:11:53 zpool add data raidz /dev/gpt/disk13.nop
>> /dev/gpt/disk14.nop /dev/gpt/disk15.nop /dev/gpt/disk16.nop
>> 2013-02-25.17:11:57 zpool add data raidz /dev/gpt/disk17.nop
>> /dev/gpt/disk18.nop /dev/gpt/disk19.nop /dev/gpt/disk20.nop
>> 2013-02-25.17:12:02 zpool add data raidz /dev/gpt/disk21.nop
>> /dev/gpt/disk22.nop /dev/gpt/disk23.nop /dev/gpt/disk24.nop
>> 2013-02-25.17:12:08 zpool add data spare /dev/gpt/disk25.nop
>> /dev/gpt/disk26.nop
>> 2013-02-25.17:12:15 zpool add data log /dev/gpt/log.nop
>> 2013-02-25.17:12:19 zfs set checksum=fletcher4 data
>> 2013-02-25.17:12:22 zfs set compression=lzjb data
>> 2013-02-25.17:12:25 zfs set aclmode=passthrough data
>> 2013-02-25.17:12:30 zfs set aclinherit=passthrough data
>> 2013-02-25.17:13:25 zpool export data
>> 2013-02-25.17:15:33 zpool import -d /dev/gpt data
>> 2013-03-01.12:31:58 zpool add data cache /dev/gpt/cache.nop
>> 2013-03-15.12:22:22 zfs create data/XML_WORKFLOW
>> 2013-03-27.12:05:42 zfs create data/IMAGEQUIX
>> 2013-03-27.13:32:54 zfs create data/ROES_ORDERS
>> 2013-03-27.13:32:59 zfs create data/ROES_PRINTABLES
>> 2013-03-27.13:33:21 zfs destroy data/ROES_PRINTABLES
>> 2013-03-27.13:33:26 zfs create data/ROES_PRINTABLE
>>
>> We had a file structure drop off:
>>
>> /data/XML_WORKFLOW/XML_ORDERS/
>>
>> around 5/2/2012 @ 17:00
>>
>> In that directory, there were a few thousand directories (containing
>> images and a couple metadata text/xml files)
>>
>> What is odd, is doing a du -h in the parent XML_WORKFLOW directory, only
>> reports ~150MB:
>>
>> # find . -type f |wc -l
>> 86
>> # du -sh .
>> 130M .
>>
>>
>> however, df reports 1.5GB:
>>
>> # df -h .
>> Filesystem Size Used Avail Capacity Mounted on
>> data/XML_WORKFLOW 28T 1.5G 28T 0% /data/XML_WORKFLOW
>>
>> zdb -d shows:
>>
>> # zdb -d data/XML_WORKFLOW
>> Dataset data/XML_WORKFLOW [ZPL], ID 139, cr_txg 339633, 1.53G,
>> 212812 objects
>>
>> Digging further into zdb, the path is missing for most of those objects:
>>
>> # zdb -ddddd data/XML_WORKFLOW 635248
>> Dataset data/XML_WORKFLOW [ZPL], ID 139, cr_txg 339633, 1.53G,
>> 212812 objects, rootbp DVA[0]=<5:b274264000:2000>
>> DVA[1]=<0:b4d81a8000:2000> [L0 DMU objset] fletcher4 lzjb LE
>> contiguous unique double size=800L/200P birth=1202311L/1202311P
>> fill=212812
>> cksum=16d24fb5aa:6c2e0aff6bc:129af90fe2eff:2612f938c5292b
>>
>> Object lvl iblk dblk dsize lsize %full type
>> 635248 1 16K 512 6.00K 512 100.00 ZFS plain file
>> 168 bonus System attributes
>> dnode flags: USED_BYTES USERUSED_ACCOUNTED
>> dnode maxblkid: 0
>> path ???
>> uid 11258
>> gid 10513
>> atime Thu May 2 17:31:26 2013
>> mtime Thu May 2 17:31:26 2013
>> ctime Thu May 2 17:31:26 2013
>> crtime Thu May 2 17:13:58 2013
>> gen 1197180
>> mode 100600
>> size 52
>> parent 635247
>> links 1
>> pflags 40800000005
>> Indirect blocks:
>> 0 L0 3:a9da05a000:2000 200L/200P F=1 B=1197391/1197391
>>
>> segment [0000000000000000, 0000000000000200) size 512
>>
>> The application that writes to this volume runs on a windows client, so
>> far, it has exhibited identical behavior across two zfs servers, but not
>> on a generic windows server 2003 network share.
>>
>> The question is, what is happening to the data. Is it a samba issue? Is
>> it ZFS? I've enabled the samba full_audit module to track file
>> deletions, so I should have more information on that side.
>>
>> If anyone has seen similar behavior please let me know
>>
>> Mike C
>
From owner-freebsd-fs@FreeBSD.ORG Sat May 4 13:34:44 2013 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 962926D1 for ; Sat, 4 May 2013 13:34:44 +0000 (UTC) (envelope-from girgen@FreeBSD.org) Received: from melon.pingpong.net (melon.pingpong.net [79.136.116.200]) by mx1.freebsd.org (Postfix) with ESMTP id D5E501B8D for ; Sat, 4 May 2013 13:34:43 +0000 (UTC) Received: from girgBook.local (c-ce57e155.1525-1-64736c12.cust.bredbandsbolaget.se [85.225.87.206]) (using TLSv1 with cipher DHE-RSA-CAMELLIA256-SHA (256/256 bits)) (No client certificate requested) by melon.pingpong.net (Postfix) with ESMTPSA id C1BE416977; Sat, 4 May 2013 15:34:35 +0200 (CEST) Message-ID: <51850E69.5080508@FreeBSD.org> Date: Sat, 04 May 2013 15:34:33 +0200 From: Palle Girgensohn User-Agent: Postbox 3.0.8 (Macintosh/20130427) MIME-Version: 1.0 To: Kirk McKusick Subject: Re: leaking lots of unreferenced inodes (pg_xlog files?), maybe after moving tables and indexes to tablespace on different volume References: <201303160401.r2G41Um7026132@chez.mckusick.com> In-Reply-To: <201303160401.r2G41Um7026132@chez.mckusick.com> X-Enigmail-Version: 1.2.3 Content-Type: multipart/mixed; boundary="------------000608090702060709000701" X-Content-Filtered-By:
Mailman/MimeDel 2.1.14 Cc: freebsd-fs@FreeBSD.org, Jeff Roberson X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 04 May 2013 13:34:44 -0000 This is a multi-part message in MIME format. --------------000608090702060709000701 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1

Hi,

Just a quick ping on this issue: it is still happening and we are slowly filling up the disk again. It seems we will have to plan a remount within a month, given the current graphs. I will be back with info once we have remounted it, running your suggested scripts before and after. Is there anything else I can do to get more debug information?

Regards,
Palle

Kirk McKusick skrev:
> I don't know how, but somehow something is holding references to the
> removed files causing them to fail to be reclaimed.
>
> Could you run your system for a while to build up a new set of these
> files, then run a script with the `df -ih' as before. Then run
> `vmstat -m', `sysctl debug', and `fstat -f /usr' both before and after
> doing the umount/mount. Hopefully that will give us some more clues
> as to what is happening.
>
> And Jeff, if you have any ideas do speak up :-)
>
> Kirk McKusick

From owner-freebsd-fs@FreeBSD.ORG Sat May 4 18:58:43 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id DBEB6342 for ; Sat, 4 May 2013 18:58:43 +0000 (UTC) (envelope-from mike@bayphoto.com) Received: from mx.got.net (mx5.mx3.got.net [207.111.237.44]) by mx1.freebsd.org (Postfix) with ESMTP id C17EAABB for ; Sat, 4 May 2013 18:58:43 +0000 (UTC) Received: from [10.250.12.27] (unknown [207.111.246.196]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mx.got.net (mx1.mx3.got.net) with ESMTP id 56DF613F34; Fri, 3 May 2013 11:44:08 -0700 (PDT) Message-ID: <51840578.10103@bayphoto.com> Date: Fri, 03 May 2013 11:44:08 -0700 From: Mike Carlson User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:17.0) Gecko/20130328 Thunderbird/17.0.5 MIME-Version: 1.0 To: Jeremy Chadwick Subject: Re: zfs issue - disappearing data References: <5183F739.2040908@bayphoto.com> <20130503183602.GA46512@icarus.home.lan> In-Reply-To: <20130503183602.GA46512@icarus.home.lan> Content-Type: multipart/signed; protocol="application/pkcs7-signature"; micalg=sha1; boundary="------------ms080108080803060006050401" X-Content-Filtered-By: Mailman/MimeDel
2.1.14 Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list Reply-To: mike@bayphoto.com List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 04 May 2013 18:58:43 -0000 This is a cryptographically signed message in MIME format. --------------ms080108080803060006050401 Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: quoted-printable

On 5/3/2013 11:36 AM, Jeremy Chadwick wrote:
> On Fri, May 03, 2013 at 10:43:21AM -0700, Mike Carlson wrote:
>> {snipping parts I have no knowledge of}
>>
>> History for 'data':
>> {snip}
>> 2013-02-25.17:12:22 zfs set compression=lzjb data
>>
>> We had a file structure drop off:
>>
>> /data/XML_WORKFLOW/XML_ORDERS/
>>
>> around 5/2/2012 @ 17:00
>>
>> In that directory, there were a few thousand directories (containing
>> images and a couple metadata text/xml files)
>>
>> What is odd, is doing a du -h in the parent XML_WORKFLOW directory,
>> only reports ~150MB:
>>
>> # find . -type f |wc -l
>> 86
>> # du -sh .
>> 130M .
>>
>>
>> however, df reports 1.5GB:
>>
>> # df -h .
>> Filesystem Size Used Avail Capacity Mounted on
>> data/XML_WORKFLOW 28T 1.5G 28T 0% /data/XML_WORKFLOW
> This is one of the side effects of ZFS compression. Google "zfs
> compression df du freebsd". You'll find lots of chat about this. To be
> clear: it is not a FreeBSD-specific thing.
>
> You may also find the -A flag to du(1) useful.
>
Hey Jeremy, thanks for the reply!

I thought of that, and the discrepancy is too much.
du -Ah :

# du -Ah
 16k    ./XML_PRINTABLE/MC0012404
1.5k    ./XML_PRINTABLE/7172142/thumbnails
 14k    ./XML_PRINTABLE/7172142
 13k    ./XML_PRINTABLE/MC0012410
 11k    ./XML_PRINTABLE/MC0012403
 13k    ./XML_PRINTABLE/MC0012409
2.5k    ./XML_PRINTABLE/7172141/thumbnails
 15k    ./XML_PRINTABLE/7172141
 20k    ./XML_PRINTABLE/MC0012407
 12k    ./XML_PRINTABLE/MC0012408
512B    ./XML_PRINTABLE/7172144/thumbnails
6.5k    ./XML_PRINTABLE/7172144
4.0k    ./XML_PRINTABLE/INK0000281/thumbnails
 17k    ./XML_PRINTABLE/INK0000281
512B    ./XML_PRINTABLE/7172143/thumbnails
 74k    ./XML_PRINTABLE/7172143
 12k    ./XML_PRINTABLE/MC0012405
 13k    ./XML_PRINTABLE/MC0012406
239k    ./XML_PRINTABLE
512B    ./XML_CMD
512B    ./XML_REPORTS
512B    ./XML_ORDERS_TEST
512B    ./XML_ORDERS/MC0012405
512B    ./XML_ORDERS/MC0012408
512B    ./XML_ORDERS/MC0012402
512B    ./XML_ORDERS/MC0012406
512B    ./XML_ORDERS/MC0012410
512B    ./XML_ORDERS/MC0012403
512B    ./XML_ORDERS/MC0012409
512B    ./XML_ORDERS/MC0012404
272k    ./XML_ORDERS/7172141
512B    ./XML_ORDERS/MC0012407
454k    ./XML_ORDERS/7172142
134M    ./XML_ORDERS
512B    ./XML_INCOMING
512B    ./XML_PRINTABLE_TEST
5.0k    ./XML_JOBS/7172142/Preview
6.0k    ./XML_JOBS/7172142
 26k    ./XML_JOBS/INK0000281/Preview
 27k    ./XML_JOBS/INK0000281
 40k    ./XML_JOBS/MC0012410/Preview
 41k    ./XML_JOBS/MC0012410
 39k    ./XML_JOBS/MC0012409/Preview
 40k    ./XML_JOBS/MC0012409
116k    ./XML_JOBS
135M    .

# zfs get compressratio data/XML_WORKFLOW
NAME               PROPERTY       VALUE  SOURCE
data/XML_WORKFLOW  compressratio  1.04x  -

If I didn't know better, and had not already tried to unmount/mount the zvol in question, I would swear this looked like something mounting over the missing directories, similar to what can happen with a nfs mount that "disappears" on a client when there is a local directory of the same name.
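One way to tell whether the missing space is held by unlinked-but-allocated files (as the earlier `zdb -ddddd` dump with `path ???` suggests) rather than by compression is to count dnodes whose path zdb cannot resolve. This is my sketch, not something from the thread: the `count_unlinked` helper is an assumption, and the sample input is inlined for illustration; real use would be `zdb -dddd data/XML_WORKFLOW | count_unlinked`.

```shell
#!/bin/sh
# Count lines of a zdb per-object dump where the path field is "???",
# i.e. files that are still allocated but have no directory entry.
count_unlinked() {
    grep -c 'path[[:space:]]*???'
}

# Inlined sample mimicking the zdb output shown earlier in the thread:
count_unlinked <<'EOF'
	path	???
	uid	11258
	path	/XML_ORDERS/7172141/order.xml
	path	???
EOF
```

On the sample above this prints 2; a count close to the dataset's total object count (212812 here) would point at mass unlinking with leaked objects rather than a compression accounting artifact.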
--------------ms080108080803060006050401--

From owner-freebsd-fs@FreeBSD.ORG Sat May 4 19:08:52 2013
Date: Sat, 4 May 2013 15:08:43 -0400 (EDT)
From: Daniel Feenberg <feenberg@nber.org>
To: freebsd-fs@freebsd.org
Subject: Restarting exports disturbs NFS clients

When we change the exports file on our FreeBSD 9.1 fileserver and
signal mountd to reread the file:

    kill -HUP `cat /var/run/mountd.pid`

it kills the jobs on clients that have files open on the fileserver.
They terminate with an I/O error. The same thing happens if NFS is
restarted.

This is pretty inconvenient for users (and us). Is there a way around
this? We have noticed that a Linux fileserver can restart nfs without
disturbing clients (other than a short pause). The Linux restart
doesn't restart the locking mechanism - is that the difference? We
could do without locks, even without NFSv4, for that matter, if it
would let us change exports without disturbing users. Perhaps there is
an NFS shutdown procedure that we should be using?

Daniel Feenberg
NBER

From owner-freebsd-fs@FreeBSD.ORG Sat May 4 19:13:26 2013
Date: Sat, 4 May 2013 12:13:23 -0700
From: Jeremy Chadwick <jdc@koitsu.org>
To: Daniel Feenberg
Cc: freebsd-fs@freebsd.org
Subject: Re: Restarting exports disturbs NFS clients
Message-ID: <20130504191323.GA71065@icarus.home.lan>
On Sat, May 04, 2013 at 03:08:43PM -0400, Daniel Feenberg wrote:
>
> When we change the exports file on our FreeBSD 9.1 fileserver and
> signal mountd to reread the file:
>
> kill -HUP `cat /var/run/mountd.pid`
>
> it kills the jobs on clients that have files open on the fileserver.
> They terminate with an I/O error. The same thing happens if NFS is
> restarted.
>
> This is pretty inconvenient for users (and us). Is there a way
> around this? We have noticed that a Linux fileserver can restart nfs
> without disturbing clients (other than a short pause). The Linux
> restart doesn't restart the locking mechanism - is that the
> difference? We could do without locks, even without NFSv4, for that
> matter, if it would let us change exports without disturbing users.
> Perhaps there is an NFS shutdown procedure that we should be using?

http://svnweb.freebsd.org/base/stable/9/usr.sbin/mountd/mountd.c?view=log

See commit r243739.

TL;DR -- Try running stable/9 instead of 9.1-RELEASE.
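[Editor's note] The reload step being discussed is just a SIGHUP to mountd. Independent of the r243739 fix, one small mitigation is to signal mountd only when the exports file content actually changed, so clients are never disturbed by no-op reloads. A minimal sketch — the function name and argument layout are illustrative; the default pidfile is the stock FreeBSD location:

```shell
#!/bin/sh
# Sketch: re-read exports only when the file content actually changed.
# reload_exports <current-exports> <candidate-exports> [<mountd-pidfile>]
reload_exports() {
    exports=$1
    newfile=$2
    pidfile=${3:-/var/run/mountd.pid}
    if cmp -s "$exports" "$newfile"; then
        return 1                          # unchanged: leave mountd alone
    fi
    cp "$newfile" "$exports" &&
        kill -HUP "$(cat "$pidfile")"     # mountd re-reads exports on SIGHUP
}
```

With the stock locations this would be invoked as `reload_exports /etc/exports /etc/exports.new`. It does not address the open-file I/O errors themselves; that is what the mountd change in r243739 targets.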
--
| Jeremy Chadwick                                   jdc@koitsu.org |
| UNIX Systems Administrator                http://jdc.koitsu.org/ |
| Mountain View, CA, US                                            |
| Making life hard for others since 1977.             PGP 4BD6C0CB |

From owner-freebsd-fs@FreeBSD.ORG Sat May 4 19:15:18 2013
Date: Fri, 03 May 2013 11:54:36 -0700
From: Mike Carlson <mike@bayphoto.com>
To: freebsd-fs@freebsd.org
Subject: Re: zfs issue - disappearing data
Message-ID: <518407EC.9090503@bayphoto.com>
In-Reply-To: <20130503183602.GA46512@icarus.home.lan>

On 5/3/2013 11:36 AM, Jeremy Chadwick wrote:
> On Fri, May 03, 2013 at 10:43:21AM -0700, Mike Carlson wrote:
>> {snipping parts I have no knowledge of}
>>
>> History for 'data':
>> {snip}
>> 2013-02-25.17:12:22 zfs set compression=lzjb data
>>
>> We had a file structure drop off:
>> /data/XML_WORKFLOW/XML_ORDERS/
>>
>> around 5/2/2013 @ 17:00
>>
>> In that directory, there were a few thousand directories (containing
>> images and a couple metadata text/xml files)
>>
>> What is odd, is doing a du -h in the parent XML_WORKFLOW directory,
>> only reports ~150MB:
>>
>> # find . -type f |wc -l
>> 86
>> # du -sh .
>> 130M .
>>
>> however, df reports 1.5GB:
>>
>> # df -h .
>> Filesystem           Size    Used   Avail Capacity  Mounted on
>> data/XML_WORKFLOW     28T    1.5G     28T     0%    /data/XML_WORKFLOW
> This is one of the side effects of ZFS compression. Google "zfs
> compression df du freebsd". You'll find lots of chat about this. To be
> clear: it is not a FreeBSD-specific thing.
>
> You may also find the -A flag to du(1) useful.
>
Hey Jeremy, thanks for the reply!

I thought of that, and the discrepancy is too much. du -Ah:

# du -Ah
 16k    ./XML_PRINTABLE/MC0012404
1.5k    ./XML_PRINTABLE/7172142/thumbnails
 14k    ./XML_PRINTABLE/7172142
 13k    ./XML_PRINTABLE/MC0012410
 11k    ./XML_PRINTABLE/MC0012403
 13k    ./XML_PRINTABLE/MC0012409
2.5k    ./XML_PRINTABLE/7172141/thumbnails
 15k    ./XML_PRINTABLE/7172141
 20k    ./XML_PRINTABLE/MC0012407
 12k    ./XML_PRINTABLE/MC0012408
512B    ./XML_PRINTABLE/7172144/thumbnails
6.5k    ./XML_PRINTABLE/7172144
4.0k    ./XML_PRINTABLE/INK0000281/thumbnails
 17k    ./XML_PRINTABLE/INK0000281
512B    ./XML_PRINTABLE/7172143/thumbnails
 74k    ./XML_PRINTABLE/7172143
 12k    ./XML_PRINTABLE/MC0012405
 13k    ./XML_PRINTABLE/MC0012406
239k    ./XML_PRINTABLE
512B    ./XML_CMD
512B    ./XML_REPORTS
512B    ./XML_ORDERS_TEST
512B    ./XML_ORDERS/MC0012405
512B    ./XML_ORDERS/MC0012408
512B    ./XML_ORDERS/MC0012402
512B    ./XML_ORDERS/MC0012406
512B    ./XML_ORDERS/MC0012410
512B    ./XML_ORDERS/MC0012403
512B    ./XML_ORDERS/MC0012409
512B    ./XML_ORDERS/MC0012404
272k    ./XML_ORDERS/7172141
512B    ./XML_ORDERS/MC0012407
454k    ./XML_ORDERS/7172142
134M    ./XML_ORDERS
512B    ./XML_INCOMING
512B    ./XML_PRINTABLE_TEST
5.0k    ./XML_JOBS/7172142/Preview
6.0k    ./XML_JOBS/7172142
 26k    ./XML_JOBS/INK0000281/Preview
 27k    ./XML_JOBS/INK0000281
 40k    ./XML_JOBS/MC0012410/Preview
 41k    ./XML_JOBS/MC0012410
 39k    ./XML_JOBS/MC0012409/Preview
 40k    ./XML_JOBS/MC0012409
116k    ./XML_JOBS
135M    .

# zfs get compressratio data/XML_WORKFLOW
NAME               PROPERTY       VALUE  SOURCE
data/XML_WORKFLOW  compressratio  1.04x  -

If I didn't know better, and had not already tried to unmount/mount
the zvol in question, I would swear this looked like something
mounting over the missing directories, similar to what can happen with
an NFS mount that "disappears" on a client when there is a local
directory of the same name.
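[Editor's note] The du vs. df gap above hinges on which quantity du counts: plain du reports allocated blocks (st_blocks), while du -A reports apparent file size (st_size), and compression or sparse regions make the two diverge. A standalone illustration of the same effect using a sparse file — not ZFS-specific, and the GNU/BSD stat fallback is the only portability assumption:

```shell
#!/bin/sh
# Sketch: apparent size (st_size, what du -A sums) versus allocated
# space (st_blocks * 512, what plain du sums) for a sparse file.
f=$(mktemp)
truncate -s 1M "$f"   # 1 MiB apparent size, but no data blocks written
# stat syntax differs: GNU coreutils uses -c FORMAT, BSD stat uses -f
apparent=$(stat -c %s "$f" 2>/dev/null || stat -f %z "$f")
allocated=$(( $(stat -c %b "$f" 2>/dev/null || stat -f %b "$f") * 512 ))
echo "apparent=$apparent allocated=$allocated"
rm -f "$f"
```

On a compressed ZFS dataset the divergence runs the same way: allocated space shrinks below apparent size, which is why the thread checks `zfs get compressratio` before ruling compression out.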
From owner-freebsd-fs@FreeBSD.ORG Sat May 4 21:23:44 2013
Date: Sat, 4 May 2013 17:23:37 -0400 (EDT)
From: Rick Macklem <rmacklem@uoguelph.ca>
To: Jeremy Chadwick
Cc: freebsd-fs@freebsd.org
Subject: Re: Restarting exports disturbs NFS clients
Message-ID: <942531517.122341.1367702617117.JavaMail.root@erie.cs.uoguelph.ca>
In-Reply-To: <20130504191323.GA71065@icarus.home.lan>

Jeremy Chadwick wrote:
> On Sat, May 04, 2013 at 03:08:43PM -0400, Daniel Feenberg wrote:
> >
> > When we change the exports file on our FreeBSD 9.1 fileserver and
> > signal mountd to reread the file:
> >
> > kill -HUP `cat /var/run/mountd.pid`
> >
> > it kills the jobs on clients that have files open on the fileserver.
> > They terminate with an I/O error. The same thing happens if NFS is
> > restarted.
> >
> > This is pretty inconvenient for users (and us). Is there a way
> > around this? We have noticed that a Linux fileserver can restart nfs
> > without disturbing clients (other than a short pause). The Linux
> > restart doesn't restart the locking mechanism - is that the
> > difference? We could do without locks, even without NFSv4, for that
> > matter, if it would let us change exports without disturbing users.
> > Perhaps there is an NFS shutdown procedure that we should be using?
>
> http://svnweb.freebsd.org/base/stable/9/usr.sbin/mountd/mountd.c?view=log
>
> See commit r243739.
>
> TL;DR -- Try running stable/9 instead of 9.1-RELEASE.
>
If the above doesn't work for you, the other alternative is to switch
from using mountd to nfse, which can be found on sourceforge.

rick

> --
> | Jeremy Chadwick                                   jdc@koitsu.org |
> | UNIX Systems Administrator                http://jdc.koitsu.org/ |
> | Mountain View, CA, US                                            |
> | Making life hard for others since 1977.
From owner-freebsd-fs@FreeBSD.ORG Sat May 4 21:39:00 2013
Date: Sat, 4 May 2013 21:38:59 GMT
From: linimon@FreeBSD.org
To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org
Subject: Re: kern/178329: [zfs] extended attributes leak
Message-Id: <201305042138.r44LcxwC080504@freefall.freebsd.org>

Synopsis: [zfs] extended attributes leak

Responsible-Changed-From-To: freebsd-bugs->freebsd-fs
Responsible-Changed-By: linimon
Responsible-Changed-When: Sat May 4 21:38:43 UTC 2013
Responsible-Changed-Why:
Over to maintainer(s).
http://www.freebsd.org/cgi/query-pr.cgi?pr=178329

From owner-freebsd-fs@FreeBSD.ORG Sat May 4 22:10:02 2013
Date: Sat, 4 May 2013 22:10:01 GMT
To: freebsd-fs@FreeBSD.org
From: Adam Nowacki
Subject: Re: kern/178329: [zfs] extended attributes leak
Message-Id: <201305042210.r44MA1V9085825@freefall.freebsd.org>

The following reply was made to PR kern/178329; it has been noted by GNATS.

From: Adam Nowacki
To: Andriy Gapon
Cc: bug-followup@FreeBSD.org
Subject: Re: kern/178329: [zfs] extended attributes leak
Date: Sun, 05 May 2013 00:00:36 +0200

On 2013-05-04 19:26, Andriy Gapon wrote:
> Can not reproduce with head code.

Appears to be fixed in 9.1-STABLE too. But the problem of already
leaked objects remains:

1) zpool scrub will not delete leaked objects,
2) zfs send will include leaked objects.

Maybe patch scrub to detect and remove leaked objects (sysctl flag
disabled by default)?
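[Editor's note] The "patch scrub to detect leaked objects" idea boils down to a reachability check: walk each object's parent chain and flag anything that never reaches the dataset root. A toy sketch of that check over an (object, parent) table on stdin — the helper name and the "root" sentinel are illustrative, not zdb or scrub syntax; the object numbers in the comment mirror the example in the thread (8 = file, 9 = xattr directory, 10 = xattr value):

```shell
#!/bin/sh
# Sketch: read "object parent" pairs, report objects whose parent chain
# never reaches "root".  After the file (8) and its xattr dir (9) are
# deleted, the orphaned xattr value (10) no longer chains to root.
detect_leaks() {
    awk '
        { parent[$1] = $2 }
        END {
            for (o in parent) {
                p = o; hops = 0
                while (p in parent && hops++ < 100)   # bounded walk
                    p = parent[p]
                if (p != "root") print o " leaked"
            }
        }'
}
```

For example, `printf '10 9\n' | detect_leaks` flags object 10 because its parent (9) is gone; the same table with 9 present and chained to root reports nothing.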
From owner-freebsd-fs@FreeBSD.ORG Sat May 4 23:33:15 2013
Date: Sat, 04 May 2013 16:33:13 -0700
From: Mike Carlson <mike@bayphoto.com>
To: freebsd-fs@freebsd.org
Subject: Re: zfs issue - disappearing data
Message-ID: <51859AB9.6040804@bayphoto.com>
In-Reply-To: <51845054.3020302@bayphoto.com>

This is a cryptographically signed message in MIME format.

--------------ms000105080501020404010507
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

Just to update the list and close this thread, this issue as it turns
out was not ZFS related, but the application that reads/writes to the
CIFS share.
Good to know about the leaking extended attributes though, and of
course, the great responses from Adam and Jeremy.

Thanks again,
Mike C

On 5/3/2013 5:03 PM, Mike Carlson wrote:
> Interesting.
>
> Is that why zdb shows so many objects?
>
> Is this a configuration mistake, and would it lead to data loss?
>
> Can I provide any additional information?
>
> Mike C
>
> On 5/3/2013 3:41 PM, Adam Nowacki wrote:
>> Looks like we have a leak with extended attributes:
>>
>> # zfs create -o mountpoint=/test root/test
>> # touch /test/file1
>> # setextattr user test abc /test/file1
>> # zdb root/test
>>
>>     Object  lvl  iblk  dblk  dsize  lsize   %full  type
>>          8    1   16K   512      0    512    0.00  ZFS plain file
>>          9    1   16K   512     1K    512  100.00  ZFS directory
>>         10    1   16K   512    512    512  100.00  ZFS plain file
>>
>> object 8 - the file,
>> object 9 - extended attributes directory,
>> object 10 - value of the 'test' extended attribute
>>
>> # rm /test/file1
>> # zdb root/test
>>
>>     Object  lvl  iblk  dblk  dsize  lsize   %full  type
>>         10    1   16K   512    512    512  100.00  ZFS plain file
>>
>> objects 8 and 9 are deleted, object 10 is still there (leaked).
>>
>> On 2013-05-03 19:43, Mike Carlson wrote:
>>> We had a critical issue with a zfs server that exports shares via
>>> samba (3.5) last night
>>>
>>> system info:
>>>
>>> uname -a
>>>
>>> FreeBSD zfs-1.discdrive.bayphoto.com 9.1-RELEASE FreeBSD 9.1-RELEASE
>>> #0 r243825: Tue Dec 4 09:23:10 UTC 2012
>>> root@farrell.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC amd64
>>>
>>> zpool history:
>>>
>>> History for 'data':
>>> 2013-02-25.17:11:37 zpool create data raidz /dev/gpt/disk1.nop
>>> /dev/gpt/disk2.nop /dev/gpt/disk3.nop /dev/gpt/disk4.nop
>>> 2013-02-25.17:11:41 zpool add data raidz /dev/gpt/disk5.nop
>>> /dev/gpt/disk6.nop /dev/gpt/disk7.nop /dev/gpt/disk8.nop
>>> 2013-02-25.17:11:47 zpool add data raidz /dev/gpt/disk9.nop
>>> /dev/gpt/disk10.nop /dev/gpt/disk11.nop /dev/gpt/disk12.nop
>>> 2013-02-25.17:11:53 zpool add data raidz /dev/gpt/disk13.nop
>>> /dev/gpt/disk14.nop /dev/gpt/disk15.nop /dev/gpt/disk16.nop
>>> 2013-02-25.17:11:57 zpool add data raidz /dev/gpt/disk17.nop
>>> /dev/gpt/disk18.nop /dev/gpt/disk19.nop /dev/gpt/disk20.nop
>>> 2013-02-25.17:12:02 zpool add data raidz /dev/gpt/disk21.nop
>>> /dev/gpt/disk22.nop /dev/gpt/disk23.nop /dev/gpt/disk24.nop
>>> 2013-02-25.17:12:08 zpool add data spare /dev/gpt/disk25.nop
>>> /dev/gpt/disk26.nop
>>> 2013-02-25.17:12:15 zpool add data log /dev/gpt/log.nop
>>> 2013-02-25.17:12:19 zfs set checksum=fletcher4 data
>>> 2013-02-25.17:12:22 zfs set compression=lzjb data
>>> 2013-02-25.17:12:25 zfs set aclmode=passthrough data
>>> 2013-02-25.17:12:30 zfs set aclinherit=passthrough data
>>> 2013-02-25.17:13:25 zpool export data
>>> 2013-02-25.17:15:33 zpool import -d /dev/gpt data
>>> 2013-03-01.12:31:58 zpool add data cache /dev/gpt/cache.nop
>>> 2013-03-15.12:22:22 zfs create data/XML_WORKFLOW
>>> 2013-03-27.12:05:42 zfs create data/IMAGEQUIX
>>> 2013-03-27.13:32:54 zfs create data/ROES_ORDERS
>>> 2013-03-27.13:32:59 zfs create data/ROES_PRINTABLES
>>> 2013-03-27.13:33:21 zfs destroy data/ROES_PRINTABLES
>>> 2013-03-27.13:33:26 zfs create data/ROES_PRINTABLE
>>>
>>> We had a file structure drop off:
>>>
>>> /data/XML_WORKFLOW/XML_ORDERS/
>>>
>>> around 5/2/2013 @ 17:00
>>>
>>> In that directory, there were a few thousand directories (containing
>>> images and a couple metadata text/xml files)
>>>
>>> What is odd, is doing a du -h in the parent XML_WORKFLOW directory,
>>> only reports ~150MB:
>>>
>>> # find . -type f |wc -l
>>> 86
>>> # du -sh .
>>> 130M .
>>>
>>> however, df reports 1.5GB:
>>>
>>> # df -h .
>>> Filesystem           Size    Used   Avail Capacity  Mounted on
>>> data/XML_WORKFLOW     28T    1.5G     28T     0%    /data/XML_WORKFLOW
>>>
>>> zdb -d shows:
>>>
>>> # zdb -d data/XML_WORKFLOW
>>> Dataset data/XML_WORKFLOW [ZPL], ID 139, cr_txg 339633, 1.53G,
>>> 212812 objects
>>>
>>> Digging further into zdb, the path is missing for most of those
>>> objects:
>>>
>>> # zdb -ddddd data/XML_WORKFLOW 635248
>>> Dataset data/XML_WORKFLOW [ZPL], ID 139, cr_txg 339633, 1.53G,
>>> 212812 objects, rootbp DVA[0]=<5:b274264000:2000>
>>> DVA[1]=<0:b4d81a8000:2000> [L0 DMU objset] fletcher4 lzjb LE
>>> contiguous unique double size=800L/200P birth=1202311L/1202311P
>>> fill=212812
>>> cksum=16d24fb5aa:6c2e0aff6bc:129af90fe2eff:2612f938c5292b
>>>
>>> Object  lvl  iblk  dblk  dsize  lsize   %full  type
>>> 635248    1   16K   512  6.00K    512  100.00  ZFS plain file
>>> 168 bonus System attributes
>>> dnode flags: USED_BYTES USERUSED_ACCOUNTED
>>> dnode maxblkid: 0
>>> path    ???
>>> uid     11258
>>> gid     10513
>>> atime   Thu May  2 17:31:26 2013
>>> mtime   Thu May  2 17:31:26 2013
>>> ctime   Thu May  2 17:31:26 2013
>>> crtime  Thu May  2 17:13:58 2013
>>> gen     1197180
>>> mode    100600
>>> size    52
>>> parent  635247
>>> links   1
>>> pflags  40800000005
>>> Indirect blocks:
>>> 0 L0 3:a9da05a000:2000 200L/200P F=1 B=1197391/1197391
>>>
>>> segment [0000000000000000, 0000000000000200) size 512
>>>
>>> The application that writes to this volume runs on a windows client,
>>> so far, it has exhibited identical behavior across two zfs servers,
>>> but not on a generic windows server 2003 network share.
>>>
>>> The question is, what is happening to the data. Is it a samba issue?
>>> Is it ZFS? I've enabled the samba full_audit module to track file
>>> deletions, so I should have more information on that side.
>>>
>>> If anyone has seen similar behavior please let me know
>>>
>>> Mike C
>>
>

--------------ms000105080501020404010507
Content-Type: application/pkcs7-signature; name="smime.p7s"
Content-Disposition: attachment; filename="smime.p7s"
Content-Description: S/MIME Cryptographic Signature
--------------ms000105080501020404010507--