From owner-freebsd-fs@FreeBSD.ORG Mon Jan 12 07:09:17 2015
Date: Sun, 11 Jan 2015 23:06:14 -0800
From: Simon Walton
To: freebsd-fs@freebsd.org
Subject: Crash with zfs upgrade -a

The system has a ZFS root drive. I upgraded from 8.3 (amd64) to 10.1.
Everything went smoothly, but zpool/zfs upgrade recommended upgrading the
root drive. "zpool upgrade -a" worked fine, but "zfs upgrade -a" caused the
system to freeze for a few seconds and then reboot. There was no panic or
crash log. The system still boots fine, but the filesystems report as
version 1. Should I attempt to upgrade them again?

Thanks,
Simon
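A minimal way to see how far the interrupted upgrade got before retrying is
to list the on-disk filesystem versions; this is a generic sketch rather than
anything from the thread, and it assumes the root pool is named zroot as in a
default install (zroot/var is only an example dataset name):

    # List the ZPL version of every dataset; anything still reporting 1
    # was not touched by the interrupted "zfs upgrade -a".
    zfs get -r -o name,value version zroot

    # Retrying one dataset at a time, instead of -a, narrows down which
    # dataset triggers the reboot.
    zfs upgrade zroot/var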
From owner-freebsd-fs@FreeBSD.ORG Mon Jan 12 20:20:13 2015
Date: Mon, 12 Jan 2015 15:56:21 -0400
From: Joseph Mingrone
To: freebsd-fs@freebsd.org
Subject: memory exhaustion on 10.1 AMD64 ZFS storage system

Hello,

We've had this storage system running 9.x without problems. After upgrading
to 10.1 we've seen "out of swap space" messages in the logs.

Dec 13 04:29:12 storage2 kernel: pid 723 (rpc.statd), uid 0, was killed: out of swap space
...
Jan 11 23:23:51 storage2 kernel: pid 642 (mountd), uid 0, was killed: out of swap space

What's the best way to determine if this is a ZFS problem? I've read in the
10.1 release notes that vfs.zfs.zio.use_uma has been re-enabled. Has this
caused anyone problems with 10.1? Below is information about the server.

Joseph

# cat /boot/loader.conf
zfs_load=YES
vfs.root.mountfrom="zfs:zroot"
vfs.zfs.arc_max=24G

# zfs-stats -F
------------------------------------------------------------------------
ZFS Subsystem Report                            Mon Jan 12 15:52:21 2015
------------------------------------------------------------------------
System Information:
  Kernel Version:                       1001000 (osreldate)
  Hardware Platform:                    amd64
  Processor Architecture:               amd64
FreeBSD 10.1-RELEASE #0 r274401: Tue Nov 11 21:02:49 UTC 2014 root
 3:52PM  up 30 mins, 1 user, load averages: 0.14, 0.15, 0.14
------------------------------------------------------------------------

# zfs-stats -M
------------------------------------------------------------------------
ZFS Subsystem Report                            Mon Jan 12 15:52:56 2015
------------------------------------------------------------------------
System Memory Statistics:
  Physical Memory:                      32706.64M
  Kernel Memory:                        164.14M
  DATA:                         84.30%  138.38M
  TEXT:                         15.70%  25.76M
------------------------------------------------------------------------

# zfs-stats -p
------------------------------------------------------------------------
ZFS Subsystem Report                            Mon Jan 12 15:53:20 2015
------------------------------------------------------------------------
ZFS pool information:
  Storage pool Version (spa):           5000
  Filesystem Version (zpl):             5
------------------------------------------------------------------------

# zfs-stats -A
------------------------------------------------------------------------
ZFS Subsystem Report                            Mon Jan 12 15:53:43 2015
------------------------------------------------------------------------
ARC Misc:
  Deleted:                              20
  Recycle Misses:                       0
  Mutex Misses:                         0
  Evict Skips:                          0
ARC Size:
  Current Size (arcsize):       0.17%   40.87M
  Target Size (Adaptive, c):    100.00% 24576.00M
  Min Size (Hard Limit, c_min): 12.50%  3072.00M
  Max Size (High Water, c_max): ~8:1    24576.00M
ARC Size Breakdown:
  Recently Used Cache Size (p): 50.00%  12288.00M
  Freq. Used Cache Size (c-p):  50.00%  12288.00M
ARC Hash Breakdown:
  Elements Max:                         1583
  Elements Current:             100.00% 1583
  Collisions:                           0
  Chain Max:                            0
  Chains:                               0
ARC Eviction Statistics:
  Evicts Total:                         172032
  Evicts Eligible for L2:       97.62%  167936
  Evicts Ineligible for L2:     2.38%   4096
  Evicts Cached to L2:                  0
ARC Efficiency
  Cache Access Total:                   44696
  Cache Hit Ratio:              95.38%  42632
  Cache Miss Ratio:             4.62%   2064
  Actual Hit Ratio:             85.21%  38084
  Data Demand Efficiency:               97.50%
  Data Prefetch Efficiency:             8.51%
  CACHE HITS BY CACHE LIST:
    Anonymously Used:           10.67%  4548
    Most Recently Used (mru):   39.98%  17044
    Most Frequently Used (mfu): 49.35%  21040
    MRU Ghost (mru_ghost):      0.00%   0
    MFU Ghost (mfu_ghost):      0.00%   0
  CACHE HITS BY DATA TYPE:
    Demand Data:                48.37%  20619
    Prefetch Data:              0.01%   4
    Demand Metadata:            40.97%  17465
    Prefetch Metadata:          10.66%  4544
  CACHE MISSES BY DATA TYPE:
    Demand Data:                25.63%  529
    Prefetch Data:              2.08%   43
    Demand Metadata:            52.18%  1077
    Prefetch Metadata:          20.11%  415
------------------------------------------------------------------------

# zpool list
NAME    SIZE  ALLOC   FREE  FRAG  EXPANDSZ  CAP  DEDUP  HEALTH  ALTROOT
tank   24.5T  11.1T  13.4T   14%         -  45%  1.00x  ONLINE  -
zroot  55.5G  6.11G  49.4G    5%         -  11%  1.00x  ONLINE  -

# zpool get "all" tank
NAME  PROPERTY                       VALUE                SOURCE
tank  size                           24.5T                -
tank  capacity                       45%                  -
tank  altroot                        -                    default
tank  health                         ONLINE               -
tank  guid                           8322714406813719098  default
tank  version                        -                    default
tank  bootfs                         -                    default
tank  delegation                     on                   default
tank  autoreplace                    off                  default
tank  cachefile                      -                    default
tank  failmode                       wait                 default
tank  listsnapshots                  off                  default
tank  autoexpand                     off                  default
tank  dedupditto                     0                    default
tank  dedupratio                     1.00x                -
tank  free                           13.4T                -
tank  allocated                      11.1T                -
tank  readonly                       off                  -
tank  comment                        -                    default
tank  expandsize                     0                    -
tank  freeing                        0                    default
tank  fragmentation                  14%                  -
tank  leaked                         0                    default
tank  feature@async_destroy          enabled              local
tank  feature@empty_bpobj            enabled              local
tank  feature@lz4_compress           active               local
tank  feature@multi_vdev_crash_dump  enabled              local
tank  feature@spacemap_histogram     active               local
tank  feature@enabled_txg            active               local
tank  feature@hole_birth             active               local
tank  feature@extensible_dataset     enabled              local
tank  feature@embedded_data          active               local
tank  feature@bookmarks              enabled              local
tank  feature@filesystem_limits      enabled              local

# zdb -C tank
MOS Configuration:
        version: 5000
        name: 'tank'
        state: 0
        txg: 12614760
        pool_guid: 8322714406813719098
        hostid: 1722087693
        hostname: 'storage2.mathstat.dal.ca'
        vdev_children: 1
        vdev_tree:
            type: 'root'
            id: 0
            guid: 8322714406813719098
            children[0]:
                type: 'raidz'
                id: 0
                guid: 5865699514822950384
                nparity: 3
                metaslab_array: 31
                metaslab_shift: 37
                ashift: 12
                asize: 27005292380160
                is_log: 0
                create_txg: 4
                children[0]:
                    type: 'disk'
                    id: 0
                    guid: 6285638336980483158
                    path: '/dev/label/storage_disk0'
                    phys_path: '/dev/label/storage_disk0'
                    whole_disk: 1
                    DTL: 106
                    create_txg: 4
                children[1]:
                    type: 'disk'
                    id: 1
                    guid: 9541693314532360771
                    path: '/dev/label/storage_disk1'
                    phys_path: '/dev/label/storage_disk1'
                    whole_disk: 1
                    DTL: 105
                    create_txg: 4
                children[2]:
                    type: 'disk'
                    create_txg: 4
                children[0]:
                    type: 'disk'
                    id: 0
                    guid: 310723121207304329
                    path: '/dev/gpt/disk0'
                    phys_path: '/dev/gpt/disk0'
                    whole_disk: 1
                    create_txg: 4
                children[1]:
                    type: 'disk'
                    id: 1
                    guid: 16696203411283061195
                    path: '/dev/gpt/disk1'
                    phys_path: '/dev/gpt/disk1'
                    whole_disk: 1
                    create_txg: 4
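On the "is this a ZFS problem?" question, one low-effort approach is to watch
where the memory goes after boot by comparing the ARC against wired memory
and the kernel UMA zones over time. A rough sketch using stock 10.x sysctls
(nothing here is specific to this box):

    # Current ARC size versus its configured cap, both in bytes.
    sysctl kstat.zfs.misc.arcstats.size vfs.zfs.arc_max

    # Wired and free memory, in pages, to see whether something besides
    # the ARC is pinning RAM.
    sysctl vm.stats.vm.v_wire_count vm.stats.vm.v_free_count

    # Kernel UMA zones with the most live items (USED column), a rough
    # view of which zones keep growing between samples.
    vmstat -z | sort -t, -k3 -rn | head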
From owner-freebsd-fs@FreeBSD.ORG Mon Jan 12 21:42:04 2015
Date: Mon, 12 Jan 2015 22:11:48 +0100
From: Mark Schouten
To: Joseph Mingrone, freebsd-fs@freebsd.org
Subject: Re: memory exhaustion on 10.1 AMD64 ZFS storage system

Hi,

> We've had this storage system running 9.x without problems. After
> upgrading to 10.1 we've seen "out of swap space" messages in the logs.
> 
> Dec 13 04:29:12 storage2 kernel: pid 723 (rpc.statd), uid 0, was killed:
> out of swap space

Do you have compression enabled and L2ARC? There's this bug that leaks
memory via L2ARC, causing all memory to run out, pushing out the ARC and
(probably) causing crashes and lots of swap usage.
I think this is about that bug:
https://github.com/freebsd/freebsd/commit/b98f85d480b770e34d5e08c66dbc668bd5548bdc

Mark
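To check whether a pool is actually in the situation Mark describes, that is
compressed data plus an L2ARC cache device, something like the following is
enough; "tank" is just the pool name used earlier in the thread:

    # Cache (L2ARC) devices, if any, show up under a "cache" section here.
    zpool status tank

    # Compression settings across all datasets in the pool.
    zfs get -r -o name,value,source compression tank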
From owner-freebsd-fs@FreeBSD.ORG Tue Jan 13 00:26:02 2015
Date: Tue, 13 Jan 2015 01:25:55 +0100
From: Mark Martinec
To: freebsd-fs@freebsd.org
Subject: Re: memory exhaustion on 10.1 AMD64 ZFS storage system

Joseph Mingrone wrote:
> We've had this storage system running 9.x without problems. After
> upgrading to 10.1 we've seen "out of swap space" messages in the logs.
> Dec 13 04:29:12 storage2 kernel: pid 723 (rpc.statd), uid 0, was killed:
> out of swap space
[...]
> What's the best way to determine if this is a ZFS problem? I've read in
> the 10.1 release notes that vfs.zfs.zio.use_uma has been re-enabled.
> Has this caused anyone problems with 10.1?

Aggressiveness of ARC in 10.0 hurt us pretty badly when switching from
9.2 to 10.0. The ARC / UMA greediness for memory was causing excessive
swapping out of still active processes, while keeping ARC luxuriously
bathing in memory. The situation in 10.1 may have been slightly improved,
although it seems the fix still has not been committed:

  Bug 187594 - [zfs] [patch] ZFS ARC behavior problem and fix
  https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=187594

I'd really like to see a solution committed in time for 10.2.

Mark Schouten wrote:
> Do you have compression enabled and L2ARC? There's this bug that leaks
> memory via L2ARC, causing all memory to run out, pushing out the ARC
> and (probably) causing crashes and lots of swap usage.
>
> I think this is about that bug:
> https://github.com/freebsd/freebsd/commit/b98f85d480b770e34d5e08c66dbc668bd5548bdc

That too (unrelated to ARC / UMA greediness for memory).

Mark
From owner-freebsd-fs@FreeBSD.ORG Tue Jan 13 01:59:11 2015
Date: Mon, 12 Jan 2015 20:59:10 -0500
From: Ryan Stone
To: freebsd-fs@freebsd.org
Subject: mountd -h flag is not accepting hostnames

It seems that there has been a regression between 8.1-RELEASE and
10.1-RELEASE in mountd. In 10.1 I can no longer run mountd with -h
myhostname to have it bind to the IP that "myhostname" resolves to.
The cause is that getaddrinfo() is not being called correctly. I've
uploaded a fix for review here:

https://reviews.freebsd.org/D1507
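Until the fix lands, a workaround consistent with the description above is to
hand mountd the address rather than the name, since only the hostname path is
broken. The name and address below are placeholders:

    # Resolve the name by hand...
    getent hosts myhostname

    # ...and bind mountd to the resulting address, e.g. in /etc/rc.conf:
    mountd_flags="-h 192.0.2.10"

    # Then restart the daemon.
    service mountd restart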
From owner-freebsd-fs@FreeBSD.ORG Tue Jan 13 10:54:06 2015
Date: Tue, 13 Jan 2015 11:52:40 +0100
From: Albert Shih
To: freebsd-fs@freebsd.org
Subject: How many ram...

Hi all,

Basic question: how much RAM do I need to run ZFS with a pool of
~500-600 TB?

For example, suppose I have a server with 2 disk arrays of 60 disks of
6 TB each.

Is the rule I found somewhere on the Internet, saying that ZFS needs
1 GB of RAM per 1 TB, still true?

What happens if I use less RAM, say 0.5 GB per 1 TB? Less performance?
A crash?

Regards.

JAS

-- 
Albert SHIH
DIO bâtiment 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
France
Téléphone : +33 1 45 07 76 26/+33 6 86 69 95 71
xmpp: jas@obspm.fr
Heure local/Local time: mar 13 jan 2015 11:49:16 CET
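Taking the quoted rule of thumb at face value, just to see what it implies
for the hardware described in the question (plain arithmetic on the numbers
above, not a sizing recommendation):

    # Raw capacity of two 60-disk shelves of 6 TB drives, and what the
    # 1 GB-per-TB rule would demand of it.
    echo $((2 * 60 * 6))    # 720 TB raw
    echo $((720 * 1))       # ~720 GB RAM at 1 GB per TB
    echo $((720 / 2))       # ~360 GB RAM at 0.5 GB per TB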
From owner-freebsd-fs@FreeBSD.ORG Tue Jan 13 12:36:10 2015
Date: Tue, 13 Jan 2015 13:04:02 +0100
From: Robert David
To: Albert Shih
Cc: freebsd-fs@freebsd.org
Subject: Re: How many ram...

Hi Albert,

It mainly depends on how you will use this pool. If it is cold storage
used only for slow backups, then a reasonably modest amount of RAM is
OK; say 32 GB can work. The main problem is performance, since RAM
affects ZFS a lot (after a warm-up period). If it will be a
performance-oriented data store, even 256 GB of RAM may not be that
much.

Also, do not use deduplication on such pools, or you will find any
amount of memory too low. I would say: do not use deduplication on any
pool, even small ones.

Regards,
Robert.

On Tue, 13 Jan 2015 11:52:40 +0100 Albert Shih wrote:

> Hi all,
>
> Basic question: how much RAM do I need to run ZFS with a pool of
> ~500-600 TB?
>
> For example, suppose I have a server with 2 disk arrays of 60 disks of
> 6 TB each.
>
> Is the rule I found somewhere on the Internet, saying that ZFS needs
> 1 GB of RAM per 1 TB, still true?
>
> What happens if I use less RAM, say 0.5 GB per 1 TB? Less performance?
> A crash?
>
> Regards.
>
> JAS
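Following up on the deduplication warning, confirming that dedup is off (and
has never been in use) on an existing pool is cheap; "tank" is again just an
example pool name:

    # Per-dataset dedup settings; "off" everywhere means no DDT to carry.
    zfs get -r -o name,value,source dedup tank

    # Pool-wide ratio; 1.00x indicates no deduplicated blocks on disk.
    zpool get dedupratio tank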
From owner-freebsd-fs@FreeBSD.ORG Tue Jan 13 14:23:49 2015
Date: Tue, 13 Jan 2015 08:16:12 -0600
From: Linda Kateley
To: freebsd-fs@freebsd.org
Subject: Re: How many ram...

Jas,

Most of those rules of thumb are not valid. ZFS doesn't really need to
keep data about metadata in RAM. It keeps recently used and frequently
used items in cache. There are some per-disk caches, but by default
those are pretty small.

I have a blog on a group that has a 350 TB archive/backup system with
32 GB of RAM: http://kateleyco.com/?p=815

Everything is dependent on use case. If you have many users all using
the same file, frequently, that will be cached. Sizing the workload
helps.

linda

On 1/13/15 4:52 AM, Albert Shih wrote:
> Hi all,
>
> Basic question: how much RAM do I need to run ZFS with a pool of
> ~500-600 TB?
>
> For example, suppose I have a server with 2 disk arrays of 60 disks of
> 6 TB each.
>
> Is the rule I found somewhere on the Internet, saying that ZFS needs
> 1 GB of RAM per 1 TB, still true?
>
> What happens if I use less RAM, say 0.5 GB per 1 TB? Less performance?
> A crash?
>
> Regards.
>
> JAS

-- 
Linda Kateley
Kateley Company
Skype ID-kateleyco
http://kateleyco.com

From owner-freebsd-fs@FreeBSD.ORG Tue Jan 13 15:07:58 2015
Date: Tue, 13 Jan 2015 15:05:12 +0000 (GMT)
From: andy thomas
To: Albert Shih
Cc: freebsd-fs@freebsd.org
Subject: Re: How many ram...

On Tue, 13 Jan 2015, Albert Shih wrote:

> Hi all,
>
> Basic question: how much RAM do I need to run ZFS with a pool of
> ~500-600 TB?
>
> For example, suppose I have a server with 2 disk arrays of 60 disks of
> 6 TB each.
>
> Is the rule I found somewhere on the Internet, saying that ZFS needs
> 1 GB of RAM per 1 TB, still true?

I have been running lots of HP Microservers fitted with 4 x 4 TB disks
in ZFS RAIDz1 under 8 GB of RAM for the past 3 years or so with no
problems at all. I've also got a SAN with 8 GB memory in the head node
and 6 servers - each with 4 x 4 TB disks, RAIDz1 and 8 GB memory -
attached as iSCSI VDEVs for a usable capacity of 50 TB. Again with no
problems at all.

Both set-ups break most of the rules in the rule-book but... it works!

Andy

----------------------------
Andy Thomas,
Time Domain Systems

Tel: +44 (0)7866 556626
Fax: +44 (0)20 8372 2582
http://www.time-domain.co.uk
From owner-freebsd-fs@FreeBSD.ORG Tue Jan 13 15:32:51 2015
Date: Tue, 13 Jan 2015 11:32:35 -0400
From: Joseph Mingrone
To: freebsd-fs@freebsd.org
Subject: Re: memory exhaustion on 10.1 AMD64 ZFS storage system

"Mark Schouten" writes:
>> We've had this storage system running 9.x without problems. After
>> upgrading to 10.1 we've seen "out of swap space" messages in the logs.
>>
>> Dec 13 04:29:12 storage2 kernel: pid 723 (rpc.statd), uid 0, was killed:
>> out of swap space
>
> Do you have compression enabled and L2ARC? There's this bug that leaks
> memory via L2ARC, causing all memory to run out, pushing out the ARC
> and (probably) causing crashes and lots of swap usage.

Indeed we do.

> I think this is about that bug:
> https://github.com/freebsd/freebsd/commit/b98f85d480b770e34d5e08c66dbc668bd5548bdc

It looks like this was from November 6th, 2014 and it says "MFC after:
2 weeks", so I'll try moving to 10-STABLE.

Thanks,
Joseph

From owner-freebsd-fs@FreeBSD.ORG Tue Jan 13 15:40:06 2015
Date: Tue, 13 Jan 2015 11:35:05 -0400
From: Joseph Mingrone
To: freebsd-fs@freebsd.org
Subject: Re: memory exhaustion on 10.1 AMD64 ZFS storage system

Mark Martinec writes:
> Aggressiveness of ARC in 10.0 hurt us pretty badly when switching from
> 9.2 to 10.0. The ARC / UMA greediness for memory was causing excessive
> swapping out of still active processes, while keeping ARC luxuriously
> bathing in memory. The situation in 10.1 may have been slightly improved,
> although it seems the fix still has not been committed:
>
> Bug 187594 - [zfs] [patch] ZFS ARC behavior problem and fix
> https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=187594
>
> I'd really like to see a solution committed in time for 10.2.

I've set vfs.zfs.arc_max=24G. That leaves 8 GB for everything else.
Hopefully that helps.

Thanks,
Joseph
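A quick sanity check after setting the cap in /boot/loader.conf and
rebooting, just to confirm the value the kernel actually picked up (24G
should read back as 25769803776 bytes):

    # The cap is reported in bytes; 24G = 24 * 1024^3 = 25769803776.
    sysctl vfs.zfs.arc_max

    # Current ARC size, to watch how close it runs to the cap over time.
    sysctl kstat.zfs.misc.arcstats.size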
From owner-freebsd-fs@FreeBSD.ORG Tue Jan 13 22:30:57 2015
Date: Tue, 13 Jan 2015 17:30:54 -0500
From: Garrett Wollman
To: freebsd-fs@freebsd.org, freebsd-stable@freebsd.org
Subject: Some filesystem performance numbers

I recently bought a copy of the SPECsfs2014 benchmark, and I've been
using it to test out our NFS server platform. One scenario of interest
to me is identifying where the limits are in terms of the local
CAM/storage/filesystem implementation versus bottlenecks unique to the
NFS server, and to that end I've been running the benchmark suite
directly on the server's local disk. (This is of course also the way
you'd benchmark for shared-nothing container-based virtualization.)

I have found a few interesting results on my test platform:

1) I can quantify the cost of using SHA256 vs. fletcher4 as the ZFS
checksum algorithm. On the VDA workload (essentially a simulated video
streaming/recording application), my server can do about half as many
"streams" with SHA256 as it can with fletcher4.

2) Both L2ARC and separate ZIL have small but measurable performance
impacts. I haven't examined the differences closely.

3) LZ4 compression also makes a small performance impact, but as
advertised, less than LZJB for mostly-incompressible data.

I hope to be able to present the actual benchmark results at some
point, as well as some results for the other three workloads.

-GAWollman
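For anyone who wants to reproduce the checksum and compression comparisons,
the settings involved are ordinary per-dataset properties. A sketch with
placeholder dataset names, not the benchmark setup itself:

    # Default checksum (fletcher4) versus SHA256 on two test datasets.
    zfs create -o checksum=fletcher4 tank/bench-fletcher4
    zfs create -o checksum=sha256    tank/bench-sha256

    # LZ4 versus LZJB compression can be compared the same way.
    zfs create -o compression=lz4  tank/bench-lz4
    zfs create -o compression=lzjb tank/bench-lzjb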
From owner-freebsd-fs@FreeBSD.ORG Tue Jan 13 23:10:29 2015
Date: Tue, 13 Jan 2015 18:10:21 -0500 (EST)
From: Rick Macklem
To: Ryan Stone
Cc: freebsd-fs@freebsd.org
Subject: Re: mountd -h flag is not accepting hostnames

Ryan Stone wrote:
> It seems that there has been a regression between 8.1-RELEASE and
> 10.1-RELEASE in mountd. In 10.1 I can no longer run mountd with -h
> myhostname to have it bind to the IP that "myhostname" resolves to.
> The cause is that getaddrinfo() is not being called correctly. I've
> uploaded a fix for review here:
>
> https://reviews.freebsd.org/D1507

I just commented on this. The patch looks correct to me, and similar
patches are needed for usr.sbin/rpc.lockd/lockd.c and
usr.sbin/rpc.statd/statd.c.

(I was the culprit and introduced this bug via r222623 and friends when
I "fixed" the code to set AI_NUMERICHOST instead of clearing it when a
numeric name was detected.)

rick
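Since the same bug affects mountd, rpc.lockd and rpc.statd, a quick way to
see which addresses those daemons actually ended up bound to, before and
after applying the patch or any -h workaround, might be:

    # List listening sockets for the NFS-related daemons.
    sockstat -4 -6 -l | grep -E 'mountd|rpc\.lockd|rpc\.statd|nfsd'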
From owner-freebsd-fs@FreeBSD.ORG Wed Jan 14 09:16:37 2015
Date: Wed, 14 Jan 2015 09:08:23 +0000
From: Loïc Blot
To: Rick Macklem
Cc: freebsd-fs@freebsd.org
Subject: Re: High Kernel Load with nfsv4

Hi Rick,
your fix doesn't help, I'm sorry. It improves the NFS server speed, but it
doesn't remove the problem.

I think having loader.conf variables for these options would be great.

I have written a C++ benchmark for testing my NFS server configuration, but
I can't reproduce the problem, even though the benchmark leaves many file
descriptors open (randomly).

Regards,

Loïc Blot,
UNIX Systems, Network and Security Engineer
http://www.unix-experience.fr

On 7 January 2015 at 00:29, "Rick Macklem" wrote:
> Loic Blot wrote:
>
>> Hi Rick,
>> I saw that some people have issues with igb cards with NFS.
>> For example:
>> http://freebsd.1045724.n5.nabble.com/NFS-over-LAGG-lacp-poor-performance-td5906349.html
>>
>> Can my problem be related? I use igb with the default queue number. Here
>> are my vmstat -i outputs.
>
> I have no idea.
> Maybe someone familiar with this will respond?
>
> I do think that the large # of NFSv4 Opens (which are actually a form of
> lock) could be a factor. The client and server have to search those lists
> for a match for many NFSv4 operations, including all reads/writes.
>
> On the server side, the default hash table sizes are very small. This is
> in part because I did testing on 256Mbyte i386 systems, so that the values
> were safe for such a machine.
> I'd suggest you increase the following in the server's kernel.
> In sys/fs/nfs/nfs.h:
>   NFSSTATEHASHSIZE - This one is in every client header, so if you have a
>     large # of clients, you don't want to increase it too much. However,
>     for a fairly large server handling not too many clients, I'd try
>     something like 1000 instead of 10.
>     (I just tried 100 on the small i386 laptop I have handy and it seemed
>     ok for a small test.)
>   NFSLOCKHASHSIZE - This one is a single global table, so I'd bump it way
>     up, 20000 maybe?
> In sys/fs/nfsport.h:
>   NFSRV_V4STATELIMIT - The comment notes that the default of 500000 seems
>     safe for a 256Mbyte i386, so I'd bump it to something like 2000000 for
>     your case.
> You will have to rebuild a kernel from sources after editing these values
> and boot it on the server. Maybe these should become tunables so building
> a kernel isn't necessary?
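A minimal sketch of how the server-side values Rick lists above could be
applied before a rebuild. The /usr/src location, the GENERIC kernel config,
and the exact #define formatting are assumptions about a stock source tree;
the constant names and suggested values are the ones quoted above:

    # Locate the constants and edit the values by hand
    # (e.g. 10 -> 1000, lock hash -> 20000, 500000 -> 2000000).
    grep -nE 'NFSSTATEHASHSIZE|NFSLOCKHASHSIZE' /usr/src/sys/fs/nfs/nfs.h
    grep -n 'NFSRV_V4STATELIMIT' /usr/src/sys/fs/nfsport.h

    # Then rebuild and install the server kernel.
    cd /usr/src && make buildkernel KERNCONF=GENERIC && \
        make installkernel KERNCONF=GENERIC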
> I looked and there isn't much that can be done in the client. At this
> point, the open_owners and opens are single lists for a client (a mount
> point on a client machine for FreeBSD). If you post what you get for
> "nfsstat -e -c" on a typical client in your setup, that would tell me if
> it is the open_owners (which I suspect) or opens that will be a long list.
> (I would have to code a patch to make either of these a hash table instead
> of a single linked list. I should do this. It was on my to-do list, but
> got forgotten. ;-)
>
> rick
>
>> Server side:
>>
>> interrupt total rate
>> irq1: atkbd0 18 0
>> irq20: ehci1 2790134 2
>> irq21: ehci0 2547642 2
>> cpu0:timer 36299188 35
>> irq264: ciss0 6352476 6
>> irq265: igb0:que 0 2716692 2
>> irq266: igb0:que 1 32205278 31
>> irq267: igb0:que 2 38395109 37
>> irq268: igb0:que 3 1413468 1
>> irq269: igb0:que 4 39207930 38
>> irq270: igb0:que 5 1622715 1
>> irq271: igb0:que 6 1634676 1
>> irq272: igb0:que 7 1190123 1
>> irq273: igb0:link 2 0
>> cpu1:timer 14074423 13
>> cpu8:timer 12204739 11
>> cpu9:timer 11384192 11
>> cpu3:timer 10461566 10
>> cpu4:timer 12785103 12
>> cpu6:timer 10739344 10
>> cpu5:timer 10978294 10
>> cpu7:timer 10599705 10
>> cpu2:timer 13998891 13
>> cpu10:timer 11602361 11
>> cpu11:timer 11568523 11
>> Total 296772592 290
>>
>> And client side:
>> interrupt total rate
>> irq9: acpi0 4 0
>> irq22: ehci1 950519 2
>> irq23: ehci0 1865060 4
>> cpu0:timer 248128035 546
>> irq268: mfi0 406896 0
>> irq269: igb0:que 0 2510556 5
>> irq270: igb0:que 1 2825336 6
>> irq271: igb0:que 2 2092958 4
>> irq272: igb0:que 3 1960849 4
>> irq273: igb0:que 4 2645369 5
>> irq274: igb0:que 5 2735187 6
>> irq275: igb0:que 6 2290531 5
>> irq276: igb0:que 7 2384370 5
>> irq277: igb0:link 2 0
>> irq287: igb2:que 0 1465051 3
>> irq288: igb2:que 1 856381 1
>> irq289: igb2:que 2 809318 1
>> irq290: igb2:que 3 897154 1
>> irq291: igb2:que 4 875755 1
>> irq292: igb2:que 5 35866117 78
>> irq293: igb2:que 6 846517 1
>> irq294: igb2:que 7 857979 1
>> irq295: igb2:link 2 0
>> irq296: igb3:que 0 535212 1
>> irq297: igb3:que 1 454359 1
>> irq298: igb3:que 2 454142 1
>> irq299: igb3:que 3 454623 1
>> irq300: igb3:que 4 456297 1
>> irq301: igb3:que 5 455482 1
>> irq302: igb3:que 6 456128 1
>> irq303: igb3:que 7 454680 1
>> irq304: igb3:link 3 0
>> irq305: ahci0 75 0
>> cpu1:timer 257233702 566
>> cpu13:timer 255603184 562
>> cpu7:timer 258492826 569
>> cpu12:timer 255819351 563
>> cpu6:timer 258493465 569
>> cpu15:timer 254694003 560
>> cpu3:timer 258171320 568
>> cpu22:timer 256506877 564
>> cpu5:timer 253401435 558
>> cpu16:timer 255412360 562
>> cpu11:timer 257318013 566
>> cpu20:timer 253648060 558
>> cpu2:timer 257864543 567
>> cpu17:timer 261828899 576
>> cpu9:timer 257497326 567
>> cpu18:timer 258451190 569
>> cpu8:timer 257784504 567
>> cpu14:timer 254923723 561
>> cpu10:timer 257265498 566
>> cpu19:timer 258775946 569
>> cpu4:timer 256368658 564
>> cpu23:timer 255050534 561
>> cpu21:timer 257663842 567
>> Total 6225260206 13710
>>
>> Please note igb2 on the client side is the dedicated link for NFSv4.
>>
>> Regards,
>>
>> Loïc Blot,
>> UNIX Systems, Network and Security Engineer
>> http://www.unix-experience.fr
>>
>> On 6 January 2015 at 04:17, "Rick Macklem" wrote:
>>> Loic Blot wrote:
>>>
>>>> Hi Rick,
>>>> nfsstat -e -s doesn't show useful data on the server.
>>>
>>> Well, as far as I know, it returns valid information.
>>> (See below.)
>>>
>>>> Server Info:
>>>> Getattr Setattr Lookup Readlink Read Write Create Remove
>>>> 26935254 16911 5755728 302 2334920 3673866 0 328332
>>>> Rename Link Symlink Mkdir Rmdir Readdir RdirPlus Access
>>>> 77980 28 0 0 3 8900 3 1806052
>>>> Mknod Fsstat Fsinfo PathConf Commit LookupP SetClId SetClIdCf
>>>> 1 1095 0 0 614377 8172 8 8
>>>> Open OpenAttr OpenDwnGr OpenCfrm DelePurge DeleRet GetFH Lock
>>>> 1595299 0 44145 1495 0 0 5197490 635015
>>>> LockT LockU Close Verify NVerify PutFH PutPubFH PutRootFH
>>>> 0 614919 1270938 0 0 22688676 0 5
>>>> Renew RestoreFH SaveFH Secinfo RelLckOwn V4Create
>>>> 42104 197606 275820 0 143 4578
>>>> Server:
>>>> Retfailed Faults Clients
>>>> 0 0 6
>>>> OpenOwner Opens LockOwner Locks Delegs
>>>> 32335 145448 204 181 0
>>>
>>> Well, 145448 Opens are a lot of Open files. Each of these uses
>>> a kernel malloc'd data structure that is linked into multiple
>>> linked lists.
>>>
>>> The question is: why aren't these Opens being closed?
>>> Since FreeBSD does I/O on an mmap'd file after closing it,
>>> the FreeBSD NFSv4 client is forced to delay doing Close RPCs
>>> until the vnode is VOP_INACTIVE()/VOP_RECLAIM()'d. (The
>>> VOP_RECLAIM() case is needed, since VOP_INACTIVE() isn't
>>> guaranteed to be called.)
>>>
>>> Since there were about 1.5 million Opens and 1.27 million
>>> Closes, it does appear that Opens are being Closed.
>>> Now, I'm not sure I would have imagined 1.5 million file Opens
>>> in a few days. My guess is this is the bottleneck.
>>>
>>> I'd suggest that you do:
>>>   # nfsstat -e -c
>>> on each of the NFSv4 clients and see how many Opens/client
>>> there are. I vaguely remember an upper limit in the client,
>>> but can't remember what it is set to.
>>> --> I suspect the client Open/Lock limit needs to be increased.
>>> (I can't remember if the server also has a limit, but I
>>> think it does.)
>>> Then the size of the hash tables used to search the Opens
>>> may also need to be increased a lot.
>>>
>>> Also, I'd suggest you take a look at whatever apps
>>> are running on the client(s) and try to figure out why they
>>> are Opening so many files?
>>>
>>> My guess is that the client(s) are getting bogged down by all
>>> these Opens.
>>>
>>>> Server Cache Stats:
>>>> Inprog Idem Non-idem Misses CacheSize TCPPeak
>>>> 0 0 1 15082947 60 16522
>>>>
>>>> Only GetAttr and Lookup increase, and it's only every 4-5 seconds
>>>> and only +2 to +5 into these values.
>>>>
>>>> Now on the client, if I take four process stacks I get:
>>>>
>>>> PID TID COMM TDNAME KSTACK
>>>> 63170 102547 mv - mi_switch+0xe1 turnstile_wait+0x42a __mtx_lock_sleep+0x253 nfscl_nodeleg+0x65 nfs_lookup+0x3d0 VOP_LOOKUP_APV+0xa1 lookup+0x59c namei+0x4d4 vn_open_cred+0x21d kern_openat+0x26f amd64_syscall+0x351 Xfast_syscall+0xfb
>>>>
>>>> Another mv:
>>>> 63140 101738 mv - mi_switch+0xe1 turnstile_wait+0x42a __mtx_lock_sleep+0x253 nfscl_nodeleg+0x65 nfs_lookup+0x3d0 VOP_LOOKUP_APV+0xa1 lookup+0x59c namei+0x4d4 kern_statat_vnhook+0xae sys_lstat+0x30 amd64_syscall+0x351 Xfast_syscall+0xfb
>>>>
>>>> 62070 102170 sendmail - mi_switch+0xe1 sleepq_timedwait+0x3a _sleep+0x26e clnt_vc_call+0x666 clnt_reconnect_call+0x4fa newnfs_request+0xa8c nfscl_request+0x72 nfsrpc_lookup+0x1fb nfs_lookup+0x508 VOP_LOOKUP_APV+0xa1 lookup+0x59c namei+0x4d4 kern_statat_vnhook+0xae sys_lstat+0x30 amd64_syscall+0x351 Xfast_syscall+0xfb
>>>>
>>>> 63200 100930 mv - mi_switch+0xe1 turnstile_wait+0x42a __mtx_lock_sleep+0x253 nfscl_nodeleg+0x65 nfs_lookup+0x3d0 VOP_LOOKUP_APV+0xa1 lookup+0x59c namei+0x4d4 kern_statat_vnhook+0xae sys_lstat+0x30 amd64_syscall+0x351 Xfast_syscall+0xfb
>>>
>>> The above simply says that thread 102170 is waiting for a Lookup
>>> reply from the server and the other 3 are waiting for the mutex
>>> that protects the state structures in the client. (I suspect
(I suspect=0A>>> so= me other thread in the client is wading through the Open list,=0A>>> if a= single client has a lot of these 145K Opens.)=0A>>> =0A>>>> When client = is in this state, server was doing nothing special=0A>>>> (procstat -kk)= =0A>>>> =0A>>>> PID TID COMM TDNAME KSTACK=0A>>>> 895 100538 nfsd nfsd: m= aster mi_switch+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_timedwait_si= g+0x10=0A>>>> _cv_timedwait_sig_sbt+0x18b svc_run_internal+0x4a1 svc_run+= 0x1de=0A>>>> nfsrvd_nfsd+0x1ca nfssvc_nfsd+0x107 sys_nfssvc+0x9c=0A>>>> a= md64_syscall+0x351 Xfast_syscall+0xfb=0A>>>> 895 100568 nfsd nfsd: servic= e mi_switch+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv= _wait_sig+0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_e= xit+0x9a=0A>>>> fork_trampoline+0xe=0A>>>> 895 100569 nfsd nfsd: service = mi_switch+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_w= ait_sig+0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exi= t+0x9a=0A>>>> fork_trampoline+0xe=0A>>>> 895 100570 nfsd nfsd: service mi= _switch+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wai= t_sig+0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+= 0x9a=0A>>>> fork_trampoline+0xe=0A>>>> 895 100571 nfsd nfsd: service mi_s= witch+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_= sig+0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x= 9a=0A>>>> fork_trampoline+0xe=0A>>>> 895 100572 nfsd nfsd: service mi_swi= tch+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_si= g+0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100573 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100575 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100576 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100577 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100578 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100579 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100580 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100581 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100582 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> 
sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100583 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100584 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100585 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100586 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100587 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100588 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100589 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100590 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100592 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100593 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100594 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100595 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100596 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100597 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100598 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100599 nfsd nfsd: service mi_switc= 
h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100600 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100602 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100603 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100604 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100605 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100606 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100607 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100608 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100609 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100610 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100611 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100612 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100613 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100614 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100615 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100617 nfsd nfsd: 
service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100618 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100619 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100621 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100622 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100623 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100624 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100625 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100626 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100627 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100628 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100629 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100630 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100631 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100632 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100633 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 
100634 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100635 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100636 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100638 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100639 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100640 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100641 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100642 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100643 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100644 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100645 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100646 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100647 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100648 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100649 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100651 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> 
fork_trampoline+0xe=0A>>>> 895 100652 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100653 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100654 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100655 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100656 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100657 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100658 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100659 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100661 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100662 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100684 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100685 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100686 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100797 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100798 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100799 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb 
fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100800 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> 895 100801 nfsd nfsd: service mi_switc= h+0xe1=0A>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+= 0x16a=0A>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>> fork_trampoline+0xe=0A>>>> =0A>>>> I really think it's a client s= ide problem, maybe a lookup problem.=0A>>>> =0A>>>> Regards,=0A>>>> =0A>>= >> Lo=C3=AFc Blot,=0A>>>> UNIX Systems, Network and Security Engineer=0A>= >>> http://www.unix-experience.fr=0A>>>> =0A>>>> 5 janvier 2015 14:35 "Ri= ck Macklem" a=0A>>>> =C3=A9crit:=0A>>>>> Loic Blot= wrote:=0A>>>>> =0A>>>>>> Hi,=0A>>>>>> happy new year Rick and @freebsd-f= s.=0A>>>>>> =0A>>>>>> After some days, i looked my NFSv4.1 mount. At serv= er start it=0A>>>>>> was=0A>>>>>> calm, but after 4 days, here is the top= stat...=0A>>>>>> =0A>>>>>> CPU: 0.0% user, 0.0% nice, 100% system, 0.0% = interrupt, 0.0%=0A>>>>>> idle=0A>>>>>> =0A>>>>>> Definitively i think it'= s a problem on client side. What can i=0A>>>>>> look=0A>>>>>> into runnin= g kernel to resolve this issue ?=0A>>>>> =0A>>>>> Well, I'd start with:= =0A>>>>> # nfsstat -e -s=0A>>>>> - run repeatedly on the server (once eve= ry N seconds in a loop).=0A>>>>> Then look at the output, comparing the c= ounts and see which RPCs=0A>>>>> are being performed by the client(s). Yo= u are looking for which=0A>>>>> RPCs are being done a lot. (If one RPC is= almost 100% of the=0A>>>>> load,=0A>>>>> then it might be a client/cachi= ng issue for whatever that RPC is=0A>>>>> doing.)=0A>>>>> =0A>>>>> Also l= ook at the Open/Lock counts near the end of the output.=0A>>>>> If the # = of Opens/Locks is large, it may be possible to reduce=0A>>>>> the=0A>>>>>= CPU overheads by using larger hash tables.=0A>>>>> =0A>>>>> Then you nee= d to profile the server kernel to see where the CPU=0A>>>>> is being used= .=0A>>>>> Hopefully someone else can fill you in on how to do that, becau= se=0A>>>>> I'll admit I don't know how to.=0A>>>>> Basically you are look= ing to see if the CPU is being used in=0A>>>>> the NFS server code or ZFS= .=0A>>>>> =0A>>>>> Good luck with it, rick=0A>>>>> =0A>>>>>> Regards,=0A>= >>>>> =0A>>>>>> Lo=C3=AFc Blot,=0A>>>>>> UNIX Systems, Network and Securi= ty Engineer=0A>>>>>> http://www.unix-experience.fr=0A>>>>>> =0A>>>>>> 30 = d=C3=A9cembre 2014 16:16 "Lo=C3=AFc Blot"=0A>>>>>> =0A>>>>>> a=0A>>>>>> =C3=A9crit:=0A>>>>>>> Hi Rick,=0A>>>>>>> i u= pgraded my jail host from FreeBSD 9.3 to 10.1 to use NFS=0A>>>>>>> v4.1= =0A>>>>>>> (mountoptions:=0A>>>>>>> rw,rsize=3D32768,wsize=3D32768,tcp,nf= sv4,minorversion=3D1)=0A>>>>>>> =0A>>>>>>> Performance is quite stable bu= t it's slow. Not as slow as=0A>>>>>>> before=0A>>>>>>> but slow... servic= es was launched=0A>>>>>>> but no client are using them and system CPU % w= as 10-50%.=0A>>>>>>> =0A>>>>>>> I don't see anything on NFSv4.1 server, i= t's perfectly stable=0A>>>>>>> and=0A>>>>>>> functionnal.=0A>>>>>>> =0A>>= >>>>> Regards,=0A>>>>>>> =0A>>>>>>> Lo=C3=AFc Blot,=0A>>>>>>> UNIX System= s, Network and Security Engineer=0A>>>>>>> http://www.unix-experience.fr= =0A>>>>>>> =0A>>>>>>> 23 d=C3=A9cembre 2014 00:20 "Rick Macklem" a=0A>>>>>>> =C3=A9crit:=0A>>>>>>> =0A>>>>>>>> Loic Blot wr= ote:=0A>>>>>>>> =0A>>>>>>>>> Hi,=0A>>>>>>>>> =0A>>>>>>>>> To clarify beca= use of our exchanges. 
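
A minimal sketch of the sampling loop described above (assuming a stock
FreeBSD sh, nfsstat(1) and sleep(1); the 10-second interval and the output
file are arbitrary choices, not anything from the thread):

    #!/bin/sh
    # Append a timestamped "nfsstat -e -s" snapshot every 10 seconds so the
    # per-RPC counters can be compared between samples afterwards.
    while true; do
        date
        nfsstat -e -s
        sleep 10
    done >> /var/tmp/nfsstat-samples.log
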
>>>>>
>>>>>> Regards,
>>>>>>
>>>>>> Loïc Blot,
>>>>>> UNIX Systems, Network and Security Engineer
>>>>>> http://www.unix-experience.fr
>>>>>>
>>>>>> 30 December 2014 16:16, "Loïc Blot" wrote:
>>>>>>> Hi Rick,
>>>>>>> I upgraded my jail host from FreeBSD 9.3 to 10.1 to use NFSv4.1
>>>>>>> (mount options:
>>>>>>> rw,rsize=32768,wsize=32768,tcp,nfsv4,minorversion=1)
>>>>>>>
>>>>>>> Performance is quite stable but it's slow. Not as slow as
>>>>>>> before, but slow... The services were launched, but no client
>>>>>>> is using them yet and system CPU % was 10-50%.
>>>>>>>
>>>>>>> I don't see anything wrong on the NFSv4.1 server; it's
>>>>>>> perfectly stable and functional.
>>>>>>>
>>>>>>> Regards,
>>>>>>>
>>>>>>> Loïc Blot,
>>>>>>> UNIX Systems, Network and Security Engineer
>>>>>>> http://www.unix-experience.fr
>>>>>>>
>>>>>>> 23 December 2014 00:20, "Rick Macklem" wrote:
>>>>>>>
>>>>>>>> Loic Blot wrote:
>>>>>>>>
>>>>>>>>> Hi,
>>>>>>>>>
>>>>>>>>> To clarify, because of our exchanges, here are the current
>>>>>>>>> sysctl options for the server:
>>>>>>>>>
>>>>>>>>> vfs.nfsd.enable_nobodycheck=0
>>>>>>>>> vfs.nfsd.enable_nogroupcheck=0
>>>>>>>>>
>>>>>>>>> vfs.nfsd.maxthreads=200
>>>>>>>>> vfs.nfsd.tcphighwater=10000
>>>>>>>>> vfs.nfsd.tcpcachetimeo=300
>>>>>>>>> vfs.nfsd.server_min_nfsvers=4
>>>>>>>>>
>>>>>>>>> kern.maxvnodes=10000000
>>>>>>>>> kern.ipc.maxsockbuf=4194304
>>>>>>>>> net.inet.tcp.sendbuf_max=4194304
>>>>>>>>> net.inet.tcp.recvbuf_max=4194304
>>>>>>>>>
>>>>>>>>> vfs.lookup_shared=0
>>>>>>>>>
>>>>>>>>> Regards,
>>>>>>>>>
>>>>>>>>> Loïc Blot,
>>>>>>>>> UNIX Systems, Network and Security Engineer
>>>>>>>>> http://www.unix-experience.fr
>>>>>>>>>
>>>>>>>>> 22 December 2014 09:42, "Loïc Blot" wrote:
>>>>>>>>>
>>>>>>>>> Hi Rick,
>>>>>>>>> my 5 jails ran over the weekend and now I have some stats on
>>>>>>>>> this Monday.
>>>>>>>>>
>>>>>>>>> Hopefully the deadlock was fixed, yeah, but everything isn't
>>>>>>>>> good :(
>>>>>>>>>
>>>>>>>>> On the NFSv4 server (FreeBSD 10.1) the system uses 35% CPU.
>>>>>>>>>
>>>>>>>>> As I can see, this is because of nfsd:
>>>>>>>>>
>>>>>>>>> 918 root 96 20 0 12352K 3372K rpcsvc 6 51.4H
>>>>>>>>> 273.68% nfsd: server (nfsd)
>>>>>>>>>
>>>>>>>>> If I look at dmesg I see:
>>>>>>>>> nfsd server cache flooded, try increasing
>>>>>>>>> vfs.nfsd.tcphighwater
>>>>>>>>
>>>>>>>> Well, you have a couple of choices:
>>>>>>>> 1 - Use NFSv4.1 (add "minorversion=1" to your mount options).
>>>>>>>> (NFSv4.1 avoids use of the DRC and instead uses something
>>>>>>>> called sessions. See below.)
>>>>>>>> OR
>>>>>>>>
>>>>>>>>> vfs.nfsd.tcphighwater was set to 10000, I increased it to
>>>>>>>>> 15000
>>>>>>>>
>>>>>>>> 2 - Bump vfs.nfsd.tcphighwater way up, until you no longer see
>>>>>>>> "nfs server cache flooded" messages. (I think Garrett Wollman
>>>>>>>> uses 100000.) You may still see quite a bit of CPU overhead.
>>>>>>>>
>>>>>>>> OR
>>>>>>>>
>>>>>>>> 3 - Set vfs.nfsd.cachetcp=0 (which disables the DRC and gets
>>>>>>>> rid of the CPU overheads). However, there is a risk of data
>>>>>>>> corruption if you have a client->server network partitioning
>>>>>>>> of a moderate duration, because a non-idempotent RPC may get
>>>>>>>> redone, because the client times out waiting for a reply. If a
>>>>>>>> non-idempotent RPC gets done twice on the server, data
>>>>>>>> corruption can happen.
>>>>>>>> (The DRC provides improved correctness, but does add overhead.)
>>>>>>>>
>>>>>>>> If #1 works for you, it is the preferred solution, since
>>>>>>>> Sessions in NFSv4.1 solves the correctness problem in a good,
>>>>>>>> space-bound way. A session basically has N (usually 32 or 64)
>>>>>>>> slots and only allows one outstanding RPC per slot. As such,
>>>>>>>> it can cache the previous reply for each slot (32 or 64 of
>>>>>>>> them) and guarantee "exactly once" RPC semantics.
>>>>>>>>
>>>>>>>> rick
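
To make the three options above concrete, a rough sketch follows (the server
name, export path and mount point are placeholders; 100000 is simply the
value mentioned above, not a recommendation):

    # Option 1: NFSv4.1 on the client side -- an /etc/fstab line such as
    #   nfs-server:/export  /mnt/export  nfs  rw,tcp,nfsv4,minorversion=1  0  0

    # Option 2: on the server, raise the DRC high-water mark until the
    # "nfs server cache flooded" messages stop appearing.
    sysctl vfs.nfsd.tcphighwater=100000

    # Option 3: on the server, disable the DRC for TCP entirely (note the
    # correctness trade-off described above).
    sysctl vfs.nfsd.cachetcp=0
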
>>>>>>>>>
>>>>>>>>> Here is 'nfsstat -s' output:
>>>>>>>>>
>>>>>>>>> Server Info:
>>>>>>>>> Getattr   Setattr  Lookup   Readlink  Read     Write    Create    Remove
>>>>>>>>> 12600652  1812     2501097  156       1386423  1983729  123       162067
>>>>>>>>> Rename    Link     Symlink  Mkdir     Rmdir    Readdir  RdirPlus  Access
>>>>>>>>> 36762     9        0        0         0        3147     0         623524
>>>>>>>>> Mknod     Fsstat   Fsinfo   PathConf  Commit
>>>>>>>>> 0         0        0        0         328117
>>>>>>>>> Server Ret-Failed
>>>>>>>>> 0
>>>>>>>>> Server Faults
>>>>>>>>> 0
>>>>>>>>> Server Cache Stats:
>>>>>>>>> Inprog    Idem     Non-idem  Misses
>>>>>>>>> 0         0        0         12635512
>>>>>>>>> Server Write Gathering:
>>>>>>>>> WriteOps  WriteRPC  Opsaved
>>>>>>>>> 1983729   1983729   0
>>>>>>>>>
>>>>>>>>> And here is 'procstat -kk' for nfsd (server):
>>>>>>>>>
>>>>>>>>> 918 100528 nfsd nfsd: master mi_switch+0xe1
>>>>>>>>> sleepq_catch_signals+0xab sleepq_timedwait_sig+0x10
>>>>>>>>> _cv_timedwait_sig_sbt+0x18b svc_run_internal+0x4a1
>>>>>>>>> svc_run+0x1de
>>>>>>>>> nfsrvd_nfsd+0x1ca nfssvc_nfsd+0x107 sys_nfssvc+0x9c
>>>>>>>>> amd64_syscall+0x351 Xfast_syscall+0xfb
>>>>>>>>> 918 100568 nfsd nfsd: service mi_switch+0xe1
>>>>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf
>>>>>>>>> _cv_wait_sig+0x16a
>>>>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a
>>>>>>>>> fork_trampoline+0xe
>>>>>>>>> [all remaining "nfsd: service" threads, TIDs 100569 through
>>>>>>>>> 100662, show this identical idle stack and are omitted here]
>>>>>>>>> ---
>>>>>>>>>
>>>>>>>>> Now if we look at the client (FreeBSD 9.3):
>>>>>>>>>
>>>>>>>>> We see the system was very busy and handling many, many
>>>>>>>>> interrupts:
>>>>>>>>>
>>>>>>>>> CPU: 0.0% user, 0.0% nice, 37.8% system, 51.2% interrupt,
>>>>>>>>> 11.0% idle
>>>>>>>>>
>>>>>>>>> A look at the process list shows that there are many sendmail
>>>>>>>>> processes in state nfstry:
>>>>>>>>>
>>>>>>>>> nfstry 18 32:27 0.88% sendmail: Queue runner@00:30:00 for
>>>>>>>>> /var/spool/clientm
>>>>>>>>>
>>>>>>>>> Here is 'nfsstat -c' output:
>>>>>>>>>
>>>>>>>>> Client Info:
>>>>>>>>> Rpc Counts:
>>>>>>>>> Getattr  Setattr  Lookup   Readlink  Read    Write    Create    Remove
>>>>>>>>> 1051347  1724     2494481  118       903902  1901285  162676    161899
>>>>>>>>> Rename   Link     Symlink  Mkdir     Rmdir   Readdir  RdirPlus  Access
>>>>>>>>> 36744    2        0        114       40      3131     0         544136
>>>>>>>>> Mknod    Fsstat   Fsinfo   PathConf  Commit
>>>>>>>>> 9        0        0        0         245821
>>>>>>>>> Rpc Info:
>>>>>>>>> TimedOut  Invalid  X Replies  Retries  Requests
>>>>>>>>> 0         0        0          0        8356557
>>>>>>>>> Cache Info:
>>>>>>>>> Attr Hits  Misses  Lkup Hits  Misses   BioR Hits  Misses  BioW Hits  Misses
>>>>>>>>> 108754455  491475  54229224   2437229  46814561   821723  5132123    1871871
>>>>>>>>> BioRLHits  Misses  BioD Hits  Misses   DirE Hits  Misses  Accs Hits  Misses
>>>>>>>>> 144035     118     53736      2753     27813      1       57238839   544205
>>>>>>>>>
>>>>>>>>> If you need more things, tell me; I'll leave the PoC in this
>>>>>>>>> state.
>>>>>>>>>
>>>>>>>>> Thanks
>>>>>>>>>
>>>>>>>>> Regards,
>>>>>>>>>
>>>>>>>>> Loïc Blot,
>>>>>>>>> UNIX Systems, Network and Security Engineer
>>>>>>>>> http://www.unix-experience.fr
>>>>>>>>>
>>>>>>>>> 21 December 2014 01:33, "Rick Macklem" wrote:
>>>>>>>>>
>>>>>>>>> Loic Blot wrote:
>>>>>>>>>
>>>>>>>>>> Hi Rick,
>>>>>>>>>> ok, I don't need locallocks; I hadn't understood what the
>>>>>>>>>> option was for, so I removed it.
>>>>>>>>>> I'll do more tests on Monday.
>>>>>>>>>> Thanks for the deadlock fix, for other people :)
>>>>>>>>>
>>>>>>>>> Good. Please let us know if running with
>>>>>>>>> vfs.nfsd.enable_locallocks=0
>>>>>>>>> gets rid of the deadlocks? (I think it fixes the one you saw.)
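
For reference, a sketch of trying that setting (assuming it is adjusted on
the server and made persistent via /etc/sysctl.conf; whether a running nfsd
picks it up immediately or needs a restart is not settled in this thread):

    # Disable local (advisory byte-range) locking in the NFS server, as
    # discussed above.
    sysctl vfs.nfsd.enable_locallocks=0
    # Keep the setting across reboots.
    echo 'vfs.nfsd.enable_locallocks=0' >> /etc/sysctl.conf
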
>>>>>>>>>
>>>>>>>>> On the performance side, you might also want to try different
>>>>>>>>> values of readahead, if the Linux client has such a mount
>>>>>>>>> option. (With the NFSv4-ZFS sequential vs random I/O
>>>>>>>>> heuristic, I have no idea what the optimal readahead value
>>>>>>>>> would be.)
>>>>>>>>>
>>>>>>>>> Good luck with it and please let us know how it goes, rick
>>>>>>>>> ps: I now have a patch to fix the deadlock when
>>>>>>>>> vfs.nfsd.enable_locallocks=1
>>>>>>>>> is set. I'll post it for anyone who is interested after I put
>>>>>>>>> it through some testing.
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Best regards,
>>>>>>>>> Loïc BLOT,
>>>>>>>>> UNIX systems, security and network engineer
>>>>>>>>> http://www.unix-experience.fr
>>>>>>>>>
>>>>>>>>> On Thursday, 18 December 2014 at 19:46 -0500, Rick Macklem
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>> Loic Blot wrote:
>>>>>>>>>> Hi rick,
>>>>>>>>>> I tried to start an LXC container on Debian Squeeze from my
>>>>>>>>>> FreeBSD ZFS+NFSv4 server and I also have a deadlock on nfsd
>>>>>>>>>> (vfs.lookup_shared=0). The deadlock occurs each time I
>>>>>>>>>> launch a Squeeze container, it seems (3 tries, 3 fails).
>>>>>>>>>
>>>>>>>>> Well, I'll take a look at this 'procstat -kk', but the only
>>>>>>>>> thing I've seen posted w.r.t. avoiding deadlocks in ZFS is to
>>>>>>>>> not use nullfs. (I have no idea if you are using any nullfs
>>>>>>>>> mounts, but if so, try getting rid of them.)
>>>>>>>>>
>>>>>>>>> Here's a high level post about the ZFS and vnode locking
>>>>>>>>> problem, but there is no patch available, as far as I know.
>>>>>>>>>
>>>>>>>>> http://docs.FreeBSD.org/cgi/mid.cgi?54739F41.8030407
>>>>>>>>>
>>>>>>>>> rick
>>>>>>>>>
>>>>>>>>> 921 - D 0:00.02 nfsd: server (nfsd)
>>>>>>>>>
>>>>>>>>> Here is the procstat -kk:
>>>>>>>>>
>>>>>>>>> PID TID COMM TDNAME KSTACK
>>>>>>>>> 921 100538 nfsd nfsd: master mi_switch+0xe1
>>>>>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0xc9e
>>>>>>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43
>>>>>>>>> nfsvno_advlock+0x119 nfsrv_dolocal+0x84 nfsrv_lockctrl+0x14ad
>>>>>>>>> nfsrvd_locku+0x283 nfsrvd_dorpc+0xec6 nfssvc_program+0x554
>>>>>>>>> svc_run_internal+0xc77 svc_run+0x1de nfsrvd_nfsd+0x1ca
>>>>>>>>> nfssvc_nfsd+0x107 sys_nfssvc+0x9c
>>>>>>>>> 921 100572 nfsd nfsd: service mi_switch+0xe1
>>>>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf
>>>>>>>>> _cv_wait_sig+0x16a
>>>>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a
>>>>>>>>> fork_trampoline+0xe
>>>>>>>>> [the "nfsd: service" threads with TIDs 100573 through 100615
>>>>>>>>> all show this same idle stack and are omitted here]
>>>>>>>>> 921 100616 nfsd nfsd: service mi_switch+0xe1
>>>>>>>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x9b
>>>>>>>>> nfsrv_getlockfile+0x179 nfsrv_lockctrl+0x21f
>>>>>>>>> nfsrvd_lock+0x5b1
>>>>>>>>> nfsrvd_dorpc+0xec6 nfssvc_program+0x554
>>>>>>>>> svc_run_internal+0xc77
>>>>>>>>> svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
>>>>>>>>> 921 100617 nfsd nfsd: service mi_switch+0xe1
>>>>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf
>>>>>>>>>
_cv_wait_sig+0x16a=0A>>>>>>>>>= svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>>>>>> f= ork_trampoline+0xe=0A>>>>>>>>> 921 100618 nfsd nfsd: service mi_switch+0x= e1=0A>>>>>>>>> sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66 nfsv4_lock+0x= 9b=0A>>>>>>>>> nfsrvd_dorpc+0x316 nfssvc_program+0x554=0A>>>>>>>>> svc_ru= n_internal+0xc77=0A>>>>>>>>> svc_thread_start+0xb fork_exit+0x9a fork_tra= mpoline+0xe=0A>>>>>>>>> 921 100619 nfsd nfsd: service mi_switch+0xe1=0A>>= >>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>>>>> _cv_wai= t_sig+0x16a=0A>>>>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_= exit+0x9a=0A>>>>>>>>> fork_trampoline+0xe=0A>>>>>>>>> 921 100620 nfsd nfs= d: service mi_switch+0xe1=0A>>>>>>>>> sleepq_catch_signals+0xab sleepq_wa= it_sig+0xf=0A>>>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>>>> svc_run_internal+0x= 87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>>>>>> fork_trampoline+0xe= =0A>>>>>>>>> 921 100621 nfsd nfsd: service mi_switch+0xe1=0A>>>>>>>>> sle= epq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>>>>> _cv_wait_sig+0x16a= =0A>>>>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>>>>>>> fork_trampoline+0xe=0A>>>>>>>>> 921 100622 nfsd nfsd: servic= e mi_switch+0xe1=0A>>>>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0x= f=0A>>>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>>>> svc_run_internal+0x87e svc_t= hread_start+0xb fork_exit+0x9a=0A>>>>>>>>> fork_trampoline+0xe=0A>>>>>>>>= > 921 100623 nfsd nfsd: service mi_switch+0xe1=0A>>>>>>>>> sleepq_catch_s= ignals+0xab sleepq_wait_sig+0xf=0A>>>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>>>= > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>>>>>> = fork_trampoline+0xe=0A>>>>>>>>> 921 100624 nfsd nfsd: service mi_switch+0= xe1=0A>>>>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>>>>>= _cv_wait_sig+0x16a=0A>>>>>>>>> svc_run_internal+0x87e svc_thread_start+0= xb fork_exit+0x9a=0A>>>>>>>>> fork_trampoline+0xe=0A>>>>>>>>> 921 100625 = nfsd nfsd: service mi_switch+0xe1=0A>>>>>>>>> sleepq_catch_signals+0xab s= leepq_wait_sig+0xf=0A>>>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>>>> svc_run_int= ernal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>>>>>> fork_trampoli= ne+0xe=0A>>>>>>>>> 921 100626 nfsd nfsd: service mi_switch+0xe1=0A>>>>>>>= >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>>>>> _cv_wait_sig= +0x16a=0A>>>>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+= 0x9a=0A>>>>>>>>> fork_trampoline+0xe=0A>>>>>>>>> 921 100627 nfsd nfsd: se= rvice mi_switch+0xe1=0A>>>>>>>>> sleepq_catch_signals+0xab sleepq_wait_si= g+0xf=0A>>>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>>>> svc_run_internal+0x87e s= vc_thread_start+0xb fork_exit+0x9a=0A>>>>>>>>> fork_trampoline+0xe=0A>>>>= >>>>> 921 100628 nfsd nfsd: service mi_switch+0xe1=0A>>>>>>>>> sleepq_cat= ch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>>>>> _cv_wait_sig+0x16a=0A>>>>= >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>>>= >>> fork_trampoline+0xe=0A>>>>>>>>> 921 100629 nfsd nfsd: service mi_swit= ch+0xe1=0A>>>>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>= >>>> _cv_wait_sig+0x16a=0A>>>>>>>>> svc_run_internal+0x87e svc_thread_sta= rt+0xb fork_exit+0x9a=0A>>>>>>>>> fork_trampoline+0xe=0A>>>>>>>>> 921 100= 630 nfsd nfsd: service mi_switch+0xe1=0A>>>>>>>>> sleepq_catch_signals+0x= ab sleepq_wait_sig+0xf=0A>>>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>>>> svc_run= _internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>>>>>> fork_tram= poline+0xe=0A>>>>>>>>> 921 100631 nfsd nfsd: service mi_switch+0xe1=0A>>>= 
>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>>>>> _cv_wait= _sig+0x16a=0A>>>>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_e= xit+0x9a=0A>>>>>>>>> fork_trampoline+0xe=0A>>>>>>>>> 921 100632 nfsd nfsd= : service mi_switch+0xe1=0A>>>>>>>>> sleepq_catch_signals+0xab sleepq_wai= t_sig+0xf=0A>>>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>>>> svc_run_internal+0x8= 7e svc_thread_start+0xb fork_exit+0x9a=0A>>>>>>>>> fork_trampoline+0xe=0A= >>>>>>>>> 921 100633 nfsd nfsd: service mi_switch+0xe1=0A>>>>>>>>> sleepq= _catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>>>>> _cv_wait_sig+0x16a=0A= >>>>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>= >>>>>>> fork_trampoline+0xe=0A>>>>>>>>> 921 100634 nfsd nfsd: service mi_= switch+0xe1=0A>>>>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>= >>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>>>> svc_run_internal+0x87e svc_thread= _start+0xb fork_exit+0x9a=0A>>>>>>>>> fork_trampoline+0xe=0A>>>>>>>>> 921= 100635 nfsd nfsd: service mi_switch+0xe1=0A>>>>>>>>> sleepq_catch_signal= s+0xab sleepq_wait_sig+0xf=0A>>>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>>>> svc= _run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>>>>>> fork_= trampoline+0xe=0A>>>>>>>>> 921 100636 nfsd nfsd: service mi_switch+0xe1= =0A>>>>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>>>>> _c= v_wait_sig+0x16a=0A>>>>>>>>> svc_run_internal+0x87e svc_thread_start+0xb = fork_exit+0x9a=0A>>>>>>>>> fork_trampoline+0xe=0A>>>>>>>>> 921 100637 nfs= d nfsd: service mi_switch+0xe1=0A>>>>>>>>> sleepq_catch_signals+0xab slee= pq_wait_sig+0xf=0A>>>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>>>> svc_run_intern= al+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>>>>>> fork_trampoline+= 0xe=0A>>>>>>>>> 921 100638 nfsd nfsd: service mi_switch+0xe1=0A>>>>>>>>> = sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>>>>> _cv_wait_sig+0x= 16a=0A>>>>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9= a=0A>>>>>>>>> fork_trampoline+0xe=0A>>>>>>>>> 921 100639 nfsd nfsd: servi= ce mi_switch+0xe1=0A>>>>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0= xf=0A>>>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>>>> svc_run_internal+0x87e svc_= thread_start+0xb fork_exit+0x9a=0A>>>>>>>>> fork_trampoline+0xe=0A>>>>>>>= >> 921 100640 nfsd nfsd: service mi_switch+0xe1=0A>>>>>>>>> sleepq_catch_= signals+0xab sleepq_wait_sig+0xf=0A>>>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>>= >> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>>>>>>= fork_trampoline+0xe=0A>>>>>>>>> 921 100641 nfsd nfsd: service mi_switch+= 0xe1=0A>>>>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>>>>= > _cv_wait_sig+0x16a=0A>>>>>>>>> svc_run_internal+0x87e svc_thread_start+= 0xb fork_exit+0x9a=0A>>>>>>>>> fork_trampoline+0xe=0A>>>>>>>>> 921 100642= nfsd nfsd: service mi_switch+0xe1=0A>>>>>>>>> sleepq_catch_signals+0xab = sleepq_wait_sig+0xf=0A>>>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>>>> svc_run_in= ternal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>>>>>> fork_trampol= ine+0xe=0A>>>>>>>>> 921 100643 nfsd nfsd: service mi_switch+0xe1=0A>>>>>>= >>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>>>>> _cv_wait_si= g+0x16a=0A>>>>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit= +0x9a=0A>>>>>>>>> fork_trampoline+0xe=0A>>>>>>>>> 921 100644 nfsd nfsd: s= ervice mi_switch+0xe1=0A>>>>>>>>> sleepq_catch_signals+0xab sleepq_wait_s= ig+0xf=0A>>>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>>>> svc_run_internal+0x87e = svc_thread_start+0xb fork_exit+0x9a=0A>>>>>>>>> fork_trampoline+0xe=0A>>>= >>>>>> 921 100645 nfsd nfsd: 
service mi_switch+0xe1=0A>>>>>>>>> sleepq_ca= tch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>>>>> _cv_wait_sig+0x16a=0A>>>= >>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>>= >>>> fork_trampoline+0xe=0A>>>>>>>>> 921 100646 nfsd nfsd: service mi_swi= tch+0xe1=0A>>>>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>= >>>>> _cv_wait_sig+0x16a=0A>>>>>>>>> svc_run_internal+0x87e svc_thread_st= art+0xb fork_exit+0x9a=0A>>>>>>>>> fork_trampoline+0xe=0A>>>>>>>>> 921 10= 0647 nfsd nfsd: service mi_switch+0xe1=0A>>>>>>>>> sleepq_catch_signals+0= xab sleepq_wait_sig+0xf=0A>>>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>>>> svc_ru= n_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>>>>>> fork_tra= mpoline+0xe=0A>>>>>>>>> 921 100648 nfsd nfsd: service mi_switch+0xe1=0A>>= >>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>>>>> _cv_wai= t_sig+0x16a=0A>>>>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_= exit+0x9a=0A>>>>>>>>> fork_trampoline+0xe=0A>>>>>>>>> 921 100649 nfsd nfs= d: service mi_switch+0xe1=0A>>>>>>>>> sleepq_catch_signals+0xab sleepq_wa= it_sig+0xf=0A>>>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>>>> svc_run_internal+0x= 87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>>>>>> fork_trampoline+0xe= =0A>>>>>>>>> 921 100650 nfsd nfsd: service mi_switch+0xe1=0A>>>>>>>>> sle= epq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>>>>> _cv_wait_sig+0x16a= =0A>>>>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a= =0A>>>>>>>>> fork_trampoline+0xe=0A>>>>>>>>> 921 100651 nfsd nfsd: servic= e mi_switch+0xe1=0A>>>>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0x= f=0A>>>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>>>> svc_run_internal+0x87e svc_t= hread_start+0xb fork_exit+0x9a=0A>>>>>>>>> fork_trampoline+0xe=0A>>>>>>>>= > 921 100652 nfsd nfsd: service mi_switch+0xe1=0A>>>>>>>>> sleepq_catch_s= ignals+0xab sleepq_wait_sig+0xf=0A>>>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>>>= > svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>>>>>> = fork_trampoline+0xe=0A>>>>>>>>> 921 100653 nfsd nfsd: service mi_switch+0= xe1=0A>>>>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>>>>>= _cv_wait_sig+0x16a=0A>>>>>>>>> svc_run_internal+0x87e svc_thread_start+0= xb fork_exit+0x9a=0A>>>>>>>>> fork_trampoline+0xe=0A>>>>>>>>> 921 100654 = nfsd nfsd: service mi_switch+0xe1=0A>>>>>>>>> sleepq_catch_signals+0xab s= leepq_wait_sig+0xf=0A>>>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>>>> svc_run_int= ernal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>>>>>> fork_trampoli= ne+0xe=0A>>>>>>>>> 921 100655 nfsd nfsd: service mi_switch+0xe1=0A>>>>>>>= >> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>>>>> _cv_wait_sig= +0x16a=0A>>>>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+= 0x9a=0A>>>>>>>>> fork_trampoline+0xe=0A>>>>>>>>> 921 100656 nfsd nfsd: se= rvice mi_switch+0xe1=0A>>>>>>>>> sleepq_catch_signals+0xab sleepq_wait_si= g+0xf=0A>>>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>>>> svc_run_internal+0x87e s= vc_thread_start+0xb fork_exit+0x9a=0A>>>>>>>>> fork_trampoline+0xe=0A>>>>= >>>>> 921 100657 nfsd nfsd: service mi_switch+0xe1=0A>>>>>>>>> sleepq_cat= ch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>>>>> _cv_wait_sig+0x16a=0A>>>>= >>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>>>= >>> fork_trampoline+0xe=0A>>>>>>>>> 921 100658 nfsd nfsd: service mi_swit= ch+0xe1=0A>>>>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>= >>>> _cv_wait_sig+0x16a=0A>>>>>>>>> svc_run_internal+0x87e svc_thread_sta= rt+0xb fork_exit+0x9a=0A>>>>>>>>> 
fork_trampoline+0xe=0A>>>>>>>>> 921 100= 659 nfsd nfsd: service mi_switch+0xe1=0A>>>>>>>>> sleepq_catch_signals+0x= ab sleepq_wait_sig+0xf=0A>>>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>>>> svc_run= _internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>>>>>> fork_tram= poline+0xe=0A>>>>>>>>> 921 100660 nfsd nfsd: service mi_switch+0xe1=0A>>>= >>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>>>>> _cv_wait= _sig+0x16a=0A>>>>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_e= xit+0x9a=0A>>>>>>>>> fork_trampoline+0xe=0A>>>>>>>>> 921 100661 nfsd nfsd= : service mi_switch+0xe1=0A>>>>>>>>> sleepq_catch_signals+0xab sleepq_wai= t_sig+0xf=0A>>>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>>>> svc_run_internal+0x8= 7e svc_thread_start+0xb fork_exit+0x9a=0A>>>>>>>>> fork_trampoline+0xe=0A= >>>>>>>>> 921 100662 nfsd nfsd: service mi_switch+0xe1=0A>>>>>>>>> sleepq= _catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>>>>> _cv_wait_sig+0x16a=0A= >>>>>>>>> svc_run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>= >>>>>>> fork_trampoline+0xe=0A>>>>>>>>> 921 100663 nfsd nfsd: service mi_= switch+0xe1=0A>>>>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>= >>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>>>> svc_run_internal+0x87e svc_thread= _start+0xb fork_exit+0x9a=0A>>>>>>>>> fork_trampoline+0xe=0A>>>>>>>>> 921= 100664 nfsd nfsd: service mi_switch+0xe1=0A>>>>>>>>> sleepq_catch_signal= s+0xab sleepq_wait_sig+0xf=0A>>>>>>>>> _cv_wait_sig+0x16a=0A>>>>>>>>> svc= _run_internal+0x87e svc_thread_start+0xb fork_exit+0x9a=0A>>>>>>>>> fork_= trampoline+0xe=0A>>>>>>>>> 921 100665 nfsd nfsd: service mi_switch+0xe1= =0A>>>>>>>>> sleepq_catch_signals+0xab sleepq_wait_sig+0xf=0A>>>>>>>>> _c= v_wait_sig+0x16a=0A>>>>>>>>> svc_run_internal+0x87e svc_thread_start+0xb = fork_exit+0x9a=0A>>>>>>>>> fork_trampoline+0xe=0A>>>>>>>>> 921 100666 nfs= d nfsd: service mi_switch+0xe1=0A>>>>>>>>> sleepq_wait+0x3a _sleep+0x287 = nfsmsleep+0x66 nfsv4_lock+0x9b=0A>>>>>>>>> nfsrv_setclient+0xbd nfsrvd_se= tclientid+0x3c8=0A>>>>>>>>> nfsrvd_dorpc+0xc76=0A>>>>>>>>> nfssvc_program= +0x554 svc_run_internal+0xc77=0A>>>>>>>>> svc_thread_start+0xb=0A>>>>>>>>= > fork_exit+0x9a fork_trampoline+0xe=0A>>>>>>>>> =0A>>>>>>>>> Regards,=0A= >>>>>>>>> =0A>>>>>>>>> Lo=C3=AFc Blot,=0A>>>>>>>>> UNIX Systems, Network = and Security Engineer=0A>>>>>>>>> http://www.unix-experience.fr=0A>>>>>>>= >> =0A>>>>>>>>> 15 d=C3=A9cembre 2014 15:18 "Rick Macklem" =0A>>>>>>>>> a=0A>>>>>>>>> =C3=A9crit:=0A>>>>>>>>> =0A>>>>>>>>> L= oic Blot wrote:=0A>>>>>>>>> =0A>>>>>>>>>> For more informations, here is = procstat -kk on nfsd, if you=0A>>>>>>>>>> need=0A>>>>>>>>>> more=0A>>>>>>= >>>> hot datas, tell me.=0A>>>>>>>>>> =0A>>>>>>>>>> Regards, PID TID COMM= TDNAME KSTACK=0A>>>>>>>>>> 918 100529 nfsd nfsd: master mi_switch+0xe1= =0A>>>>>>>>>> sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902=0A>>>>>= >>>>> vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43=0A>>>>>>>>>> zfs_= fhtovp+0x38d=0A>>>>>>>>>> nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorp= c+0x917=0A>>>>>>>>>> nfssvc_program+0x554 svc_run_internal+0xc77 svc_run+= 0x1de=0A>>>>>>>>>> nfsrvd_nfsd+0x1ca nfssvc_nfsd+0x107 sys_nfssvc+0x9c=0A= >>>>>>>>>> amd64_syscall+0x351=0A>>>>>>>>> =0A>>>>>>>>> Well, most of the= threads are stuck like this one, waiting=0A>>>>>>>>> for=0A>>>>>>>>> a= =0A>>>>>>>>> vnode=0A>>>>>>>>> lock in ZFS. All of them appear to be in z= fs_fhtovp().=0A>>>>>>>>> I`m not a ZFS guy, so I can`t help much. 
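For readers following along, stack snapshots like the ones quoted in
this thread come from procstat(1). A minimal sketch of how to collect
one, assuming the nfsd server process has PID 918 as in the listing
above (the PID and the output path are only placeholders):

  # find the nfsd server process, then dump kernel stacks for all of
  # its threads
  ps ax | grep "nfsd: server"
  procstat -kk 918 > /var/tmp/nfsd-stacks.txt

Threads sitting in _cv_wait_sig()/svc_run_internal() are just idle
service threads; the interesting ones are those blocked in
zfs_fhtovp() or nfsv4_lock().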
I'll try changing the subject line to include ZFS vnode lock, so maybe
the ZFS guys will take a look.

The only thing I've seen suggested is trying:
  sysctl vfs.lookup_shared=0
to disable shared vop_lookup()s. Apparently zfs_lookup() doesn't obey
the vnode locking rules for lookup and rename, according to the posting
I saw.

I've added a couple of comments about the other threads below, but they
are all either waiting for an RPC request or waiting for the threads
stuck on the ZFS vnode lock to complete.

rick

> 918 100564 nfsd nfsd: service
>   mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf
>   _cv_wait_sig+0x16a svc_run_internal+0x87e svc_thread_start+0xb
>   fork_exit+0x9a fork_trampoline+0xe

Fyi, this thread is just waiting for an RPC to arrive. (Normal)

> 918 100565-100570 nfsd nfsd: service
>   (same idle stack as 100564)
> 918 100571 nfsd nfsd: service
>   mi_switch+0xe1 sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66
>   nfsv4_lock+0x9b nfsrvd_dorpc+0x316 nfssvc_program+0x554
>   svc_run_internal+0xc77 svc_thread_start+0xb fork_exit+0x9a
>   fork_trampoline+0xe
> 918 100572 nfsd nfsd: service
>   mi_switch+0xe1 sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66
>   nfsv4_lock+0x9b nfsrv_setclient+0xbd nfsrvd_setclientid+0x3c8
>   nfsrvd_dorpc+0xc76 nfssvc_program+0x554 svc_run_internal+0xc77
>   svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe

This one (and a few others) are waiting for the nfsv4_lock. This
happens because other threads are stuck with RPCs in progress (i.e. the
ones waiting on the vnode lock in zfs_fhtovp()). For these, the RPC
needs to lock out other threads to do the operation, so it waits for
the nfsv4_lock(), which can exclusively lock the NFSv4 data structures
once all other nfsd threads complete their RPCs in progress.

> 918 100573 nfsd nfsd: service
>   (same nfsv4_lock stack as 100571)

Same as above.

> 918 100574-100607 nfsd nfsd: service
>   (all blocked on the ZFS vnode lock, with the same stack)
>   mi_switch+0xe1 sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0x902
>   vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 zfs_fhtovp+0x38d
>   nfsvno_fhtovp+0x7c nfsd_fhtovp+0xc8 nfsrvd_dorpc+0x917
>   nfssvc_program+0x554 svc_run_internal+0xc77 svc_thread_start+0xb
>   fork_exit+0x9a fork_trampoline+0xe

Lots more waiting for the ZFS vnode lock in zfs_fhtovp().

918 100608 nfsd nfsd: service
  mi_switch+0xe1 sleepq_wait+0x3a _sleep+0x287 nfsmsleep+0x66
  nfsv4_lock+0x9b nfsrv_getlockfile+0x179 nfsrv_lockctrl+0x21f
  nfsrvd_lock+0x5b1 nfsrvd_dorpc+0xec6 nfssvc_program+0x554
  svc_run_internal+0xc77 svc_thread_start+0xb fork_exit+0x9a
  fork_trampoline+0xe
918 100609 nfsd nfsd: service
  (blocked in zfs_fhtovp(), same stack as 100574-100607)
918 100610 nfsd nfsd: service
  mi_switch+0xe1 sleepq_wait+0x3a sleeplk+0x15d __lockmgr_args+0xc9e
  vop_stdlock+0x3c VOP_LOCK1_APV+0xab _vn_lock+0x43 nfsvno_advlock+0x119
  nfsrv_dolocal+0x84 nfsrv_lockctrl+0x14ad nfsrvd_locku+0x283
  nfsrvd_dorpc+0xec6 nfssvc_program+0x554 svc_run_internal+0xc77
  svc_thread_start+0xb fork_exit+0x9a fork_trampoline+0xe
918 100611-100618, 100621, 100623 nfsd nfsd: service
  (waiting on nfsv4_lock, same stack as 100571)
918 100619, 100620, 100622, 100624-100658 nfsd nfsd: service
  (blocked in zfs_fhtovp(), same stack as 100574-100607)

Loïc Blot,
UNIX Systems, Network and Security Engineer
http://www.unix-experience.fr

15 December 2014 13:29, "Loïc Blot" wrote:

Hmmm... now I'm experiencing a deadlock.

0 918 915 0 21 0 12352 3372 zfs D - 1:48.64 nfsd: server (nfsd)

The only way out was to reboot the server, but after rebooting the
deadlock arrives a second time when I start my jails over NFS.

Regards,

Loïc Blot,
UNIX Systems, Network and Security Engineer
http://www.unix-experience.fr

15 December 2014 10:07, "Loïc Blot" wrote:

Hi Rick,
after talking with my N+1, NFSv4 is required on our infrastructure.
I tried to upgrade the NFSv4+ZFS server from 9.3 to 10.1; I hope this
will resolve some issues...

Regards,

Loïc Blot,
UNIX Systems, Network and Security Engineer
http://www.unix-experience.fr

10 December 2014 15:36, "Loïc Blot" wrote:

Hi Rick,
thanks for your suggestion.
For my locking bug, rpc.lockd is stuck in rpcrecv state on the server.
kill -9 doesn't affect the process, it's blocked.... (State: Ds)

For the performance:

NFSv3: 60Mbps
NFSv4: 45Mbps

Regards,

Loïc Blot,
UNIX Systems, Network and Security Engineer
http://www.unix-experience.fr

10 December 2014 13:56, "Rick Macklem" wrote:

Loic Blot wrote:

> Hi Rick,
> I'm trying NFSv3.
> Some jails are starting very well, but now I have an issue with lockd
> after some minutes:
>
> nfs server 10.10.X.8:/jails: lockd not responding
> nfs server 10.10.X.8:/jails lockd is alive again
>
> I looked at mbufs, but it seems there is no problem.

Well, if you need locks to be visible across multiple clients, then I'm
afraid you are stuck with using NFSv4 and the performance you get from
it.
>>>>>>>>> (There is no way to do file handle affinity for NFSv4 because the read and write ops are buried in the compound RPC and not easily recognized.)
>>>>>>>>>
>>>>>>>>> If the locks don't need to be visible across multiple clients, I'd suggest trying the "nolockd" option with nfsv3.
>>>>>>>>>
>>>>>>>>>> Here is my rc.conf on server:
>>>>>>>>>>
>>>>>>>>>> nfs_server_enable="YES"
>>>>>>>>>> nfsv4_server_enable="YES"
>>>>>>>>>> nfsuserd_enable="YES"
>>>>>>>>>> nfsd_server_flags="-u -t -n 256"
>>>>>>>>>> mountd_enable="YES"
>>>>>>>>>> mountd_flags="-r"
>>>>>>>>>> nfsuserd_flags="-usertimeout 0 -force 20"
>>>>>>>>>> rpcbind_enable="YES"
>>>>>>>>>> rpc_lockd_enable="YES"
>>>>>>>>>> rpc_statd_enable="YES"
>>>>>>>>>>
>>>>>>>>>> Here is the client:
>>>>>>>>>>
>>>>>>>>>> nfsuserd_enable="YES"
>>>>>>>>>> nfsuserd_flags="-usertimeout 0 -force 20"
>>>>>>>>>> nfscbd_enable="YES"
>>>>>>>>>> rpc_lockd_enable="YES"
>>>>>>>>>> rpc_statd_enable="YES"
>>>>>>>>>>
>>>>>>>>>> Have you got an idea ?
>>>>>>>>>>
>>>>>>>>>> Regards,
>>>>>>>>>>
>>>>>>>>>> Loïc Blot,
>>>>>>>>>> UNIX Systems, Network and Security Engineer
>>>>>>>>>> http://www.unix-experience.fr
>>>>>>>>>>
>>>>>>>>>> 9 décembre 2014 04:31 "Rick Macklem" a écrit:
>>>>>>>>>>> Loic Blot wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Hi rick,
>>>>>>>>>>>>
>>>>>>>>>>>> I waited 3 hours (no lag at jail launch) and now I do: sysrc memcached_flags="-v -m 512"
>>>>>>>>>>>> Command was very very slow...
>>>>>>>>>>>>
>>>>>>>>>>>> Here is a dd over NFS:
>>>>>>>>>>>>
>>>>>>>>>>>> 601062912 bytes transferred in 21.060679 secs (28539579 bytes/sec)
>>>>>>>>>>>
>>>>>>>>>>> Can you try the same read using an NFSv3 mount?
>>>>>>>>>>> (If it runs much faster, you have probably been bitten by the ZFS "sequential vs random" read heuristic which I've been told things NFS is doing "random" reads without file handle affinity. File handle affinity is very hard to do for NFSv4, so it isn't done.)
>>>>>>>>>
>>>>>>>>> I was actually suggesting that you try the "dd" over nfsv3 to see how the performance compared with nfsv4. If you do that, please post the comparable results.
>>>>>>>>>
>>>>>>>>> Someday I would like to try and get ZFS's sequential vs random read heuristic modified and any info on what difference in performance that might make for NFS would be useful.
>>>>>>>>>
>>>>>>>>> rick
>>>>>>>>>
>>>>>>>>> rick
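For readers following along, the client-side comparison suggested above can be sketched roughly as follows. This is only an illustration: the server address 10.10.X.8 and the /jails export are the ones mentioned in the thread, while the local mount points and the test file name are made up.

  # NFSv3 mount, client-side locking disabled, 32K transfer sizes (mount point is hypothetical)
  mount -t nfs -o nfsv3,tcp,nolockd,rsize=32768,wsize=32768 10.10.X.8:/jails /mnt/jails_v3

  # NFSv4 mount of the same export, for comparison
  mount -t nfs -o nfsv4,tcp,rsize=32768,wsize=32768 10.10.X.8:/jails /mnt/jails_v4

  # Run the same sequential read against both mounts (test.dd is a placeholder file)
  dd if=/mnt/jails_v3/test.dd of=/dev/null bs=1m
  dd if=/mnt/jails_v4/test.dd of=/dev/null bs=1m

If the NFSv3 numbers come out much higher, that points at the sequential-vs-random read heuristic described above rather than at the network.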
>>>>>>>>> This is quite slow...
>>>>>>>>>
>>>>>>>>> You can found some nfsstat below (command isn't finished yet)
>>>>>>>>>
>>>>>>>>> nfsstat -c -w 1
>>>>>>>>>
>>>>>>>>> GtAttr Lookup Rdlink Read Write Rename Access Rddir
>>>>>>>>> 0 0 0 0 0 0 0 0
>>>>>>>>> 4 0 0 0 0 0 16 0
>>>>>>>>> 2 0 0 0 0 0 17 0
>>>>>>>>> 0 0 0 0 0 0 0 0
>>>>>>>>> 0 0 0 0 0 0 0 0
>>>>>>>>> 0 0 0 0 0 0 0 0
>>>>>>>>> 0 0 0 0 0 0 0 0
>>>>>>>>> 0 4 0 0 0 0 4 0
>>>>>>>>> 0 0 0 0 0 0 0 0
>>>>>>>>> 0 0 0 0 0 0 0 0
>>>>>>>>> 0 0 0 0 0 0 0 0
>>>>>>>>> 0 0 0 0 0 0 0 0
>>>>>>>>> 4 0 0 0 0 0 3 0
>>>>>>>>> 0 0 0 0 0 0 3 0
>>>>>>>>> 37 10 0 8 0 0 14 1
>>>>>>>>> 18 16 0 4 1 2 4 0
>>>>>>>>> 78 91 0 82 6 12 30 0
>>>>>>>>> 19 18 0 2 2 4 2 0
>>>>>>>>> 0 0 0 0 2 0 0 0
>>>>>>>>> 0 0 0 0 0 0 0 0
>>>>>>>>> [the remaining blocks of the nfsstat output, roughly a hundred more one-second samples, are mostly all-zero rows with occasional bursts of GtAttr/Read/Write activity in the 30-100 per second range]
>>>>>>>>>
>>>>>>>>> Regards,
>>>>>>>>>
>>>>>>>>> Loïc Blot,
>>>>>>>>> UNIX Systems, Network and Security Engineer
>>>>>>>>> http://www.unix-experience.fr
>>>>>>>>>
>>>>>>>>> 8 décembre 2014 09:36 "Loïc Blot" a écrit:
>>>>>>>>>> Hi Rick,
>>>>>>>>>> I stopped the jails this week-end and started it this morning, i'll give you some stats this week.
>>>>>>>>>>
>>>>>>>>>> Here is my nfsstat -m output (with your rsize/wsize tweaks)
>>>>>>>>>>
>>>>>>>>>> nfsv4,tcp,resvport,hard,cto,sec=sys,acdirmin=3,acdirmax=60,acregmin=5,acregmax=60,nametimeo=60,negnametimeo=60,rsize=32768,wsize=32768,readdirsize=32768,readahead=1,wcommitsize=773136,timeout=120,retrans=2147483647
>>>>>>>>>
>>>>>>>>> On server side my disks are on a raid controller which show a 512b volume and write performances are very honest (dd if=/dev/zero of=/jails/test.dd bs=4096 count=100000000 => 450MBps)
>>>>>>>>>
>>>>>>>>> Regards,
>>>>>>>>>
>>>>>>>>> Loïc Blot,
>>>>>>>>> UNIX Systems, Network and Security Engineer
>>>>>>>>> http://www.unix-experience.fr
>>>>>>>>>
>>>>>>>>> 5 décembre 2014 15:14 "Rick Macklem" a écrit:
>>>>>>>>>
>>>>>>>>> Loic Blot wrote:
>>>>>>>>>
>>>>>>>>> Hi,
>>>>>>>>> i'm trying to create a virtualisation environment based on jails. Those jails are stored under a big ZFS pool on a FreeBSD 9.3 which export a NFSv4 volume.
>>>>>>>>> This NFSv4 volume was mounted on a big hypervisor (2 Xeon E5v3 + 128GB memory and 8 ports (but only 1 was used at this time).
>>>>>>>>>
>>>>>>>>> The problem is simple, my hypervisors runs 6 jails (used 1% cpu and 10GB RAM approximatively and less than 1MB bandwidth) and works fine at start but the system slows down and after 2-3 days become unusable. When i look at top command i see 80-100% on system and commands are very very slow. Many process are tagged with nfs_cl*.
>>>>>>>>>
>>>>>>>>> To be honest, I would expect the slowness to be because of slow response from the NFSv4 server, but if you do:
>>>>>>>>> # ps axHl
>>>>>>>>> on a client when it is slow and post that, it would give us some more information on where the client side processes are sitting.
>>>>>>>>> If you also do something like:
>>>>>>>>> # nfsstat -c -w 1
>>>>>>>>> and let it run for a while, that should show you how many RPCs are being done and which ones.
>>>>>>>>>
>>>>>>>>> # nfsstat -m
>>>>>>>>> will show you what your mount is actually using.
>>>>>>>>> The only mount option I can suggest trying is "rsize=32768,wsize=32768", since some network environments have difficulties with 64K.
>>>>>>>>>
>>>>>>>>> There are a few things you can try on the NFSv4 server side, if it appears that the clients are generating a large RPC load.
>>>>>>>>> - disabling the DRC cache for TCP by setting vfs.nfsd.cachetcp=0
>>>>>>>>> - If the server is seeing a large write RPC load, then "sync=disabled" might help, although it does run a risk of data loss when the server crashes.
>>>>>>>>> Then there are a couple of other ZFS related things (I'm not a ZFS guy, but these have shown up on the mailing lists).
>>>>>>>>> - make sure your volumes are 4K aligned and ashift=12 (in case a drive that uses 4K sectors is pretending to be 512byte sectored)
>>>>>>>>> - never run over 70-80% full if write performance is an issue
>>>>>>>>> - use a zil on an SSD with good write performance
>>>>>>>>>
>>>>>>>>> The only NFSv4 thing I can tell you is that it is known that ZFS's algorithm for determining sequential vs random I/O fails for NFSv4 during writing and this can be a performance hit. The only workaround is to use NFSv3 mounts, since file handle affinity apparently fixes the problem and this is only done for NFSv3.
>>>>>>>>>
>>>>>>>>> rick
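As a rough sketch of the server-side knobs listed above (illustrative only: the pool and dataset names tank and tank/jails and the gpt/slog0 label are placeholders, and sync=disabled carries the data-loss risk pointed out above):

  # Disable the DRC cache for TCP
  sysctl vfs.nfsd.cachetcp=0

  # Trade write durability for throughput on the exported dataset (data-loss risk on a crash)
  zfs set sync=disabled tank/jails

  # Have newly created vdevs use 4K alignment (ashift=12), where this sysctl is available
  sysctl vfs.zfs.min_auto_ashift=12

  # Put the ZIL (SLOG) on a fast SSD partition
  zpool add tank log gpt/slog0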
>>>>>>>>> I saw that there are TSO issues with igb then i'm trying to disable it with sysctl but the situation wasn't solved.
>>>>>>>>>
>>>>>>>>> Someone has got ideas ? I can give you more informations if you need.
>>>>>>>>>
>>>>>>>>> Thanks in advance.
>>>>>>>>> Regards,
>>>>>>>>>
>>>>>>>>> Loïc Blot,
>>>>>>>>> UNIX Systems, Network and Security Engineer
>>>>>>>>> http://www.unix-experience.fr
>>>>>>>>> _______________________________________________
>>>>>>>>> freebsd-fs@freebsd.org mailing list
>>>>>>>>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
>>>>>>>>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
From owner-freebsd-fs@FreeBSD.ORG Wed Jan 14 16:38:11 2015 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 19AE9F5A for ; Wed, 14 Jan 2015 16:38:11 +0000 (UTC) Received: from kirk-ext.obspm.fr (kirk-ext.obspm.fr [145.238.193.7]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "*.obspm.fr", Issuer "TERENA SSL CA" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id ADC579EA for ; Wed, 14 Jan 2015 16:38:10 +0000 (UTC) Received: from pcjas.obspm.fr (pcjas.obspm.fr [145.238.184.233]) (authenticated bits=0) by kirk-ext.obspm.fr (8.14.4/8.14.4/DIO Observatoire de Paris - 15/04/10) with ESMTP id t0EGc6O6021020 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Wed, 14 Jan 2015 17:38:07 +0100 Date: Wed, 14 Jan 2015 17:38:49 +0100 From: Albert Shih To: Linda Kateley Subject: Re: How many ram...
Message-ID: <20150114163849.GA97640@pcjas.obspm.fr> References: <20150113105240.GA33162@pcjas.obspm.fr> <54B528AC.9090901@kateley.com> MIME-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1 Content-Disposition: inline Content-Transfer-Encoding: 8bit In-Reply-To: <54B528AC.9090901@kateley.com> User-Agent: Mutt/1.5.23 (2014-03-12) X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.3.9 (kirk-ext.obspm.fr [145.238.193.20]); Wed, 14 Jan 2015 17:38:07 +0100 (CET) X-Virus-Scanned: clamav-milter 0.98.5 at kirk-ext.obspm.fr X-Virus-Status: Clean Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 14 Jan 2015 16:38:11 -0000 Le 13/01/2015 à 08:16:12-0600, Linda Kateley a écrit > Jas, > > Most of those rules of thumbs are not valid. ZFS doesn't really need to > keep data about metadata in ram. It keeps recently used and frequently > used items in cache. There are some per disk caches but by default those > are pretty small. > > I have a blog on a group that has a 350TB archive/backup system with > 32GB ram. http://kateleyco.com/?p=815 That's very conforting ;-) That's mean I didn't need to by 1To of Ram when I go to 1Po file server. > > Everything is dependent on use case. If you have many users all using > the same file, frequently.. that will be cached. Sizing workload helps. Thanks you to everyone. Regards. JAS -- Albert SHIH DIO bâtiment 15 Observatoire de Paris 5 Place Jules Janssen 92195 Meudon Cedex France Téléphone : +33 1 45 07 76 26/+33 6 86 69 95 71 xmpp: jas@obspm.fr Heure local/Local time: mer 14 jan 2015 17:35:13 CET From owner-freebsd-fs@FreeBSD.ORG Wed Jan 14 16:57:45 2015 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id EEC936F8 for ; Wed, 14 Jan 2015 16:57:45 +0000 (UTC) Received: from mail-ie0-f169.google.com (mail-ie0-f169.google.com [209.85.223.169]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id B68F4BFA for ; Wed, 14 Jan 2015 16:57:45 +0000 (UTC) Received: by mail-ie0-f169.google.com with SMTP id y20so9939546ier.0 for ; Wed, 14 Jan 2015 08:57:39 -0800 (PST) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:message-id:date:from:reply-to:organization :user-agent:mime-version:to:cc:subject:references:in-reply-to :content-type:content-transfer-encoding; bh=ibOmvctrWyC4lqCw00mgj9lCmfVmYBu2RM4q2J8xGdI=; b=dM86D0IXKHZuVxTclpayvHWcPpF/2EUKDxjCLu3HXsZitIyVBIGii90asx5W4Pwr26 +XY1OIzVysVb1qoITzV7YAsDUTI63wrCPiglmzL55uh43hE0QqQXLV2BcpjTf2UNFGMJ L+NprDhgZ5baTOaUuzCvNBB4I4jE59BLXVmvh2Dd0p0fPW3p/I8aOsQqj72Ex+0DoEN5 nxhJyOYGSXY6MdbK1K2xc4t3OVFHWjxnSDsl5ie6mvXig8c7SmA3CLvHOC97+j1g4O0k St95y2b2Uu2Obp3G4PR7hNiuqlmWVfnb6kZbqYYrUSFqDjeecLoPKDluon6buhXHTfjZ kpVA== X-Gm-Message-State: ALoCoQmiuy9o1wZecdwKVNpSYv+I7G2BIoqaEyFBWhrM52oI3MwERFuE8XHYJuE+Ayu1YHvadz9P X-Received: by 10.50.79.167 with SMTP id k7mr5390524igx.26.1421254220445; Wed, 14 Jan 2015 08:50:20 -0800 (PST) Received: from [192.168.0.18] ([63.231.252.189]) by mx.google.com with ESMTPSA id 
g20sm8166574igt.14.2015.01.14.08.50.19 (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Wed, 14 Jan 2015 08:50:19 -0800 (PST) Message-ID: <54B69E4A.9010402@kateley.com> Date: Wed, 14 Jan 2015 10:50:18 -0600 From: Linda Kateley Reply-To: linda@kateley.com Organization: Kateley Company User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.10; rv:31.0) Gecko/20100101 Thunderbird/31.3.0 MIME-Version: 1.0 To: Albert Shih Subject: Re: How many ram... References: <20150113105240.GA33162@pcjas.obspm.fr> <54B528AC.9090901@kateley.com> <20150114163849.GA97640@pcjas.obspm.fr> In-Reply-To: <20150114163849.GA97640@pcjas.obspm.fr> Content-Type: text/plain; charset=windows-1252; format=flowed Content-Transfer-Encoding: 8bit Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 14 Jan 2015 16:57:46 -0000 I will say one more thing.. I also have a customer who uses zfs for security camera storage. The cameras deliver 100's of k bytes per minute... But they save the data for a very very very long time. That kind of system would need very little ram(maybe 8GB) but lots and lots of disk. lk On 1/14/15 10:38 AM, Albert Shih wrote: > Le 13/01/2015 à 08:16:12-0600, Linda Kateley a écrit >> Jas, >> >> Most of those rules of thumbs are not valid. ZFS doesn't really need to >> keep data about metadata in ram. It keeps recently used and frequently >> used items in cache. There are some per disk caches but by default those >> are pretty small. >> >> I have a blog on a group that has a 350TB archive/backup system with >> 32GB ram. http://kateleyco.com/?p=815 > That's very conforting ;-) That's mean I didn't need to by 1To of Ram when > I go to 1Po file server. > >> Everything is dependent on use case. If you have many users all using >> the same file, frequently.. that will be cached. Sizing workload helps. > Thanks you to everyone. > > Regards. > > JAS > > -- > Albert SHIH > DIO bâtiment 15 > Observatoire de Paris > 5 Place Jules Janssen > 92195 Meudon Cedex > France > Téléphone : +33 1 45 07 76 26/+33 6 86 69 95 71 > xmpp: jas@obspm.fr > Heure local/Local time: > mer 14 jan 2015 17:35:13 CET From owner-freebsd-fs@FreeBSD.ORG Wed Jan 14 17:00:11 2015 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id B0682794 for ; Wed, 14 Jan 2015 17:00:11 +0000 (UTC) Received: from kirk-ext.obspm.fr (kirk-ext.obspm.fr [145.238.193.7]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "*.obspm.fr", Issuer "TERENA SSL CA" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 4DFAFC16 for ; Wed, 14 Jan 2015 17:00:10 +0000 (UTC) Received: from pcjas.obspm.fr (pcjas.obspm.fr [145.238.184.233]) (authenticated bits=0) by kirk-ext.obspm.fr (8.14.4/8.14.4/DIO Observatoire de Paris - 15/04/10) with ESMTP id t0EH07LV020290 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Wed, 14 Jan 2015 18:00:08 +0100 Date: Wed, 14 Jan 2015 18:00:51 +0100 From: Albert Shih To: linda@kateley.com Subject: Re: How many ram... 
Message-ID: <20150114170051.GB97640@pcjas.obspm.fr> References: <20150113105240.GA33162@pcjas.obspm.fr> <54B528AC.9090901@kateley.com> <20150114163849.GA97640@pcjas.obspm.fr> <54B69E4A.9010402@kateley.com> MIME-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1 Content-Disposition: inline Content-Transfer-Encoding: 8bit In-Reply-To: <54B69E4A.9010402@kateley.com> User-Agent: Mutt/1.5.23 (2014-03-12) X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.3.9 (kirk-ext.obspm.fr [145.238.193.20]); Wed, 14 Jan 2015 18:00:08 +0100 (CET) X-Virus-Scanned: clamav-milter 0.98.5 at kirk-ext.obspm.fr X-Virus-Status: Clean Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 14 Jan 2015 17:00:11 -0000 Le 14/01/2015 à 10:50:18-0600, Linda Kateley a écrit > I will say one more thing.. I also have a customer who uses zfs for > security camera storage. The cameras deliver 100's of k bytes per > minute... But they save the data for a very very very long time. That > kind of system would need very little ram(maybe 8GB) but lots and lots > of disk. Thanks you very much. May I ask you something (feel free to not answer of course ). I saw on your > >> 32GB ram. http://kateleyco.com/?p=815 you have install I quote «hey will have 252 4TB drives in 6 45-drive chassis with multiple controllers» do you have any idea how many pool they have ? how many disk they put in one raid ? how many raid they put in one pool ? Actually I've one server a very big pool (I known some tell me it's too big) with 72 disks in 6 raidz2. Thanks. Regards. JAS -- Albert SHIH DIO bâtiment 15 Observatoire de Paris 5 Place Jules Janssen 92195 Meudon Cedex France Téléphone : +33 1 45 07 76 26/+33 6 86 69 95 71 xmpp: jas@obspm.fr Heure local/Local time: mer 14 jan 2015 17:54:33 CET From owner-freebsd-fs@FreeBSD.ORG Wed Jan 14 17:28:35 2015 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 05E60CC6 for ; Wed, 14 Jan 2015 17:28:35 +0000 (UTC) Received: from mail-oi0-x22e.google.com (mail-oi0-x22e.google.com [IPv6:2607:f8b0:4003:c06::22e]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id B9169F00 for ; Wed, 14 Jan 2015 17:28:34 +0000 (UTC) Received: by mail-oi0-f46.google.com with SMTP id a3so8385868oib.5 for ; Wed, 14 Jan 2015 09:28:34 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=DUo8jUq6PmonmKJ3RhJt3pbFTUXves40HOxE7AwEo2s=; b=e79VtMcjLxae2nHgCgPVsEeZgdFgQM5omUuSMEcSsV4xet1jTpiM7uvqhx/tWFbaKP bp20SS9XNLWS5RWAKbtWgtBw+ZvVVdYoOlTE1pzji0JFv1u60VkMx9fQpSMn5l3/NgFV FIksgn3uQKTNMChULZwmJZWlhhxmnK2bUR6r6ZGVkMg0zbzxv9Mha4MZ6jDJgQJ2YERa Pi5xFGyQ3CvzMlEGGNjGlYUiywMCmAbz6gGXOJqBONZVWHs8WN6k3+HH51Rhz8f62LJl c9f5OpLL1GeWi9vUtP1ddntfK6DDbuvvQHvRTgjHLM7vHGeJxuuCipCbsdabZu1b0ml2 UbBA== MIME-Version: 1.0 X-Received: by 10.202.54.86 with SMTP id d83mr2951186oia.55.1421256514203; Wed, 14 Jan 2015 09:28:34 -0800 (PST) Received: by 10.202.76.71 with HTTP; Wed, 14 Jan 2015 09:28:34 
-0800 (PST) In-Reply-To: <20150114170051.GB97640@pcjas.obspm.fr> References: <20150113105240.GA33162@pcjas.obspm.fr> <54B528AC.9090901@kateley.com> <20150114163849.GA97640@pcjas.obspm.fr> <54B69E4A.9010402@kateley.com> <20150114170051.GB97640@pcjas.obspm.fr> Date: Wed, 14 Jan 2015 09:28:34 -0800 Message-ID: Subject: Re: How many ram... From: Freddie Cash To: Albert Shih Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.18-1 Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 14 Jan 2015 17:28:35 -0000

On Wed, Jan 14, 2015 at 9:00 AM, Albert Shih wrote:

> Le 14/01/2015 à 10:50:18-0600, Linda Kateley a écrit
> > I will say one more thing.. I also have a customer who uses zfs for
> > security camera storage. The cameras deliver 100's of k bytes per
> > minute... But they save the data for a very very very long time. That
> > kind of system would need very little ram(maybe 8GB) but lots and lots
> > of disk.
>
> Thanks you very much.
>
> May I ask you something (feel free to not answer of course ). I saw on your
>
> > >> 32GB ram. http://kateleyco.com/?p=815
>
> you have install I quote
>
> «hey will have 252 4TB drives in 6 45-drive chassis with multiple
> controllers»
>
> do you have any idea how many pool they have ? how many disk they put in
> one raid ? how many raid they put in one pool ?
>
> Actually I've one server a very big pool (I known some tell me it's too
> big) with 72 disks in 6 raidz2.

We have two storage systems with 90 harddrives each (2 TB drives). These are setup with 2 45-drive 4U JBOD chassis each, and a 2U head unit with SSDs for the OS and L2ARC/ZIL, and the SATA controllers. The way the hardware is configured, they can each handle another 2 JBOD chassis without daisy-chaining anything.

Using 6-disk raidz2 vdevs, for a total of 15 raidz2 vdevs per storage pool.

These are backup systems and an off-site replication system, so storage throughput and IOps wasn't super critical, while storage space and manageability were. They only have gigabit NICs, and can saturate those while running backups or "zfs send" / "zfs recv".

They do have 128 GB of RAM, though, more as a "get it now while it's cheap instead of waiting until we need it" than anything. And they have dual 8-core AMD Opteron CPUs (again, more as a "they're inexpensive now, so get as much as we can afford" than any real need for it).

One of them has dedupe enabled (yeah, yeah, we know, we're moving away from it, it's actually the last one with it enabled), and actually does use the RAM for DDT storage in the ARC. The other one doesn't have dedupe enable, and most of the RAM sits "idle".

--
Freddie Cash
fjwcash@gmail.com
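For illustration, a pool laid out the way described above (6-disk raidz2 vdevs, 15 of them per pool) could be built and grown roughly like this; the pool name and the da* device names are placeholders, not taken from the thread:

  # Start the pool with the first two 6-disk raidz2 vdevs
  zpool create backup1 \
      raidz2 da0 da1 da2 da3 da4 da5 \
      raidz2 da6 da7 da8 da9 da10 da11

  # Each further set of six disks is added as another raidz2 vdev, up to 15 vdevs
  zpool add backup1 raidz2 da12 da13 da14 da15 da16 da17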
From owner-freebsd-fs@FreeBSD.ORG Wed Jan 14 18:03:13 2015 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 465C57FD for ; Wed, 14 Jan 2015 18:03:13 +0000 (UTC) Received: from Exch2-3.slu.se (webmail.slu.se [77.235.224.123]) (using TLSv1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits)) (Client CN "webmail.slu.se", Issuer "TERENA SSL CA" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 973B53D4 for ; Wed, 14 Jan 2015 18:03:12 +0000 (UTC) Received: from exch2-4.slu.se (77.235.224.124) by Exch2-3.slu.se (77.235.224.123) with Microsoft SMTP Server (TLS) id 15.0.995.29; Wed, 14 Jan 2015 18:48:00 +0100 Received: from exch2-4.slu.se ([::1]) by exch2-4.slu.se ([fe80::4173:e97d:6ba9:312b%23]) with mapi id 15.00.0995.028; Wed, 14 Jan 2015 18:47:59 +0100 From: Karli Sjöberg To: Freddie Cash Subject: Re: How many ram... Thread-Topic: How many ram... Thread-Index: AQHQMCI8w7MlzEXaJk2wFwifPECHHA== Date: Wed, 14 Jan 2015 17:47:59 +0000 Message-ID: <6a3129720b4a439994841c28df676cd1@exch2-4.slu.se> Accept-Language: sv-SE, en-US Content-Language: sv-SE X-MS-Has-Attach: X-MS-TNEF-Correlator: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: base64 X-Content-Filtered-By: Mailman/MimeDel 2.1.18-1 Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 14 Jan 2015 18:03:13 -0000

Den 14 jan 2015 18:28 skrev Freddie Cash <fjwcash@gmail.com>:
>
> On Wed, Jan 14, 2015 at 9:00 AM, Albert Shih <Albert.Shih@obspm.fr> wrote:
>
> > Le 14/01/2015 à 10:50:18-0600, Linda Kateley a écrit
> > > I will say one more thing.. I also have a customer who uses zfs for
> > > security camera storage. The cameras deliver 100's of k bytes per
> > > minute... But they save the data for a very very very long time. That
> > > kind of system would need very little ram(maybe 8GB) but lots and lots
> > > of disk.
> >
> > Thanks you very much.
> >
> > May I ask you something (feel free to not answer of course ). I saw on your
> >
> > > >> 32GB ram. http://kateleyco.com/?p=815
> >
> > you have install I quote
> >
> >   «hey will have 252 4TB drives in 6 45-drive chassis with multiple
> >   controllers»
> >
> > do you have any idea how many pool they have ? how many disk they put in
> > one raid ? how many raid they put in one pool ?
> >
> > Actually I've one server a very big pool (I known some tell me it's too
> > big) with 72 disks in 6 raidz2.
>
> We have two storage systems with 90 harddrives each (2 TB drives).  These
> are setup with 2 45-drive 4U JBOD chassis each, and a 2U head unit with
> SSDs for the OS and L2ARC/ZIL, and the SATA controllers.  The way the
> hardware is configured, they can each handle another 2 JBOD chassis without
> daisy-chaining anything.
>
> Using 6-disk raidz2 vdevs, for a total of 15 raidz2 vdevs per storage
> pool.
>
> These are backup systems and an off-site replication system, so storage
> throughput and IOps wasn't super critical, while storage space and
> manageability were.  They only have gigabit NICs, and can saturate those
> while running backups or "zfs send" / "zfs recv".
>
> They do have 128 GB of RAM, though, more as a "get it now while it's cheap
> instead of waiting until we need it" than anything.  And they have dual
> 8-core AMD Opteron CPUs (again, more as a "they're inexpensive now, so get
> as much as we can afford" than any real need for it).
>
> One of them has dedupe enabled (yeah, yeah, we know, we're moving away from
> it, it's actually the last one with it enabled),

But what about all of the savings you were benefitting from? Wasn't it like 10x dedup savings or something, I know I've asked before at the forums but a person forgets... What's made you change your mind?

/K (a.k.a Sebulon)

> and actually does use the
> RAM for DDT storage in the ARC.  The other one doesn't have dedupe enable,
> and most of the RAM sits "idle".
>
> --
> Freddie Cash
> fjwcash@gmail.com
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"

From owner-freebsd-fs@FreeBSD.ORG Wed Jan 14 18:03:37 2015 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 2F82F870 for ; Wed, 14 Jan 2015 18:03:37 +0000 (UTC) Received: from mail-ob0-x233.google.com (mail-ob0-x233.google.com [IPv6:2607:f8b0:4003:c01::233]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id E48873DD for ; Wed, 14 Jan 2015 18:03:36 +0000 (UTC) Received: by mail-ob0-f179.google.com with SMTP id nt9so9325156obb.10 for ; Wed, 14 Jan 2015 10:03:36 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=xKQ6lFptTB7LhAhrHdDGd9uOjfjH1jtelXo6gpCjGfc=; b=x0vrXJQv4cxuOPajlLVz14CILUpU/jwPkFlwz1jtpeE4x0F7u8TON59Jb0RAA5hu5U DpY/ish0vC264S+EyB+PiS6/MSpfvrICNDcw5xKLmALTUP6WR6cHSrjaUbXm5owl3GD3 h/E3RGBuw9qhL54gqKF+QgisRYy9Gu/bxcqy07oxlOidwLwINYh6cyhyis9f85U7/W7V pvN+keJCf540BZWBBndlYQ8xE45/e74Cw+OE4rlSSqOn0NjKJyA3ICAglIl5tkssiCKu Bc2mcsOgq+uSrNEEk5pXb/8iVmq8fM+8cVaro1Xx9nO81iNNlJFdUZD76Z5bn/IItR2t fFmg== MIME-Version: 1.0 X-Received: by 10.60.52.132 with SMTP id
t4mr3264486oeo.11.1421258616161; Wed, 14 Jan 2015 10:03:36 -0800 (PST) Received: by 10.202.76.71 with HTTP; Wed, 14 Jan 2015 10:03:36 -0800 (PST) In-Reply-To: <6a3129720b4a439994841c28df676cd1@exch2-4.slu.se> References: <6a3129720b4a439994841c28df676cd1@exch2-4.slu.se> Date: Wed, 14 Jan 2015 10:03:36 -0800 Message-ID: Subject: Re: How many ram... From: Freddie Cash To: Karli Sjöberg Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.18-1 Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 14 Jan 2015 18:03:37 -0000

On Wed, Jan 14, 2015 at 9:47 AM, Karli Sjöberg wrote:

> Den 14 jan 2015 18:28 skrev Freddie Cash:
> > One of them has dedupe enabled (yeah, yeah, we know, we're moving away from
> > it, it's actually the last one with it enabled),
>
> But what about all of the savings you were benefitting from? Wasn't it
> like 10x dedup savings or something, I know I've asked before at the forums
> but a person forgets... What's made you change your mind?

Originally, we were getting great disk space savings that made it worthwhile (4x was our lowest, I think our highest was around 8x).

However, then we started backing up our mail server with millions of tiny files ... and performance tanked (backups wouldn't complete overnight), especially when deleting old snapshots. We moved the mail server off to its own backups box without dedupe (it's one of the multi-JBOD storage systems as 1 year of daily backups is 46 TB) and performance went back to usable.

Then we started getting issues with resilvers taking 3+ weeks to replace disks, monthly scrubs just barely completing before the next one starts, and running out of RAM a lot. When hardware died and killed the pool, we rebuilt it without dedupe and things are running much smoother now. We didn't lose any data as we had it replicated off-site. :)

We have 4 storage systems running ZFS:
- admin site backups using dedupe with 64 GB of RAM and 16 harddrives
- school site backups using compression only, with 64 GB of RAM and 24 harddrives
- mail server backups using compression only, with 128 GB of RAM and 90 harddrives
- offsite backups storage using dedupe, with 128 GB of RAM and 90 harddrives

The long-term goal is to have only the off-site backups storage system using dedupe. And to try and keep it at 90 harddrives. To help with that, we'll be getting another off-site backups storage system for the mail server backups, which will remove the bulk of the data out of the deduped pool.

When we started with ZFS, back in the FreeBSD 7 days, 500 GB server-class harddrives were around $100-200 CDN, and anything over 1 TB was out of our price range, so dedupe was worthwhile (we started with server-class drives attached to 3Ware RAID controllers). Then 2 TB desktop-class drives dropped down around the $120 CDN range and we started replacing them (and using LSI SATA controllers). And dedupe started losing its awesomeness.

Now, we get 2 TB drives in bulk for $80 CDN, so there's no point suffering through the pain points that come with dedupe on ZFS.

--
Freddie Cash
fjwcash@gmail.com
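For anyone weighing the same trade-off, the savings and DDT cost mentioned above can be read straight off a pool with stock tools; the pool name here is a placeholder and the exact output varies by ZFS version:

  # The DEDUP column shows the overall dedup ratio (e.g. 4.00x)
  zpool list backup1

  # Dedup table statistics, including an estimate of the DDT's on-disk and in-core size
  zdb -DD backup1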
From owner-freebsd-fs@FreeBSD.ORG Wed Jan 14 21:37:44 2015 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 3658CC6E for ; Wed, 14 Jan 2015 21:37:44 +0000 (UTC) Received: from mail-ie0-f178.google.com (mail-ie0-f178.google.com [209.85.223.178]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id F3061EA1 for ; Wed, 14 Jan 2015 21:37:43 +0000 (UTC) Received: by mail-ie0-f178.google.com with SMTP id vy18so11425112iec.9 for ; Wed, 14 Jan 2015 13:37:37 -0800 (PST) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:message-id:date:from:reply-to:organization :user-agent:mime-version:to:cc:subject:references:in-reply-to :content-type:content-transfer-encoding; bh=PpXaUWCejJj5/55zODM/fvTfJPrMulmaHLbqx9MCFQg=; b=g2OrddZbyOKZv37VajBwKUSsh2Gvf+aQaZ3r5itgcGzZgyDk0pv7F5v8wZebdMkkio 5qkHDt2k+advOJJEjfxmbP6H9/++Ly8wzcomdpjAfDyGiXT7M8aqQjFhC8Wf2VqAzRWK iI+eg87Qp0taloyGi1wnl2DKlNapVVngBKGPSJdUgMwNtiVYdFHxRahwZBpUSrCyMTpf xfUJzq+1hbsIvmbzdLuE1JK2OfZ5zAzR2YQB1phEjXWCORimb4qz9Viau09k/vaXf/CI xHqKRkLTJoij1x8gwKBdlTM/pAElfLgjqagYLFLqtRaHTxiKKOeUoRRH97yeXyMT8+YX NTtw== X-Gm-Message-State: ALoCoQmJ80zH2slGfll5p9JJ90oyP3D7LWS36ktTHnP0iQftPkYprzPpMRB1TSScHcnB+xvwA+op X-Received: by 10.50.134.65 with SMTP id pi1mr6877478igb.32.1421271457050; Wed, 14 Jan 2015 13:37:37 -0800 (PST) Received: from [192.168.0.18] ([63.231.252.189]) by mx.google.com with ESMTPSA id f6sm2818574iof.42.2015.01.14.13.37.36 (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Wed, 14 Jan 2015 13:37:36 -0800 (PST) Message-ID: <54B6E19D.9030509@kateley.com> Date: Wed, 14 Jan 2015 15:37:33 -0600 From: Linda Kateley Reply-To: linda@kateley.com Organization: Kateley Company User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.10; rv:31.0) Gecko/20100101 Thunderbird/31.3.0 MIME-Version: 1.0 To: Albert Shih , linda@kateley.com Subject: Re: How many ram... References: <20150113105240.GA33162@pcjas.obspm.fr> <54B528AC.9090901@kateley.com> <20150114163849.GA97640@pcjas.obspm.fr> <54B69E4A.9010402@kateley.com> <20150114170051.GB97640@pcjas.obspm.fr> In-Reply-To: <20150114170051.GB97640@pcjas.obspm.fr> Content-Type: text/plain; charset=windows-1252; format=flowed Content-Transfer-Encoding: 8bit Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 14 Jan 2015 21:37:44 -0000 I teach a class on OpenZFS hardware architecture, and what I typically recommend is to use a raid set that is operationally sound. For example, if you have a 24-disk JBOD, set it up as either 12 mirrors, 8 three-disk raidz1 sets, 4 six-disk raidz2 sets, and so on. If you plan to add storage (and storage needs always increase) by adding additional 24-bay JBODs, think in terms of how you will add those sets to the pool. The common understanding of RAID is a little different in ZFS: ZFS will write what it has to write onto whatever vdevs it has to write with. This blog describes it well:
http://blog.delphix.com/matt/2014/06/06/zfs-stripe-width/ linda On 1/14/15 11:00 AM, Albert Shih wrote: > Le 14/01/2015 à 10:50:18-0600, Linda Kateley a écrit >> I will say one more thing.. I also have a customer who uses zfs for >> security camera storage. The cameras deliver 100's of k bytes per >> minute... But they save the data for a very very very long time. That >> kind of system would need very little ram(maybe 8GB) but lots and lots >> of disk. > Thanks you very much. > > May I ask you something (feel free to not answer of course ). I saw on your > >>>> 32GB ram. http://kateleyco.com/?p=815 > you have install I quote > > «hey will have 252 4TB drives in 6 45-drive chassis with multiple > controllers» > > do you have any idea how many pool they have ? how many disk they put in > one raid ? how many raid they put in one pool ? > > Actually I've one server a very big pool (I known some tell me it's too > big) with 72 disks in 6 raidz2. > > Thanks. > > Regards. > > JAS > -- > Albert SHIH > DIO bâtiment 15 > Observatoire de Paris > 5 Place Jules Janssen > 92195 Meudon Cedex > France > Téléphone : +33 1 45 07 76 26/+33 6 86 69 95 71 > xmpp: jas@obspm.fr > Heure local/Local time: > mer 14 jan 2015 17:54:33 CET From owner-freebsd-fs@FreeBSD.ORG Thu Jan 15 07:06:01 2015 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 5CD7E654 for ; Thu, 15 Jan 2015 07:06:01 +0000 (UTC) Received: from EXCH2-1.slu.se (webmail.slu.se [77.235.224.121]) (using TLSv1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits)) (Client CN "webmail.slu.se", Issuer "TERENA SSL CA" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id DAC9D279 for ; Thu, 15 Jan 2015 07:06:00 +0000 (UTC) Received: from exch2-4.slu.se (77.235.224.124) by EXCH2-1.slu.se (77.235.224.121) with Microsoft SMTP Server (TLS) id 15.0.995.29; Thu, 15 Jan 2015 07:50:48 +0100 Received: from exch2-4.slu.se ([::1]) by exch2-4.slu.se ([fe80::4173:e97d:6ba9:312b%23]) with mapi id 15.00.0995.028; Thu, 15 Jan 2015 07:50:48 +0100 From: =?utf-8?B?S2FybGkgU2rDtmJlcmc=?= To: Freddie Cash Subject: Re: How many ram... Thread-Topic: How many ram... 
Thread-Index: AQHQMCI8w7MlzEXaJk2wFwifPECHHJy/180AgADWWYA= Date: Thu, 15 Jan 2015 06:50:47 +0000 Message-ID: <1421304647.1896.9.camel@data-b104.adm.slu.se> References: <6a3129720b4a439994841c28df676cd1@exch2-4.slu.se> In-Reply-To: Accept-Language: sv-SE, en-US Content-Language: en-US X-MS-Has-Attach: X-MS-TNEF-Correlator: x-originating-ip: [77.235.228.32] Content-Type: text/plain; charset="utf-8" Content-ID: Content-Transfer-Encoding: base64 MIME-Version: 1.0 Cc: FreeBSD Filesystems X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 15 Jan 2015 07:06:01 -0000

ons 2015-01-14 klockan 10:03 -0800 skrev Freddie Cash:
> On Wed, Jan 14, 2015 at 9:47 AM, Karli Sjöberg <karli.sjoberg@slu.se>
> wrote:
>         Den 14 jan 2015 18:28 skrev Freddie Cash <fjwcash@gmail.com>:
>         > One of them has dedupe enabled (yeah, yeah, we know, we're
>         moving away from
>         > it, it's actually the last one with it enabled),
>
>         But what about all of the savings you were benefitting from?
>         Wasn't it like 10x dedup savings or something, I know I've
>         asked before at the forums but a person forgets... What's made
>         you change your mind?
>
> Originally, we were getting great disk space savings that made it
> worthwhile (4x was our lowest, I think our highest was around 8x).

<snip>

> Then we started getting issues with resilvers taking 3+weeks to
> replace disks, monthly scrubs just barely completing before the next
> one starts, and running out of RAM a lot.  When hardware died and
> killed the pool, we rebuilt it without dedupe and things are running
> much smoother now.  We didn't lose any data as we had it replicated
> off-site.  :)

Had a feeling that would become a problem, just maintaining it with
performance so crippled like that.

<snip>

> Now, we get 2 TB drives in bulk for $80 CDN, so there's no point
> suffering through the pain points that come with dedupe on ZFS.
>
> --
> Freddie Cash
> fjwcash@gmail.com

Totally agree, disk i cheap, buy more:) Although I really wish and hope
for a total makeover of dedup in ZFS someday just because the physical
space it takes, not everyone has the money to afford football fields of
datacenters, also to keep down the energy that takes to drive all of the
JBOD's, thinking in ecological point of view.

And as for not drifting too much off topic, we use 28-bay SuperMicro
JBOD's that are most easily configured in 10-10-8 raidz2 vdev's.

/K
From owner-freebsd-fs@FreeBSD.ORG Fri Jan 16 03:52:32 2015 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id C39FDD72 for ; Fri, 16 Jan 2015 03:52:32 +0000 (UTC) Received: from mail-lb0-x233.google.com (mail-lb0-x233.google.com [IPv6:2a00:1450:4010:c04::233]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 4694FE6C for ; Fri, 16 Jan 2015 03:52:32 +0000 (UTC) Received: by mail-lb0-f179.google.com with SMTP id z11so16596197lbi.10 for ; Thu, 15 Jan 2015 19:52:30 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:date:message-id:subject:from:to:content-type; bh=p4XTnH4kNSp3WpaTulOSmL8UN9qnCnlHlQ6XwEvSV8k=; b=X47U4j/SA478C1o21GZcfX4d8/cbBM5BfPMKHDd+HhNZKVQ4/yp8Kt++eN9gzI6qS+ 2VkKumX9T3PE6a1S+9i6c+QUzo5yGSz3kaU5lq68JlopQQjEJ5CVs0j6uvcabW43y1op CQEc99fJPlEQLB/tw+JJFQQk2zDy41KSc5vMDU9k4gsD1UVGGhTV8w3ZnI0R0A8LMkH0 lQuLQoRhGR3u3BJUCutr7njhk+ta2uPU1Mnq8Ku3lBRyFz+Les9QuoI7+VuFbyuGJiW+ +BjLFnyiBV7hQMLUvZla/fJ/YOpq+cOWwjBorkonL6AsKbWlf+Blr02XUozrYSa0UXNv JvLw== MIME-Version: 1.0 X-Received: by 10.112.43.66 with SMTP id u2mr13466217lbl.35.1421380350394; Thu, 15 Jan 2015 19:52:30 -0800 (PST) Received: by 10.25.33.148 with HTTP; Thu, 15 Jan 2015 19:52:30 -0800 (PST) Date: Fri, 16 Jan 2015 11:52:30 +0800 Message-ID: Subject: bugfix: zpool online might fail when disk suffix start with "c[0-9]" From: Peter Xu To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.18-1 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.18-1 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 16 Jan 2015 03:52:32 -0000

Hi, all,

Found one bug in libzfs: some disks could not be onlined using their physical path. I hit the problem once when I tried to online the disk:

gptid/c6cde092-504b-11e4-ba52-c45444453598

This is a partition of a GPT disk, and zpool returned the error that no such device was found.

I tried onlining it using its VDEV ID, and that worked.

The problem is that libzfs hacks vdev_to_nvlist_iter() to take special care of ZPOOL_CONFIG_PATH searches (also, it seems that vdev->wholedisk is used for this). That is meant for Solaris, not FreeBSD; FreeBSD should not need these hacks at all. The bug can be fixed by compiling out the hacked code path:

diff --git a/cddl/contrib/opensolaris/lib/libzfs/common/libzfs_pool.c b/cddl/contrib/opensolaris/lib/libzfs/common/libzfs_pool.c
index df8317f..e16f5c6 100644
--- a/cddl/contrib/opensolaris/lib/libzfs/common/libzfs_pool.c
+++ b/cddl/contrib/opensolaris/lib/libzfs/common/libzfs_pool.c
@@ -1969,6 +1969,7 @@ vdev_to_nvlist_iter(nvlist_t *nv, nvlist_t *search, boolean_t *avail_spare,
 		if (nvlist_lookup_string(nv, srchkey, &val) != 0)
 			break;
 
+#ifdef sun
 		/*
 		 * Search for the requested value. Special cases:
 		 *
@@ -2018,6 +2019,9 @@ vdev_to_nvlist_iter(nvlist_t *nv, nvlist_t *search, boolean_t *avail_spare,
 				break;
 			}
 		} else if (strcmp(srchkey, ZPOOL_CONFIG_TYPE) == 0 && val) {
+#else
+		if (strcmp(srchkey, ZPOOL_CONFIG_TYPE) == 0 && val) {
+#endif
 			char *type, *idx, *end, *p;
 			uint64_t id, vdev_id;
 

I am a FreeBSD user (and ZFS user) and just want to contribute something back. Hope I am posting to the right place.
Peter

From owner-freebsd-fs@FreeBSD.ORG Fri Jan 16 10:09:00 2015
Message-ID: <54B8E331.9000304@multiplay.co.uk>
Date: Fri, 16 Jan 2015 10:08:49 +0000
From: Steven Hartland
To: freebsd-fs@freebsd.org
Subject: Re: bugfix: zpool online might fail when disk suffix start with "c[0-9]"

Thanks Peter and good catch, I'll have a look at this shortly :)

Regards
Steve

On 16/01/2015 03:52, Peter Xu wrote:
> Hi, all,
>
> I found a bug in libzfs: some disks cannot be brought online by their
> physical path. I hit it when I tried to online this disk:
>
> gptid/c6cde092-504b-11e4-ba52-c45444453598
>
> This is a partition of a GPT disk, and zpool returned an error saying no
> such device was found. Onlining it by its vdev ID instead worked.
>
> The problem is that libzfs special-cases vdev_to_nvlist_iter() for
> ZPOOL_CONFIG_PATH searches (it also appears that vdev->wholedisk is used
> for this). Those special cases are meant for Solaris, not FreeBSD;
> FreeBSD should not need them at all.

From owner-freebsd-fs@FreeBSD.ORG Fri Jan 16 11:10:29 2015
Message-ID: <54B8EBE4.1090804@multiplay.co.uk>
Date: Fri, 16 Jan 2015 10:45:56 +0000
From: Steven Hartland
To: freebsd-fs@freebsd.org
Subject: Re: bugfix: zpool online might fail when disk suffix start with "c[0-9]"

Thanks again, committed as:
https://svnweb.freebsd.org/changeset/base/277239

On 16/01/2015 03:52, Peter Xu wrote:
> Hi, all,
>
> I found a bug in libzfs: some disks cannot be brought online by their
> physical path. I hit it when I tried to online this disk:
>
> gptid/c6cde092-504b-11e4-ba52-c45444453598
>
> This is a partition of a GPT disk, and zpool returned an error saying no
> such device was found. Onlining it by its vdev ID instead worked.

From owner-freebsd-fs@FreeBSD.ORG Sat Jan 17 22:48:45 2015
Message-ID: <54BAE6DE.3050206@FreeBSD.org>
Date: Sun, 18 Jan 2015 01:49:02 +0300
From: Lev Serebryakov
Reply-To: lev@FreeBSD.org
To: freebsd-fs@freebsd.org
Subject: Could mount_msdosfs be less cryptic and more compatible with fsck_msdosfs?

I could not mount a FAT32 image, even though fsck_msdosfs says it is a
valid FAT32 image:

% sudo fsck_msdosfs /dev/md0
** /dev/md0
** Phase 1 - Read and Compare FATs
** Phase 2 - Check Cluster Chains
** Phase 3 - Checking Directories
** Phase 4 - Checking for Lost Files
7 files, 96256 free (188 clusters)
% sudo mount -t msdosfs /dev/md0 /mnt
mount_msdosfs: /dev/md0: Invalid argument
%

I don't know what I should do to mount this image. I'm using 10-STABLE.
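For context on why the message is so terse: mount_msdosfs, like the
other FreeBSD mount helpers, essentially packs its options into
name/value iovec pairs and hands them to nmount(2), and on failure the
only detail that comes back from the kernel is an errno, so "Invalid
argument" is all the tool can print. A minimal sketch of that call path
follows (not the real mount_msdosfs source; add_opt() is a made-up
helper, and the device and mount point are the ones from the transcript
above):

/*
 * Minimal sketch (not the real mount_msdosfs source) of how a FreeBSD
 * mount helper passes options to the kernel via nmount(2).
 */
#include <sys/types.h>
#include <sys/uio.h>
#include <sys/mount.h>

#include <errno.h>
#include <stdio.h>
#include <string.h>

/* Append one name/value pair; nmount(2) wants the terminating NUL counted. */
static int
add_opt(struct iovec *iov, int i, char *name, char *val)
{
        iov[i].iov_base = name;
        iov[i].iov_len = strlen(name) + 1;
        iov[i + 1].iov_base = val;
        iov[i + 1].iov_len = strlen(val) + 1;
        return (i + 2);
}

int
main(void)
{
        struct iovec iov[6];
        int n = 0;

        n = add_opt(iov, n, "fstype", "msdosfs");
        n = add_opt(iov, n, "fspath", "/mnt");
        n = add_opt(iov, n, "from", "/dev/md0");

        if (nmount(iov, n, 0) == -1) {
                /* The kernel reports nothing beyond an errno value. */
                fprintf(stderr, "mount msdosfs: %s\n", strerror(errno));
                return (1);
        }
        return (0);
}

Any friendlier diagnostic about which on-disk field the kernel rejected
would have to come from the msdosfs mount code itself, for example as a
console message.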
--
// Lev Serebryakov AKA Black Lion

From owner-freebsd-fs@FreeBSD.ORG Sat Jan 17 23:04:46 2015
Message-ID: <54BAEA7C.5080500@FreeBSD.org>
Date: Sun, 18 Jan 2015 02:04:28 +0300
From: Lev Serebryakov
Reply-To: lev@FreeBSD.org
To: freebsd-fs@freebsd.org
Subject: Re: Could mount_msdosfs be less cryptic and more compatible with fsck_msdosfs?
In-Reply-To: <54BAE6DE.3050206@FreeBSD.org>

On 18.01.2015 01:49, Lev Serebryakov wrote:
> I don't know what I should do to mount this image. I'm using
> 10-STABLE.

Also, msdosfs.ko cannot be built with MSDOSFS_DEBUG at all: the build
fails with a number of errors, some of them in format strings (harmless)
and some not so harmless, e.g. the debug code tries to use rw_lock().
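The format-string breakage is easy to picture: kernel modules are built
with -Wformat promoted to an error by -Werror, so a debug printf() whose
conversion specifier no longer matches the type it prints stops the
build outright. The snippet below is a made-up userland example of that
class of mistake, not code taken from msdosfs:

/*
 * Made-up example (not msdosfs code) of a format-string mismatch that is
 * merely a warning in a casual build but a hard error under -Wformat
 * -Werror, as used for kernel modules.
 */
#include <stdio.h>
#include <stdint.h>

int
main(void)
{
        uint64_t free_clusters = 188;   /* value is arbitrary */

        /* Wrong: %d expects int; this line breaks the build with -Werror=format. */
        /* printf("free clusters: %d\n", free_clusters); */

        /* Correct: cast to uintmax_t and print with %ju. */
        printf("free clusters: %ju\n", (uintmax_t)free_clusters);
        return (0);
}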
--
// Lev Serebryakov AKA Black Lion