From owner-freebsd-fs@FreeBSD.ORG  Sun Jun  5 00:49:40 2011
From: "Vladislav V. Prodan" <universite@ukr.net>
Date: Sun, 05 Jun 2011 03:49:11 +0300
Cc: freebsd-fs@freebsd.org
Subject: Re: [ZFSv28] Loader hangs, import fails, zfs filesystem unavailable.

04.06.2011 22:12, Bartosz Stec wrote:
>
> mfsbsd# zpool status -x
> no pools available
>
> mfsbsd# zdb zroot
> zdb: can't open 'zroot': No such file or directory

Try running:

    zpool import -F zroot

If that doesn't help:

    zpool export zroot
    zpool import -fFX zroot

--
Vladislav V. Prodan
VVP24-UANIC
+380[67]4584408
+380[99]4060508
vlad11@jabber.ru
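For readers new to these flags, the same sequence with the semantics
spelled out (a sketch; the pool name zroot comes from this thread, and
-X can throw away recent transactions, so it is a last resort):

    # -F: recovery mode; roll the pool back to the last importable txg
    zpool import -F zroot

    # if that fails, clear any cached state and escalate:
    zpool export zroot
    # -f: force the import (ignore "in use" state); -F: rewind;
    # -X: extreme rewind, searching much further back at the cost of
    # discarding recent transactions
    zpool import -fFX zroot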
From owner-freebsd-fs@FreeBSD.ORG  Mon Jun  6 00:43:08 2011
From: Tobiasz Siemiński <sortris@gmail.com>
Date: Mon, 6 Jun 2011 02:43:06 +0200
To: freebsd-fs@freebsd.org
Subject: Re: NFSv4 at Diskless Station

Thank you very much for the reply. I have one more question: how can I
force mounting with NFSv4 only? I have added the line
"sysctl vfs.nfsd.server_min_nfsvers=4" to the /etc/rc.d/nfsd file, but
when the client tries to mount, it first tries NFSv4 (which fails) and
finally mounts with NFSv3.

Best regards.
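A side note on placement: /etc/rc.d/nfsd is a base-system rc script and
gets replaced on upgrades, so settings like this normally belong in
/etc/sysctl.conf. Pinning the server to version 4 only would then look
something like this (a sketch; the max sysctl is the counterpart that
Rick Macklem mentions in his reply later in this thread):

    # /etc/sysctl.conf -- serve NFSv4 only
    vfs.nfsd.server_min_nfsvers=4
    vfs.nfsd.server_max_nfsvers=4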
From owner-freebsd-fs@FreeBSD.ORG  Mon Jun  6 02:59:37 2011
From: Charles Sprickman <spork@bway.net>
Date: Sun, 5 Jun 2011 22:32:55 -0400 (EDT)
To: freebsd-fs@freebsd.org
Subject: zfs snapshot management

Hello all,

I've been using a few different tools to manage zfs snapshots in
different scenarios. For local use, I've found that Ralf Engelschall's
set of scripts[1] that tie into the periodic(8) system work fairly
well. I do not use the amd portion since I am only working with zfs
snapshots and I don't see a need to actually re-mount the snapshots
elsewhere for recovery. The only limitation I find with this system is
that for use on a backups host, the lack of a monthly or yearly
retention period pretty much rules it out. For local "oops" stuff,
though, it's great.

For hosts acting as backup servers, I've been using Snapfilter[2] and
some cobbled-together stuff that rsyncs a bunch of hosts and tries to
detect and notify on errors. Snapfilter is simply the zfs snapshot
"sweeper" that periodically deletes snapshots that are outside the
defined retention period(s).

Since there seems to be a fair number of serious zfs users here, I was
hoping for some further suggestions for use in either case. Any input
is welcome...

Thanks,

Charles

[1] - http://people.freebsd.org/~rse/snapshot/
[2] - http://www.scottlu.com/Content/Snapfilter.html
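To give a flavor of what such a "sweeper" does, here is a minimal sh
sketch. It is not Snapfilter or the Engelschall scripts; the dataset
name and the auto-YYYY-MM-DD naming scheme are assumptions for
illustration:

    #!/bin/sh
    # Illustrative snapshot sweeper: destroy snapshots of $FS named
    # auto-YYYY-MM-DD once they fall outside the retention window.
    # Assumes FreeBSD date(1), which supports -v for date arithmetic.
    FS=tank/backups          # hypothetical dataset
    KEEP_DAYS=30             # retention period
    cutoff=$(date -v-${KEEP_DAYS}d +%Y-%m-%d)
    zfs list -H -t snapshot -o name -r "$FS" | while read -r snap; do
            stamp=${snap##*@auto-}
            [ "$stamp" = "$snap" ] && continue   # differently named; skip
            # Date stamps sort lexicographically, so a plain string
            # comparison against the cutoff is enough.
            if [ "$stamp" \< "$cutoff" ]; then
                    zfs destroy "$snap"
            fi
    done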
From owner-freebsd-fs@FreeBSD.ORG  Mon Jun  6 10:28:03 2011
From: Kai Gallasch <gallasch@free.de>
Date: Mon, 6 Jun 2011 12:01:19 +0200
To: Charles Sprickman
Cc: freebsd-fs@freebsd.org
Subject: Re: zfs snapshot management

On 06.06.2011 at 04:32, Charles Sprickman wrote:
> I've been using a few different tools to manage zfs snapshots in
> different scenarios. [...]
> Since there seems to be a fair number of serious zfs users here, I
> was hoping for some further suggestions for use in either case. Any
> input is welcome...

I have been using zetaback (FreeBSD port: sysutils/zetaback) for some
time now, for backup purposes and as a very convenient method to move
zfs-based jails from one host to another.

Have a look at https://labs.omniti.com/labs/zetaback

Regards,
Kai.

--
"I'm tryin' to think but nothin happens" - Curly
-- "I'm tryin' to think but nothin happens" - Curly From owner-freebsd-fs@FreeBSD.ORG Mon Jun 6 10:53:15 2011 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id CC8D5106566C; Mon, 6 Jun 2011 10:53:15 +0000 (UTC) (envelope-from mm@FreeBSD.org) Received: from mail.vx.sk (mail.vx.sk [IPv6:2a01:4f8:100:1043::3]) by mx1.freebsd.org (Postfix) with ESMTP id 90E678FC0A; Mon, 6 Jun 2011 10:53:15 +0000 (UTC) Received: from core.vx.sk (localhost [127.0.0.1]) by mail.vx.sk (Postfix) with ESMTP id DDA09172B94; Mon, 6 Jun 2011 12:53:14 +0200 (CEST) X-Virus-Scanned: amavisd-new at mail.vx.sk Received: from mail.vx.sk ([127.0.0.1]) by core.vx.sk (mail.vx.sk [127.0.0.1]) (amavisd-new, port 10024) with LMTP id vINgTtvpvdv7; Mon, 6 Jun 2011 12:53:12 +0200 (CEST) Received: from [10.0.3.160] (188-167-50-235.dynamic.chello.sk [188.167.50.235]) by mail.vx.sk (Postfix) with ESMTPSA id 9080F172B8C; Mon, 6 Jun 2011 12:53:12 +0200 (CEST) Message-ID: <4DECB197.8020102@FreeBSD.org> Date: Mon, 06 Jun 2011 12:53:11 +0200 From: Martin Matuska User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.17) Gecko/20110424 Thunderbird/3.1.10 MIME-Version: 1.0 To: freebsd-stable@FreeBSD.org, freebsd-fs@FreeBSD.org Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 7bit Cc: Subject: HEADS UP: ZFS v28 merged to 8-STABLE X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 06 Jun 2011 10:53:15 -0000 Hi, I have merged ZFS version 28 to 8-STABLE (revision 222741) New major features: - data deduplication - triple parity RAIDZ (RAIDZ3) - zfs diff - zpool split - snapshot holds - zpool import -F. Allows to rewind corrupted pool to earlier transaction group - possibility to import pool in read-only mode For updating, there is a compatibility layer so that in the update phase most functionality of the new zfs binaries can be used with the old kernel module and old zfs binaries with the new kernel module. If upgrading your boot pool to version 28, please don't forget to read UPDATING and properly update your boot code. Thanks to everyone working on the ZFS port, especially to Pawel Jakub Dawidek (pjd) for doing most of the work! 
From owner-freebsd-fs@FreeBSD.ORG  Mon Jun  6 11:07:03 2011
From: FreeBSD bugmaster <owner-bugmaster@FreeBSD.org>
Date: Mon, 6 Jun 2011 11:07:02 GMT
To: freebsd-fs@FreeBSD.org
Subject: Current problem reports assigned to freebsd-fs@FreeBSD.org

Note: to view an individual PR, use:
  http://www.freebsd.org/cgi/query-pr.cgi?pr=(number).

The following is a listing of current problems submitted by FreeBSD
users. These represent problem reports covering all versions including
experimental development code and obsolete releases.

S Tracker      Resp. Description
--------------------------------------------------------------------------------
o kern/157399  fs  [zfs] trouble with: mdconfig force delete && zfs strip
f kern/157365  fs  [nfs] cannot umount an nfs from dead server
o kern/157179  fs  [zfs] zfs/dbuf.c: panic: solaris assert: arc_buf_remov
o kern/156933  fs  [zfs] ZFS receive after read on readonly=on filesystem
o kern/156797  fs  [zfs] [panic] Double panic with FreeBSD 9-CURRENT and
o kern/156781  fs  [zfs] zfs is losing the snapshot directory,
p kern/156545  fs  [ufs] mv could break UFS on SMP systems
o kern/156193  fs  [ufs] [hang] UFS snapshot hangs && deadlocks processes
o kern/156168  fs  [nfs] [panic] Kernel panic under concurrent access ove
o kern/156039  fs  [nullfs] [unionfs] nullfs + unionfs do not compose, re
o kern/155615  fs  [zfs] zfs v28 broken on sparc64 -current
o kern/155587  fs  [zfs] [panic] kernel panic with zfs
o kern/155411  fs  [regression] [8.2-release] [tmpfs]: mount: tmpfs : No
o kern/155199  fs  [ext2fs] ext3fs mounted as ext2fs gives I/O errors
o bin/155104   fs  [zfs][patch] use /dev prefix by default when importing
o kern/154930  fs  [zfs] cannot delete/unlink file from full volume -> EN
o kern/154828  fs  [msdosfs] Unable to create directories on external USB
o kern/154491  fs  [smbfs] smb_co_lock: recursive lock for object 1
o kern/154447  fs  [zfs] [panic] Occasional panics - solaris assert somew
p kern/154228  fs  [md] md getting stuck in wdrain state
o kern/153996  fs  [zfs] zfs root mount error while kernel is not located
o kern/153847  fs  [nfs] [panic] Kernel panic from incorrect m_free in nf
o kern/153753  fs  [zfs] ZFS v15 - grammatical error when attempting to u
o kern/153716  fs  [zfs] zpool scrub time remaining is incorrect
o kern/153695  fs  [patch] [zfs] Booting from zpool created on 4k-sector
o kern/153680  fs  [xfs] 8.1 failing to mount XFS partitions
o kern/153520  fs  [zfs] Boot from GPT ZFS root on HP BL460c G1 unstable
o kern/153418  fs  [zfs] [panic] Kernel Panic occurred writing to zfs vol
o kern/153351  fs  [zfs] locking directories/files in ZFS
o bin/153258   fs  [patch][zfs] creating ZVOLs requires `refreservation'
s kern/153173  fs  [zfs] booting from a gzip-compressed dataset doesn't w
o kern/153126  fs  [zfs] vdev failure, zpool=peegel type=vdev.too_small
p kern/152488  fs  [tmpfs] [patch] mtime of file updated when only inode
o kern/152022  fs  [nfs] nfs service hangs with linux client [regression]
o kern/151942  fs  [zfs] panic during ls(1) zfs snapshot directory
o kern/151905  fs  [zfs] page fault under load in /sbin/zfs
o kern/151845  fs  [smbfs] [patch] smbfs should be upgraded to support Un
o bin/151713   fs  [patch] Bug in growfs(8) with respect to 32-bit overfl
o kern/151648  fs  [zfs] disk wait bug
o kern/151629  fs  [fs] [patch] Skip empty directory entries during name
o kern/151330  fs  [zfs] will unshare all zfs filesystem after execute a
o kern/151326  fs  [nfs] nfs exports fail if netgroups contain duplicate
o kern/151251  fs  [ufs] Can not create files on filesystem with heavy us
o kern/151226  fs  [zfs] can't delete zfs snapshot
o kern/151111  fs  [zfs] vnodes leakage during zfs unmount
o kern/150503  fs  [zfs] ZFS disks are UNAVAIL and corrupted after reboot
o kern/150501  fs  [zfs] ZFS vdev failure vdev.bad_label on amd64
o kern/150390  fs  [zfs] zfs deadlock when arcmsr reports drive faulted
o kern/150336  fs  [nfs] mountd/nfsd became confused; refused to reload n
o kern/150207  fs  zpool(1): zpool import -d /dev tries to open weird dev
o kern/149208  fs  mksnap_ffs(8) hang/deadlock
o kern/149173  fs  [patch] [zfs] make OpenSolaris installa
o kern/149015  fs  [zfs] [patch] misc fixes for ZFS code to build on Glib
o kern/149014  fs  [zfs] [patch] declarations in ZFS libraries/utilities
o kern/149013  fs  [zfs] [patch] make ZFS makefiles use the libraries fro
o kern/148504  fs  [zfs] ZFS' zpool does not allow replacing drives to be
o kern/148490  fs  [zfs]: zpool attach - resilver bidirectionally, and re
o kern/148368  fs  [zfs] ZFS hanging forever on 8.1-PRERELEASE
o bin/148296   fs  [zfs] [loader] [patch] Very slow probe in /usr/src/sys
o kern/148204  fs  [nfs] UDP NFS causes overload
o kern/148138  fs  [zfs] zfs raidz pool commands freeze
o kern/147903  fs  [zfs] [panic] Kernel panics on faulty zfs device
o kern/147881  fs  [zfs] [patch] ZFS "sharenfs" doesn't allow different "
o kern/147790  fs  [zfs] zfs set acl(mode|inherit) fails on existing zfs
o kern/147560  fs  [zfs] [boot] Booting 8.1-PRERELEASE raidz system take
o kern/147420  fs  [ufs] [panic] ufs_dirbad, nullfs, jail panic (corrupt
o kern/146941  fs  [zfs] [panic] Kernel Double Fault - Happens constantly
o kern/146786  fs  [zfs] zpool import hangs with checksum errors
o kern/146708  fs  [ufs] [panic] Kernel panic in softdep_disk_write_compl
o kern/146528  fs  [zfs] Severe memory leak in ZFS on i386
o kern/146502  fs  [nfs] FreeBSD 8 NFS Client Connection to Server
s kern/145712  fs  [zfs] cannot offline two drives in a raidz2 configurat
o kern/145411  fs  [xfs] [panic] Kernel panics shortly after mounting an
o bin/145309   fs  bsdlabel: Editing disk label invalidates the whole dev
o kern/145272  fs  [zfs] [panic] Panic during boot when accessing zfs on
o kern/145246  fs  [ufs] dirhash in 7.3 gratuitously frees hashes when it
o kern/145238  fs  [zfs] [panic] kernel panic on zpool clear tank
o kern/145229  fs  [zfs] Vast differences in ZFS ARC behavior between 8.0
o kern/145189  fs  [nfs] nfsd performs abysmally under load
o kern/144929  fs  [ufs] [lor] vfs_bio.c + ufs_dirhash.c
p kern/144447  fs  [zfs] sharenfs fsunshare() & fsshare_main() non functi
o kern/144416  fs  [panic] Kernel panic on online filesystem optimization
s kern/144415  fs  [zfs] [panic] kernel panics on boot after zfs crash
o kern/144234  fs  [zfs] Cannot boot machine with recent gptzfsboot code
o kern/143825  fs  [nfs] [panic] Kernel panic on NFS client
o bin/143572   fs  [zfs] zpool(1): [patch] The verbose output from iostat
o kern/143212  fs  [nfs] NFSv4 client strange work ...
o kern/143184  fs  [zfs] [lor] zfs/bufwait LOR
o kern/142914  fs  [zfs] ZFS performance degradation over time
o kern/142878  fs  [zfs] [vfs] lock order reversal
o kern/142597  fs  [ext2fs] ext2fs does not work on filesystems with real
o kern/142489  fs  [zfs] [lor] allproc/zfs LOR
o kern/142466  fs  Update 7.2 -> 8.0 on Raid 1 ends with screwed raid [re
o kern/142306  fs  [zfs] [panic] ZFS drive (from OSX Leopard) causes two
o kern/142068  fs  [ufs] BSD labels are got deleted spontaneously
o kern/141897  fs  [msdosfs] [panic] Kernel panic. msdofs: file name leng
o kern/141463  fs  [nfs] [panic] Frequent kernel panics after upgrade fro
o kern/141305  fs  [zfs] FreeBSD ZFS+sendfile severe performance issues (
o kern/141091  fs  [patch] [nullfs] fix panics with DIAGNOSTIC enabled
o kern/141086  fs  [nfs] [panic] panic("nfs: bioread, not dir") on FreeBS
o kern/141010  fs  [zfs] "zfs scrub" fails when backed by files in UFS2
o kern/140888  fs  [zfs] boot fail from zfs root while the pool resilveri
o kern/140661  fs  [zfs] [patch] /boot/loader fails to work on a GPT/ZFS-
o kern/140640  fs  [zfs] snapshot crash
o kern/140134  fs  [msdosfs] write and fsck destroy filesystem integrity
o kern/140068  fs  [smbfs] [patch] smbfs does not allow semicolon in file
o kern/139725  fs  [zfs] zdb(1) dumps core on i386 when examining zpool c
o kern/139715  fs  [zfs] vfs.numvnodes leak on busy zfs
p bin/139651   fs  [nfs] mount(8): read-only remount of NFS volume does n
o kern/139597  fs  [patch] [tmpfs] tmpfs initializes va_gen but doesn't u
o kern/139564  fs  [zfs] [panic] 8.0-RC1 - Fatal trap 12 at end of shutdo
o kern/139407  fs  [smbfs] [panic] smb mount causes system crash if remot
o kern/138662  fs  [panic] ffs_blkfree: freeing free block
o kern/138421  fs  [ufs] [patch] remove UFS label limitations
o kern/138202  fs  mount_msdosfs(1) see only 2Gb
o kern/136968  fs  [ufs] [lor] ufs/bufwait/ufs (open)
o kern/136945  fs  [ufs] [lor] filedesc structure/ufs (poll)
o kern/136944  fs  [ffs] [lor] bufwait/snaplk (fsync)
o kern/136873  fs  [ntfs] Missing directories/files on NTFS volume
o kern/136865  fs  [nfs] [patch] NFS exports atomic and on-the-fly atomic
p kern/136470  fs  [nfs] Cannot mount / in read-only, over NFS
o kern/135546  fs  [zfs] zfs.ko module doesn't ignore zpool.cache filenam
o kern/135469  fs  [ufs] [panic] kernel crash on md operation in ufs_dirb
o kern/135050  fs  [zfs] ZFS clears/hides disk errors on reboot
o kern/134491  fs  [zfs] Hot spares are rather cold...
o kern/133676  fs  [smbfs] [panic] umount -f'ing a vnode-based memory dis
o kern/133174  fs  [msdosfs] [patch] msdosfs must support multibyte inter
o kern/132960  fs  [ufs] [panic] panic:ffs_blkfree: freeing free frag
o kern/132397  fs  reboot causes filesystem corruption (failure to sync b
o kern/132331  fs  [ufs] [lor] LOR ufs and syncer
o kern/132237  fs  [msdosfs] msdosfs has problems to read MSDOS Floppy
o kern/132145  fs  [panic] File System Hard Crashes
o kern/131441  fs  [unionfs] [nullfs] unionfs and/or nullfs not combineab
o kern/131360  fs  [nfs] poor scaling behavior of the NFS server under lo
o kern/131342  fs  [nfs] mounting/unmounting of disks causes NFS to fail
o bin/131341   fs  makefs: error "Bad file descriptor" on the mount poin
o kern/130920  fs  [msdosfs] cp(1) takes 100% CPU time while copying file
o kern/130210  fs  [nullfs] Error by check nullfs
f kern/130133  fs  [panic] [zfs] 'kmem_map too small' caused by make clea
o kern/129760  fs  [nfs] after 'umount -f' of a stale NFS share FreeBSD l
o kern/129488  fs  [smbfs] Kernel "bug" when using smbfs in smbfs_smb.c:
o kern/129231  fs  [ufs] [patch] New UFS mount (norandom) option - mostly
o kern/129152  fs  [panic] non-userfriendly panic when trying to mount(8)
o kern/127787  fs  [lor] [ufs] Three LORs: vfslock/devfs/vfslock, ufs/vfs
f kern/127375  fs  [zfs] If vm.kmem_size_max>"1073741823" then write spee
o bin/127270   fs  fsck_msdosfs(8) may crash if BytesPerSec is zero
o kern/127029  fs  [panic] mount(8): trying to mount a write protected zi
f kern/126703  fs  [panic] [zfs] _mtx_lock_sleep: recursed on non-recursi
o kern/126287  fs  [ufs] [panic] Kernel panics while mounting an UFS file
o kern/125895  fs  [ffs] [panic] kernel: panic: ffs_blkfree: freeing free
s kern/125738  fs  [zfs] [request] SHA256 acceleration in ZFS
o kern/123939  fs  [msdosfs] corrupts new files
f sparc/123566 fs  [zfs] zpool import issue: EOVERFLOW
o kern/122380  fs  [ffs] ffs_valloc:dup alloc (Soekris 4801/7.0/USB Flash
o bin/122172   fs  [fs]: amd(8) automount daemon dies on 6.3-STABLE i386,
o bin/121898   fs  [nullfs] pwd(1)/getcwd(2) fails with Permission denied
o bin/121366   fs  [zfs] [patch] Automatic disk scrubbing from periodic(8
o bin/121072   fs  [smbfs] mount_smbfs(8) cannot normally convert the cha
o kern/120483  fs  [ntfs] [patch] NTFS filesystem locking changes
o kern/120482  fs  [ntfs] [patch] Sync style changes between NetBSD and F
f kern/120210  fs  [zfs] [panic] reboot after panic: solaris assert: arc_
o kern/118912  fs  [2tb] disk sizing/geometry problem with large array
o kern/118713  fs  [minidump] [patch] Display media size required for a k
o bin/118249   fs  [ufs] mv(1): moving a directory changes its mtime
o kern/118126  fs  [nfs] [patch] Poor NFS server write performance
o kern/118107  fs  [ntfs] [panic] Kernel panic when accessing a file at N
o kern/117954  fs  [ufs] dirhash on very large directories blocks the mac
o bin/117315   fs  [smbfs] mount_smbfs(8) and related options can't mount
o kern/117314  fs  [ntfs] Long-filename only NTFS fs'es cause kernel pani
o kern/117158  fs  [zfs] zpool scrub causes panic if geli vdevs detach on
o bin/116980   fs  [msdosfs] [patch] mount_msdosfs(8) resets some flags f
o conf/116931  fs  lack of fsck_cd9660 prevents mounting iso images with
o kern/116583  fs  [ffs] [hang] System freezes for short time when using
f kern/116170  fs  [panic] Kernel panic when mounting /tmp
o bin/115361   fs  [zfs] mount(8) gets into a state where it won't set/un
o kern/114955  fs  [cd9660] [patch] [request] support for mask,dirmask,ui
o kern/114847  fs  [ntfs] [patch] [request] dirmask support for NTFS ala
o kern/114676  fs  [ufs] snapshot creation panics: snapacct_ufs2: bad blo
o bin/114468   fs  [patch] [request] add -d option to umount(8) to detach
o kern/113852  fs  [smbfs] smbfs does not properly implement DFS referral
o bin/113838   fs  [patch] [request] mount(8): add support for relative p
o bin/113049   fs  [patch] [request] make quot(8) use getopt(3) and show
o kern/112658  fs  [smbfs] [patch] smbfs and caching problems (resolves b
o kern/111843  fs  [msdosfs] Long Names of files are incorrectly created
o kern/111782  fs  [ufs] dump(8) fails horribly for large filesystems
s bin/111146   fs  [2tb] fsck(8) fails on 6T filesystem
o kern/109024  fs  [msdosfs] [iconv] mount_msdosfs: msdosfs_iconv: Operat
o kern/109010  fs  [msdosfs] can't mv directory within fat32 file system
o bin/107829   fs  [2TB] fdisk(8): invalid boundary checking in fdisk / w
o kern/106107  fs  [ufs] left-over fsck_snapshot after unfinished backgro
f kern/106030  fs  [ufs] [panic] panic in ufs from geom when a dead disk
o kern/104406  fs  [ufs] Processes get stuck in "ufs" state under persist
o kern/104133  fs  [ext2fs] EXT2FS module corrupts EXT2/3 filesystems
o kern/103035  fs  [ntfs] Directories in NTFS mounted disc images appear
o kern/101324  fs  [smbfs] smbfs sometimes not case sensitive when it's s
o kern/99290   fs  [ntfs] mount_ntfs ignorant of cluster sizes
s bin/97498    fs  [request] newfs(8) has no option to clear the first 12
o kern/97377   fs  [ntfs] [patch] syntax cleanup for ntfs_ihash.c
o kern/95222   fs  [cd9660] File sections on ISO9660 level 3 CDs ignored
o kern/94849   fs  [ufs] rename on UFS filesystem is not atomic
o bin/94810    fs  fsck(8) incorrectly reports 'file system marked clean'
o kern/94769   fs  [ufs] Multiple file deletions on multi-snapshotted fil
o kern/94733   fs  [smbfs] smbfs may cause double unlock
o kern/93942   fs  [vfs] [patch] panic: ufs_dirbad: bad dir (patch from D
o kern/92272   fs  [ffs] [hang] Filling a filesystem while creating a sna
o kern/91134   fs  [smbfs] [patch] Preserve access and modification time
a kern/90815   fs  [smbfs] [patch] SMBFS with character conversions somet
o kern/88657   fs  [smbfs] windows client hang when browsing a samba shar
o kern/88555   fs  [panic] ffs_blkfree: freeing free frag on AMD 64
o kern/88266   fs  [smbfs] smbfs does not implement UIO_NOCOPY and sendfi
o bin/87966    fs  [patch] newfs(8): introduce -A flag for newfs to enabl
o kern/87859   fs  [smbfs] System reboot while umount smbfs.
o kern/86587   fs  [msdosfs] rm -r /PATH fails with lots of small files
o bin/85494    fs  fsck_ffs: unchecked use of cg_inosused macro etc.
o kern/80088   fs  [smbfs] Incorrect file time setting on NTFS mounted vi
o bin/74779    fs  Background-fsck checks one filesystem twice and omits
o kern/73484   fs  [ntfs] Kernel panic when doing `ls` from the client si
o bin/73019    fs  [ufs] fsck_ufs(8) cannot alloc 607016868 bytes for ino
o kern/71774   fs  [ntfs] NTFS cannot "see" files on a WinXP filesystem
o bin/70600    fs  fsck(8) throws files away when it can't grow lost+foun
o kern/68978   fs  [panic] [ufs] crashes with failing hard disk, loose po
o kern/65920   fs  [nwfs] Mounted Netware filesystem behaves strange
o kern/65901   fs  [smbfs] [patch] smbfs fails fsx write/truncate-down/tr
o kern/61503   fs  [smbfs] mount_smbfs does not work as non-root
o kern/55617   fs  [smbfs] Accessing an nsmb-mounted drive via a smb expo
o kern/51685   fs  [hang] Unbounded inode allocation causes kernel to loc
o kern/51583   fs  [nullfs] [patch] allow to work with devices and socket
o kern/36566   fs  [smbfs] System reboot with dead smb mount and umount
o kern/33464   fs  [ufs] soft update inconsistencies after system crash
o bin/27687    fs  fsck(8) wrapper is not properly passing options to fsc
o kern/18874   fs  [2TB] 32bit NFS servers export wrong negative values t

231 problems total.

From owner-freebsd-fs@FreeBSD.ORG  Mon Jun  6 12:49:26 2011
From: Rick Macklem <rmacklem@uoguelph.ca>
Date: Mon, 6 Jun 2011 08:49:25 -0400 (EDT)
To: Tobiasz Siemiński
Cc: freebsd-fs@freebsd.org
Subject: Re: NFSv4 at Diskless Station

Tobiasz wrote:
> Thank you very much for the reply. I have one more question: how can
> I force mounting with NFSv4 only? I have added the line
> "sysctl vfs.nfsd.server_min_nfsvers=4" to the /etc/rc.d/nfsd file,
> but when the client tries to mount, it first tries NFSv4 (which
> fails) and finally mounts with NFSv3.
> Best regards.

Hmm, well, the FreeBSD client won't try to mount using NFSv4 unless you
specify "-o nfsv4". (You didn't say what client you are using.) If you
want the FreeBSD client to do NFSv4 mounts without the option
specified, you'd need to hack the mount_nfs.c sources and replace the
binaries.

(Although I don't consider the client experimental any more, using
NFSv4 might still be looked at that way. At least, few want it to do
NFSv4 mounts by default, so I haven't made it the default. That would
also not have been backwards compatible for the switchover of the
default NFS client.)

Also, if "vfs.nfsd.server_min_nfsvers=4" and
"vfs.nfsd.server_max_nfsvers=4", an NFSv3 mount might still appear to
work, but the mount point won't do anything useful, afaik. (ie. since
mountd doesn't know about the sysctl, some clients may get past the
"mount" without doing an NFSv3 RPC, but no NFSv3 RPC should work.)

I don't know if this helps clarify things? rick
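Concretely, an explicit NFSv4 mount from a FreeBSD client looks like
this (a sketch; the server name and export path are placeholders):

    # One-off mount requesting NFSv4 explicitly; without "-o nfsv4"
    # the FreeBSD client negotiates NFSv3/v2, as described above.
    mount -t nfs -o nfsv4 server:/export /mnt

    # Or persistently, via /etc/fstab:
    # server:/export   /mnt   nfs   rw,nfsv4   0   0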
From owner-freebsd-fs@FreeBSD.ORG  Mon Jun  6 12:57:08 2011
From: Jeremy Chadwick <jdc@parodius.com>
Date: Mon, 6 Jun 2011 05:57:06 -0700
To: Martin Matuska
Cc: freebsd-fs@FreeBSD.org, freebsd-stable@FreeBSD.org
Subject: Re: HEADS UP: ZFS v28 merged to 8-STABLE

On Mon, Jun 06, 2011 at 12:53:11PM +0200, Martin Matuska wrote:
> I have merged ZFS version 28 to 8-STABLE (revision 222741)
[...]
> Thanks to everyone working on the ZFS port, especially to
> Pawel Jakub Dawidek (pjd) for doing most of the work!

Thanks for the work on this, guys!

I've already managed to find something odd. This message only appears
on console, not via pty/tty.

icarus# zpool create backups ada2
Solaris(cont): !created version 28 pool backups using 28

src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/spa_history.c
contains:

void
spa_history_log_version(spa_t *spa, history_internal_events_t event)
{
#ifdef _KERNEL
        uint64_t current_vers = spa_version(spa);

        if (current_vers >= SPA_VERSION_ZPOOL_HISTORY) {
                spa_history_log_internal(event, spa, NULL,
                    "pool spa %llu; zfs spa %llu; zpl %d; uts %s %s %s %s",
                    (u_longlong_t)current_vers, SPA_VERSION, ZPL_VERSION,
                    utsname.nodename, utsname.release, utsname.version,
                    utsname.machine);
        }
        cmn_err(CE_CONT, "!%s version %llu pool %s using %llu",
            event == LOG_POOL_IMPORT ? "imported" :
            event == LOG_POOL_CREATE ? "created" : "accessed",
            (u_longlong_t)current_vers, spa_name(spa), SPA_VERSION);
#endif
}

A "zpool destroy", etc. does not print any similar message. It only
happens on pool creation. Is this intentional behaviour? What does
"Solaris(cont)" represent in the context of FreeBSD?

--
Jeremy Chadwick                               jdc@parodius.com
Parodius Networking                           http://www.parodius.com/
UNIX Systems Administrator                    Mountain View, CA, US
Making life hard for others since 1977.       PGP 4BD6C0CB
From owner-freebsd-fs@FreeBSD.ORG  Mon Jun  6 13:49:35 2011
From: Boris Kochergin <spawk@acm.poly.edu>
Date: Mon, 06 Jun 2011 09:50:26 -0400
To: Martin Matuska
Cc: freebsd-fs@FreeBSD.org, freebsd-stable@FreeBSD.org
Subject: Re: HEADS UP: ZFS v28 merged to 8-STABLE

On 06/06/11 06:53, Martin Matuska wrote:
> I have merged ZFS version 28 to 8-STABLE (revision 222741)
[...]

Thanks for everyone's hard work (and the __FreeBSD_version bump!).

-Boris

From owner-freebsd-fs@FreeBSD.ORG  Mon Jun  6 19:06:09 2011
From: a.smith@ukgrid.net
Date: Mon, 06 Jun 2011 20:06:06 +0100
To: freebsd-fs@freebsd.org
Subject: RE: zfs snapshot management

Hi Charles,

I have written some Bash scripts that manage my snapshots via crontab
and can also replicate the snapshots to a remote server via SSH.

The scripts do:

* Run a snapshot (can be once a day, once an hour, etc., to meet system
  requirements). The snapshot type is weekly if run at 00h on Sunday,
  monthly if run at 00h on the first of the month, and otherwise a
  "regular" snapshot.
* Regular snapshots are deleted after X weeks (X configurable as a
  command argument).
* Weekly snapshots are deleted after 1 year.
* Monthly snapshots are never automatically removed.
* Weekly and monthly snapshots are optional, i.e. used if defined by
  command arguments.
* If replication is used, data is sent via SSH, either piped directly
  into zfs receive or sent as a file which can be read in via another
  script at a later date (useful if you are paranoid and want the
  secondary ZFS to be some hours behind the primary).
* Data is sent via a non-root user for security. The user must be given
  the relevant ZFS privileges.
* Snapshots are pruned on both source and destination servers.

I haven't published the scripts as I haven't put all the polish onto
them that I think they would need. But they work as described above and
are reliable. I can pass you a copy if they are of interest,

cheers, Andy.
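The classification rule Andy describes boils down to a little date(1)
arithmetic; here is a sketch of just that decision (illustrative only;
these are not Andy's scripts, and the dataset name is an assumption):

    #!/bin/sh
    # Decide the snapshot type per the scheme above: monthly at 00h on
    # the 1st, weekly at 00h on Sunday, otherwise regular.
    hour=$(date +%H)
    day_of_month=$(date +%d)
    day_of_week=$(date +%u)          # 1 = Monday ... 7 = Sunday
    if [ "$hour" = "00" ] && [ "$day_of_month" = "01" ]; then
            type=monthly
    elif [ "$hour" = "00" ] && [ "$day_of_week" = "7" ]; then
            type=weekly
    else
            type=regular
    fi
    zfs snapshot "tank/data@${type}-$(date +%Y-%m-%d-%H)"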
From owner-freebsd-fs@FreeBSD.ORG  Mon Jun  6 22:24:42 2011
From: Matt Connor <bsd@xerq.net>
Date: Mon, 06 Jun 2011 15:06:56 -0700
To: freebsd-fs@freebsd.org
Subject: Re: zfs snapshot management

On Sun, 5 Jun 2011 22:32:55 -0400 (EDT), Charles Sprickman wrote:
> Since there seems to be a fair number of serious zfs users here, I
> was hoping for some further suggestions for use in either case. Any
> input is welcome...

We've had plenty of good experiences with sysutils/zfs-snapshot-mgmt

-Matt
Hellenthal" Received: from DataIX.net (localhost [127.0.0.1]) by DataIX.net (8.14.4/8.14.4) with ESMTP id p576744m012959 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Tue, 7 Jun 2011 02:07:05 -0400 (EDT) (envelope-from jhell@DataIX.net) Received: (from jhell@localhost) by DataIX.net (8.14.4/8.14.4/Submit) id p57673nN012958; Tue, 7 Jun 2011 02:07:03 -0400 (EDT) (envelope-from jhell@DataIX.net) Date: Tue, 7 Jun 2011 02:07:03 -0400 From: Jason Hellenthal To: Jeremy Chadwick Message-ID: <20110607060703.GA80203@DataIX.net> References: <4DECB197.8020102@FreeBSD.org> <20110606125706.GA2047@icarus.home.lan> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="FCuugMFkClbJLl1L" Content-Disposition: inline In-Reply-To: <20110606125706.GA2047@icarus.home.lan> X-OpenPGP-Key-Id: 0x89D8547E X-OpenPGP-Key-Fingerprint: 85EF E26B 07BB 3777 76BE B12A 9057 8789 89D8 547E X-OpenPGP-Key-URL: http://bit.ly/0x89D8547E Cc: freebsd-fs@freebsd.org, freebsd-stable@freebsd.org Subject: Re: HEADS UP: ZFS v28 merged to 8-STABLE X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: jhell@DataIX.net List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 07 Jun 2011 06:07:12 -0000 --FCuugMFkClbJLl1L Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable Jeremy, On Mon, Jun 06, 2011 at 05:57:06AM -0700, Jeremy Chadwick wrote: > On Mon, Jun 06, 2011 at 12:53:11PM +0200, Martin Matuska wrote: > > Hi, > >=20 > > I have merged ZFS version 28 to 8-STABLE (revision 222741) > >=20 > > New major features: > >=20 > > - data deduplication > > - triple parity RAIDZ (RAIDZ3) > > - zfs diff > > - zpool split > > - snapshot holds > > - zpool import -F. Allows to rewind corrupted pool to earlier > > transaction group > > - possibility to import pool in read-only mode > >=20 > > For updating, there is a compatibility layer so that in the update phase > > most functionality of the new zfs binaries can be used with the old > > kernel module and old zfs binaries with the new kernel module. > >=20 > > If upgrading your boot pool to version 28, please don't forget to read > > UPDATING and properly update your boot code. > >=20 > > Thanks to everyone working on the ZFS port, especially to > > Pawel Jakub Dawidek (pjd) for doing most of the work! >=20 > Thanks for the work on this, guys! >=20 > I've already managed to find something odd. This message only appears > on console, not via pty/tty. >=20 > icarus# zpool create backups ada2 > Solaris(cont): !created version 28 pool backups using 28 >=20 > src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/spa_history.c > contains: >=20 > 490 void > 491 spa_history_log_version(spa_t *spa, history_internal_events_t event) > 492 { > 493 #ifdef _KERNEL > 494 uint64_t current_vers =3D spa_version(spa); > 495 > 496 if (current_vers >=3D SPA_VERSION_ZPOOL_HISTORY) { > 497 spa_history_log_internal(event, spa, NULL, > 498 "pool spa %llu; zfs spa %llu; zpl %d; uts %s %s %= s %s", > 499 (u_longlong_t)current_vers, SPA_VERSION, ZPL_VERS= ION, > 500 utsname.nodename, utsname.release, utsname.versio= n, > 501 utsname.machine); > 502 } > 503 cmn_err(CE_CONT, "!%s version %llu pool %s using %llu", > 504 event =3D=3D LOG_POOL_IMPORT ? "imported" : > 505 event =3D=3D LOG_POOL_CREATE ? 
"created" : "accessed", > 506 (u_longlong_t)current_vers, spa_name(spa), SPA_VERSION); > 507 #endif > 508 } >=20 > A "zpool destroy", etc. does not print any similar message. It only > happens on pool creation. Is this intentional behaviour? What does > "Solaris(cont)" represent in the context of FreeBSD? Nothing other than a line in-difference that someone does not have to track from import/merge's. The smaller the diff's the better ;) Would be nice to trap this at some other point so it doesnt have to be displayed or changed through the whole structure of files. /me is waiting for the next big move... ;) Cheers to all the ZFS folks out there! --=20 "Unity can only be manifested by the Binary. Unity itself and the idea of U= nity are already two." -- Buddha Regards, (jhell) Jason Hellenthal --FCuugMFkClbJLl1L Content-Type: application/pgp-signature -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.17 (FreeBSD) Comment: http://bit.ly/0x89D8547E iQEcBAEBAgAGBQJN7cAGAAoJEJBXh4mJ2FR++24H/RzuViZWlwcpw0EkQE0WaSGu x1+LJZKHMTOWUgC+vcxKijLOjdqp+7x7HJJ/mzsPan3McZEsH3zVAqxKl666TYhu p8MSXX/AzR5d+OKQ+J7J/8/SR/ohvUhDp/v9UkFae6xPRkHNprZGJ1EgJDXl+of/ gR+fdZv9VkbhwFFvN30fiyetdllFkwV3KSlpH7KWu3gceRmipIqT8w/CNKll9jKj 9WRYe3UiVppFSkqg2GCr/lorSjuDka5X/6C4fHM7v0Q9ytfqcIoKEducDdnWM4Ue sCdBflZCdx7CaHFkX4UCI/PKwaKYbQCZEdfLRZ8upki0PI+J8kyEGq19OY7JcV8= =WzWw -----END PGP SIGNATURE----- --FCuugMFkClbJLl1L-- From owner-freebsd-fs@FreeBSD.ORG Tue Jun 7 09:29:15 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 026461065673 for ; Tue, 7 Jun 2011 09:29:15 +0000 (UTC) (envelope-from bartosz.stec@it4pro.pl) Received: from mainframe.kkip.pl (kkip.pl [87.105.164.78]) by mx1.freebsd.org (Postfix) with ESMTP id A8DE78FC16 for ; Tue, 7 Jun 2011 09:29:14 +0000 (UTC) Received: from mb01.admin.lan.kkip.pl ([10.66.3.0]) by mainframe.kkip.pl with esmtpsa (TLSv1:CAMELLIA256-SHA:256) (Exim 4.76 (FreeBSD)) (envelope-from ) id 1QTsav-0007PU-IF for freebsd-fs@freebsd.org; Tue, 07 Jun 2011 11:29:12 +0200 Message-ID: <4DEDEF61.7040008@it4pro.pl> Date: Tue, 07 Jun 2011 11:29:05 +0200 From: Bartosz Stec Organization: IT4Pro User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; pl; rv:1.9.2.17) Gecko/20110414 Lightning/1.0b2 Thunderbird/3.1.10 MIME-Version: 1.0 To: freebsd-fs@freebsd.org References: In-Reply-To: Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 7bit X-Authenticated-User: bartosz.stec@it4pro.pl X-Authenticator: plain X-Sender-Verify: SUCCEEDED (sender exists & accepts mail) X-Spam-Score: -8.1 X-Spam-Score-Int: -80 X-Exim-Version: 4.76 (build at 12-May-2011 10:41:54) X-Date: 2011-06-07 11:29:12 X-Connected-IP: 10.66.3.0:3761 X-Message-Linecount: 56 X-Body-Linecount: 42 X-Message-Size: 2275 X-Body-Size: 1613 X-Received-Count: 1 X-Recipient-Count: 1 X-Local-Recipient-Count: 1 X-Local-Recipient-Defer-Count: 0 X-Local-Recipient-Fail-Count: 0 Subject: Re: zfs snapshot management X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 07 Jun 2011 09:29:15 -0000 W dniu 2011-06-06 04:32, Charles Sprickman pisze: > Hello all, > > I've been using a few different tools to manage zfs snapshots in > different > scenarios. For local use, I've found that Ralf Engelschall's set of > scripts[1] that tie into the periodic(8) system work fairly well. 
From owner-freebsd-fs@FreeBSD.ORG  Tue Jun  7 09:29:15 2011
From: Bartosz Stec <bartosz.stec@it4pro.pl>
Date: Tue, 07 Jun 2011 11:29:05 +0200
To: freebsd-fs@freebsd.org
Subject: Re: zfs snapshot management

On 2011-06-06 04:32, Charles Sprickman wrote:
> I've been using a few different tools to manage zfs snapshots in
> different scenarios. [...]
> Since there seems to be a fair number of serious zfs users here, I
> was hoping for some further suggestions for use in either case. Any
> input is welcome...

I'm using the periodic scripts from here:

http://www.neces.com/blog/technology/integrating-freebsd-zfs-and-periodic-snapshots-and-scrubs

as a simple "set and forget" zfs snapshot management approach. They are
included in the ports tree too: sysutils/zfs-periodic/

I found them much more convenient than Ralf S. Engelschall's solution.

Cheers!

--
Bartosz Stec
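Both zfs-periodic and the Engelschall scripts hang off the same
mechanism: periodic(8) runs any executable dropped into the local
periodic directories. A minimal daily hook of one's own might look like
this (a generic sketch, not the actual zfs-periodic script; the pool
name and file name are assumptions):

    #!/bin/sh
    #
    # /usr/local/etc/periodic/daily/400.zfs-snapshot  (hypothetical)
    #
    # Generic sketch: take a recursive pool snapshot from the daily
    # periodic(8) run. Not the sysutils/zfs-periodic code, just the
    # same hook in miniature.
    pool=tank                          # hypothetical pool name
    tag=$(date +%Y-%m-%d)

    echo ""
    echo "Creating ZFS snapshot ${pool}@daily-${tag}:"
    zfs snapshot -r "${pool}@daily-${tag}" && echo "done."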
Hope there will be a fix for it someday. From owner-freebsd-fs@FreeBSD.ORG Tue Jun 7 16:04:14 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 96F141065674 for ; Tue, 7 Jun 2011 16:04:14 +0000 (UTC) (envelope-from marco@tolstoy.tols.org) Received: from tolstoy.tols.org (tolstoy.tols.org [IPv6:2a02:898:0:20::57:1]) by mx1.freebsd.org (Postfix) with ESMTP id 2DF3A8FC08 for ; Tue, 7 Jun 2011 16:04:13 +0000 (UTC) Received: from tolstoy.tols.org (localhost [127.0.0.1]) by tolstoy.tols.org (8.14.4/8.14.4) with ESMTP id p57G471b048617 for ; Tue, 7 Jun 2011 16:04:07 GMT (envelope-from marco@tolstoy.tols.org) X-Virus-Status: Clean X-Virus-Scanned: clamav-milter 0.97 at tolstoy.tols.org Received: (from marco@localhost) by tolstoy.tols.org (8.14.4/8.14.4/Submit) id p57G47rD048616 for freebsd-fs@freebsd.org; Tue, 7 Jun 2011 18:04:07 +0200 (CEST) (envelope-from marco) Date: Tue, 7 Jun 2011 18:04:07 +0200 From: Marco van Tol To: freebsd-fs@freebsd.org Message-ID: <20110607160406.GC43075@tolstoy.tols.org> Mail-Followup-To: freebsd-fs@freebsd.org References: <4DEE4654.6060404@hotplug.ru> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <4DEE4654.6060404@hotplug.ru> User-Agent: Mutt/1.4.2.3i X-Spam-Status: No, score=-2.9 required=5.0 tests=ALL_TRUSTED,BAYES_00 autolearn=ham version=3.3.1 X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on tolstoy.tols.org Subject: Re: zfs snapshot management X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 07 Jun 2011 16:04:14 -0000 On Tue, Jun 07, 2011 at 07:40:04PM +0400, Emil Muratov wrote: > > >>Since there seems to be a fair number of serious zfs users here, I was > >>hoping for some further suggestions for use in either case. Any > >>input is > >>welcome... > > > >We've had plenty of good experiences with sysutils/zfs-snapshot-mgmt > > I have had problems with this tool regarding daylight saving time > changes and snapshot aging calculation. I wrote to the author but didn't > get any response. > I used zfSnap (/usr/ports/sysutils/zfsnap) as a very good alternative > for quite a long time, but now it's broken for v28 since zfs -r behavior > for snapshots has changed. Hope there will be a fix for it someday. Isn't it so that cron is aware of daylight savings time changes, and will not run the same cronjob twice if on a given day hour x occurs twice? Or can be configured to be aware of that? An alternative I'm using at the moment is to have root be in UTC. Downside is that your snapshot names will be in UTC time. Just thinking outloud, didn't give it lots of thought. Marco -- Success is having to worry about every damn thing in the world, except money. 
- Johnny Cash
From owner-freebsd-fs@FreeBSD.ORG Wed Jun 8 01:11:35 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id C5F431065670 for ; Wed, 8 Jun 2011 01:11:35 +0000 (UTC) (envelope-from universite@ukr.net) Received: from otrada.od.ua (universite-1-pt.tunnel.tserv24.sto1.ipv6.he.net [IPv6:2001:470:27:140::2]) by mx1.freebsd.org (Postfix) with ESMTP id 30F868FC0C for ; Wed, 8 Jun 2011 01:11:34 +0000 (UTC) Received: from [IPv6:2001:470:28:140:a0fd:524d:3076:336] ([IPv6:2001:470:28:140:a0fd:524d:3076:336]) (authenticated bits=0) by otrada.od.ua (8.14.4/8.14.4) with ESMTP id p581BUJn062516 for ; Wed, 8 Jun 2011 04:11:30 +0300 (EEST) (envelope-from universite@ukr.net) Message-ID: <4DEECC28.9060109@ukr.net> Date: Wed, 08 Jun 2011 04:11:04 +0300 From: "Vladislav V. Prodan" User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; ru; rv:1.9.2.17) Gecko/20110414 Thunderbird/3.1.10 MIME-Version: 1.0 To: freebsd-fs@freebsd.org References: <4DDC0D13.3030401@ukr.net> <4DE91C38.8030602@ukr.net> <4DE945DC.30201@ukr.net> In-Reply-To: <4DE945DC.30201@ukr.net> Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 8bit X-Spam-Status: No, score=-95.5 required=5.0 tests=FREEMAIL_FROM,FSL_RU_URL, RDNS_NONE, SPF_SOFTFAIL, T_TO_NO_BRKTS_FREEMAIL, USER_IN_WHITELIST autolearn=no version=3.3.1 X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on mary-teresa.otrada.od.ua X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.2.7 (otrada.od.ua [IPv6:2001:470:28:140::5]); Wed, 08 Jun 2011 04:11:33 +0300 (EEST) Subject: Re: how to import raidz2, if only one disk is missing? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 08 Jun 2011 01:11:35 -0000 03.06.2011 23:36, Vladislav V. Prodan wrote: > 03.06.2011 20:39, Vladislav V. Prodan wrote: >> I put in a new 2 TB HDD, but the pool does not see gpt/disk3 :( > How do I properly set the LABEL [0-4]? > Updated to 8.2-STABLE, which added ZFS v28 During the import, neither the zfs nor the zpool commands worked. Because of this, all the zfs pools on this server suffered. # zpool import -fFX tank Pool tank returned to its state as of Monday, May 23, 2011, 18:02:15. Discarded approximately 401 minutes of transactions. Problem #1: the pool lost the connection to the gpt label. # zpool status tank pool: tank state: DEGRADED status: One or more devices has been taken offline by the administrator. Sufficient replicas exist for the pool to continue functioning in a degraded state. action: Online the device using 'zpool online' or replace the device with 'zpool replace'. scan: none requested config: NAME STATE READ WRITE CKSUM tank DEGRADED 0 0 0 raidz2-0 DEGRADED 0 0 0 ad8p1 ONLINE 0 0 0 ad12p1 ONLINE 0 0 0 ad16p1 ONLINE 0 0 0 8811702962298963660 OFFLINE 0 0 0 was /dev/gpt/disk3 ad14p1 ONLINE 0 0 0 ad18p1 ONLINE 0 0 0 Problem #2: low speed of the "zpool scrub tank" command. # zpool status tank pool: tank state: DEGRADED status: One or more devices has been taken offline by the administrator. Sufficient replicas exist for the pool to continue functioning in a degraded state. action: Online the device using 'zpool online' or replace the device with 'zpool replace'.
scan: scrub in progress since Wed Jun 8 03:56:39 2011 6,30M scanned out of 2,79T at 922K/s, (scan is slow, no estimated time) 0 repaired, 0,00% done config: NAME STATE READ WRITE CKSUM tank DEGRADED 0 0 0 raidz2-0 DEGRADED 0 0 0 ad8p1 ONLINE 0 0 0 ad12p1 ONLINE 0 0 0 ad16p1 ONLINE 0 0 0 8811702962298963660 OFFLINE 0 0 0 was /dev/gpt/disk3 ad14p1 ONLINE 0 0 0 ad18p1 ONLINE 0 0 0 errors: No known data errors -- Vladislav V. Prodan VVP24-UANIC +380[67]4584408 +380[99]4060508 vlad11@jabber.ru From owner-freebsd-fs@FreeBSD.ORG Wed Jun 8 03:53:10 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id E7127106564A for ; Wed, 8 Jun 2011 03:53:10 +0000 (UTC) (envelope-from rsimmons0@gmail.com) Received: from mail-yw0-f54.google.com (mail-yw0-f54.google.com [209.85.213.54]) by mx1.freebsd.org (Postfix) with ESMTP id AA62C8FC1D for ; Wed, 8 Jun 2011 03:53:10 +0000 (UTC) Received: by ywf7 with SMTP id 7so56313ywf.13 for ; Tue, 07 Jun 2011 20:53:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:date:message-id:subject:from:to :content-type; bh=n8zkC8H9I7NLs7w8ysnXWkiljenJhmYEdywn6CwPbFI=; b=cApP9SwUdbtVQKrCrpPeRYwVpXbgGx/Un0QwgjK2aqmVDZav3cHKAU4umb/VcRrknU v88yEmksh63er+4JZL134HyJ49pikiqm/jJy66I3wikCNEQJUiXdMC+cTaJT0c7W+ljr XL+wl6IAwNQdWtwr/P6Cp++WWNmJ0pywKd5GI= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:date:message-id:subject:from:to:content-type; b=E9joP4cjGtbc4DkPb+38IcJ7TeaJFf5E3jmHohRz90N4+W0freM+tpYxau8Vzszhke 7f1UUfTj89e8NeKiO3fNe+vhCBhoM5RvaYS5VwaD78r8TYEVfUBjQhjBLAHcG069VjUI NzgrWx8QYExqofhY+FurPrUqd6mM2LrTP87R8= MIME-Version: 1.0 Received: by 10.101.108.14 with SMTP id k14mr5594822anm.89.1307503644929; Tue, 07 Jun 2011 20:27:24 -0700 (PDT) Received: by 10.100.243.35 with HTTP; Tue, 7 Jun 2011 20:27:24 -0700 (PDT) Date: Tue, 7 Jun 2011 23:27:24 -0400 Message-ID: From: Robert Simmons To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=ISO-8859-1 Subject: GPT and disk alignment X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 08 Jun 2011 03:53:11 -0000 Do all HDDs that have 4KB per LBA present themselves to the OS as having 512 bytes per LBA? What about SSDs that have 1024 bytes per LBA? 
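As a practical aside on that question: you can ask a drive what it reports before partitioning it. This is a sketch, with ada0 and all numbers purely illustrative; diskinfo(8) prints the logical sector size, newer FreeBSD builds also report a stripesize (the usual hint that the physical sector is bigger), and camcontrol(8) shows both sizes for ATA devices that declare them:

    # diskinfo -v ada0
    ada0
            512             # sectorsize
            2000398934016   # mediasize in bytes (1.8T)
            3907029168      # mediasize in sectors
            4096            # stripesize
    # camcontrol identify ada0 | grep "sector size"
    sector size           logical 512, physical 4096, offset 0

A drive that reports logical 512 with physical 4096 is exactly the 512-emulation case asked about; plenty of 4K drives, though, claim 512 for both, which is why the datasheet (or the sticker) still matters.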
From owner-freebsd-fs@FreeBSD.ORG Wed Jun 8 04:49:55 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 25C0B106566B for ; Wed, 8 Jun 2011 04:49:55 +0000 (UTC) (envelope-from feld@feld.me) Received: from mwi1.coffeenet.org (mwi1.coffeenet.org [66.170.3.2]) by mx1.freebsd.org (Postfix) with ESMTP id F41238FC19 for ; Wed, 8 Jun 2011 04:49:54 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=feld.me; s=blargle; h=In-Reply-To:Message-Id:From:Mime-Version:Date:References:Subject:To:Content-Type; bh=+Ri4oSM+DIyD/yAB4KQmRoMC+q4q2pO+OHGwDe/mNgU=; b=nQY9byuDG5AyErxqydJZmO0VvYzOtLq4amxAm4LcXmjbG/QxrKteECZwE4NICqb0zQ2uAaT4xtR+K0+ebfEj1IhdP+egfDGMeHNIPbvQNzsYEOj0/Df4BXO4i1xgMm3V; Received: from localhost ([127.0.0.1] helo=mwi1.coffeenet.org) by mwi1.coffeenet.org with esmtp (Exim 4.76 (FreeBSD)) (envelope-from ) id 1QUAQA-0008vi-Co for freebsd-fs@freebsd.org; Tue, 07 Jun 2011 23:31:10 -0500 Received: from feld@feld.me by mwi1.coffeenet.org (Archiveopteryx 3.1.3) with esmtpsa id 1307507464-36980-36979/7/3; Wed, 8 Jun 2011 04:31:04 +0000 Content-Type: text/plain; format=flowed; delsp=yes To: freebsd-fs@freebsd.org References: Date: Tue, 7 Jun 2011 23:29:32 -0500 Mime-Version: 1.0 From: Mark Felder Message-Id: In-Reply-To: User-Agent: Opera Mail/11.50 (FreeBSD) Subject: Re: GPT and disk alignment X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 08 Jun 2011 04:49:55 -0000 On Tue, 07 Jun 2011 22:27:24 -0500, Robert Simmons wrote: > Do all HDDs that have 4KB per LBA present themselves to the OS as > having 512 bytes per LBA? No > What about SSDs that have 1024 bytes per LBA? Not sure, but I do know that not all flash media have the same bytes per LBA internally. Some are 1K, some 4K, some even 8K. GPT is definitely the way to go if you want to make sure you're aligned. 
Regards, Mark From owner-freebsd-fs@FreeBSD.ORG Wed Jun 8 05:10:07 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id D2B2F1065673 for ; Wed, 8 Jun 2011 05:10:07 +0000 (UTC) (envelope-from jdc@koitsu.dyndns.org) Received: from qmta15.westchester.pa.mail.comcast.net (qmta15.westchester.pa.mail.comcast.net [76.96.59.228]) by mx1.freebsd.org (Postfix) with ESMTP id 7FA8F8FC17 for ; Wed, 8 Jun 2011 05:10:07 +0000 (UTC) Received: from omta24.westchester.pa.mail.comcast.net ([76.96.62.76]) by qmta15.westchester.pa.mail.comcast.net with comcast id tH2n1g0021ei1Bg5FHA7dH; Wed, 08 Jun 2011 05:10:07 +0000 Received: from koitsu.dyndns.org ([67.180.84.87]) by omta24.westchester.pa.mail.comcast.net with comcast id tHA61g00V1t3BNj3kHA7Be; Wed, 08 Jun 2011 05:10:07 +0000 Received: by icarus.home.lan (Postfix, from userid 1000) id 4B056102C37; Tue, 7 Jun 2011 22:10:05 -0700 (PDT) Date: Tue, 7 Jun 2011 22:10:05 -0700 From: Jeremy Chadwick To: Mark Felder Message-ID: <20110608051005.GA83928@icarus.home.lan> References: MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.5.21 (2010-09-15) Cc: freebsd-fs@freebsd.org Subject: Re: GPT and disk alignment X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 08 Jun 2011 05:10:07 -0000 On Tue, Jun 07, 2011 at 11:29:32PM -0500, Mark Felder wrote: > On Tue, 07 Jun 2011 22:27:24 -0500, Robert Simmons > wrote: > > >Do all HDDs that have 4KB per LBA present themselves to the OS as > >having 512 bytes per LBA? > > No > > >What about SSDs that have 1024 bytes per LBA? > > Not sure, but I do know that not all flash media have the same bytes > per LBA internally. Some are 1K, some 4K, some even 8K. GPT is > definitely the way to go if you want to make sure you're aligned. Maybe I've misread what you've wrote, but since when was GPT a requirement for partition boundary alignment? -- | Jeremy Chadwick jdc@parodius.com | | Parodius Networking http://www.parodius.com/ | | UNIX Systems Administrator Mountain View, CA, US | | Making life hard for others since 1977. 
PGP 4BD6C0CB | From owner-freebsd-fs@FreeBSD.ORG Wed Jun 8 05:29:44 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 8B4E71065670 for ; Wed, 8 Jun 2011 05:29:44 +0000 (UTC) (envelope-from rsimmons0@gmail.com) Received: from mail-gx0-f182.google.com (mail-gx0-f182.google.com [209.85.161.182]) by mx1.freebsd.org (Postfix) with ESMTP id 49A3B8FC18 for ; Wed, 8 Jun 2011 05:29:44 +0000 (UTC) Received: by gxk28 with SMTP id 28so82176gxk.13 for ; Tue, 07 Jun 2011 22:29:43 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:in-reply-to:references:date :message-id:subject:from:to:content-type; bh=MRtx3KdjRRzNv/yMC2Le6H2vyZ8iNFuJb1Q//VJQjrw=; b=fHRspVnWI31Mip/qj3hRGPWOy5s4LCAnj4pRYAXUCSq2P44C2lhVrRRCUQy6cfLBYM WHANa1On9zeSFClQRGhuiEaaRJEka77kDsT280omshuv/LmS8a0QiItWAD4lkWZxpWt+ T5QjNoVAyU65kEi0L6r4xB02ddNhGPU1074zA= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :content-type; b=NnT7UEOA65U3BGnwOxs4NFXEsIKwSY64Cv5rmKgDtRxNMPdF4ZCkj4FkNT7pdPZ3qe P4wWSLvEcosPD+teqtiYUJ1YJS916FVcb7ZIVhK6JtwrbxahcKULWPyhAkuqK6hJlczm v2O3L/HfwUMSCJD7f5ygWMKUtqljjGsht1EL8= MIME-Version: 1.0 Received: by 10.101.152.32 with SMTP id e32mr5676989ano.45.1307510983464; Tue, 07 Jun 2011 22:29:43 -0700 (PDT) Received: by 10.100.243.35 with HTTP; Tue, 7 Jun 2011 22:29:43 -0700 (PDT) In-Reply-To: References: Date: Wed, 8 Jun 2011 01:29:43 -0400 Message-ID: From: Robert Simmons To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=ISO-8859-1 Subject: Re: GPT and disk alignment X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 08 Jun 2011 05:29:44 -0000 On Wed, Jun 8, 2011 at 12:29 AM, Mark Felder wrote: > On Tue, 07 Jun 2011 22:27:24 -0500, Robert Simmons > wrote: >> Do all HDDs that have 4KB per LBA present themselves to the OS as >> having 512 bytes per LBA? > > No Ok, but can I assume that all HDDs of this type expand each of the 4K sectors so that physically they take up the same space as eight 512 byte LBAs? AFAIK, the new 4K LBA has a smaller ECC area than the sum of 8 ECC areas in 512 byte LBAs, so if the data area was _not_ expanded slightly, you would never really be aligned except every x LBAs as the shifting approaches an LBA boundary, right? For any HDDs, do I need to worry about cylinder boundaries at all? Has the reported "disk geometry" become divorced from the physical reality in modern disks? If I do still need to worry about cylinder boundaries, should I basically ignore every reported geometry (BIOS, OS) and use what is written on the sticker on the drive? >> What about SSDs that have 1024 bytes per LBA? > > Not sure, but I do know that not all flash media have the same bytes per LBA > internally. Some are 1K, some 4K, some even 8K. GPT is definitely the way to > go if you want to make sure you're aligned. Ok, is there some way to tell gpart(8) what the LBA size is, or do I have to calculate the offset of each partition manually? In Linux it would be "fdisk -b 1024" for the example of SSDs or "fdisk -b 4096" for 4K HDDs. Can I just ignore the idea of "cylinder boundaries" completely when dealing with SSDs and flash memory? 
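One widely used dodge for exactly this situation, sketched below with ada0 and the partition size as pure placeholders (and assuming 512-byte logical sectors): don't try to learn the true internal size at all, just start partitions on a 1 MiB boundary, which is divisible by every plausible 512/1K/4K/8K unit:

    # gpart create -s gpt ada0
    # gpart add -b 2048 -s 20971520 -t freebsd-zfs ada0
    # echo $((2048 * 512))
    1048576

gpart add's -b and -s count logical blocks, so -b 2048 puts the partition at byte 1048576 (1 MiB); any later partition sized in whole MiB keeps the same property.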
From owner-freebsd-fs@FreeBSD.ORG Wed Jun 8 07:26:01 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id B21541065672 for ; Wed, 8 Jun 2011 07:26:01 +0000 (UTC) (envelope-from numisemis@gmail.com) Received: from mail-bw0-f54.google.com (mail-bw0-f54.google.com [209.85.214.54]) by mx1.freebsd.org (Postfix) with ESMTP id 3B2AD8FC1B for ; Wed, 8 Jun 2011 07:26:00 +0000 (UTC) Received: by bwz12 with SMTP id 12so265491bwz.13 for ; Wed, 08 Jun 2011 00:26:00 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:in-reply-to:references:date :message-id:subject:from:to:cc:content-type; bh=ILycqe3t6IO7xKpkUOAWQvWU/oCrAECdHUxrJ47wuT8=; b=prW94cT8tvzB5yp3vF3U9tSZLRnsUU3DZUdyVpFmBythVc5ARnKum7sUOiqUQizEWL NWdOnB/NZivUZne/HW+27HcQaoiiGbuhIuvaHxyHFveg1sJgAIcpjzrnRACkz/pYsY5R ONKV5wVFf8HDu0JcYu6fO9eoo7osdjSyWnNRQ= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; b=Cs+55SDt/y/zeiR2n/UANIVL+nLTxY94k2EYudSFDcrc/CZJQJjfeBHbYdFMhoJBB3 +axecVU0Viz47A7JK9wUyDtIzxn4CeL4oZCZFEHCYPjnTrf5jKAlYhawO4bXWe+zvsuy 2a9qVXZF2zwSXwl7EXC6waBOZMjubjyzV4K+4= MIME-Version: 1.0 Received: by 10.204.152.5 with SMTP id e5mr326634bkw.138.1307517225930; Wed, 08 Jun 2011 00:13:45 -0700 (PDT) Received: by 10.204.180.139 with HTTP; Wed, 8 Jun 2011 00:13:45 -0700 (PDT) In-Reply-To: References: Date: Wed, 8 Jun 2011 09:13:45 +0200 Message-ID: From: =?UTF-8?Q?=C5=A0imun_Mikecin?= To: Robert Simmons Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Cc: freebsd-fs@freebsd.org Subject: Re: GPT and disk alignment X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 08 Jun 2011 07:26:01 -0000 2011/6/8 Robert Simmons wrote: > Ok, but can I assume that all HDDs of this type expand each of the 4K sectors so that physically they take up the same space as eight 512 > byte LBAs? AFAIK, the new 4K LBA has a smaller ECC area than the sum > of 8 ECC areas in 512 byte LBAs, so if the data area was _not_ > expanded slightly, you would never really be aligned except every x > LBAs as the shifting approaches an LBA boundary, right? Wrong, leave ECC out of the equation. ECC size is totally transparent and hidden to everything except the drive itself. Sector sizes that drives present to outside world contain only data part, so 512 or 4K is the size of data part. > For any HDDs, do I need to worry about cylinder boundaries at all? > Has the reported "disk geometry" become divorced from the physical > reality in modern disks? If I do still need to worry about cylinder > boundaries, should I basically ignore every reported geometry (BIOS, > OS) and use what is written on the sticker on the drive? > Can I just ignore the idea of "cylinder boundaries" completely when > dealing with SSDs and flash memory? Ignore geometries and cylinder boundaries for all of them (modern hard drives, SSD and flash memory). Those exist only for compatibility reasons. Physical drive geometry on modern hard drives is hidden (and probably is asymmetric), so there is no point in trying to optimize by using cylinder boundaries. 
From owner-freebsd-fs@FreeBSD.ORG Wed Jun 8 07:46:43 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id B6F691065674 for ; Wed, 8 Jun 2011 07:46:43 +0000 (UTC) (envelope-from gpm@hotplug.ru) Received: from gate.pikinvest.ru (gate.pikinvest.ru [87.245.155.170]) by mx1.freebsd.org (Postfix) with ESMTP id 6B3B78FC1A for ; Wed, 8 Jun 2011 07:46:43 +0000 (UTC) Received: from localhost (localhost [127.0.0.1]) by mailgate.pik.ru (Postfix) with ESMTP id B29ED1C0880 for ; Wed, 8 Jun 2011 11:46:40 +0400 (MSD) Received: from EX03PIK.PICompany.ru (unknown [192.168.156.51]) by mailgate.pik.ru (Postfix) with ESMTP id B0B731C0879 for ; Wed, 8 Jun 2011 11:46:40 +0400 (MSD) Received: from [192.168.148.9] ([192.168.148.9]) by EX03PIK.PICompany.ru with Microsoft SMTPSVC(6.0.3790.4675); Wed, 8 Jun 2011 11:46:36 +0400 Message-ID: <4DEF28BE.4090303@hotplug.ru> Date: Wed, 08 Jun 2011 11:46:06 +0400 From: Emil Muratov User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.17) Gecko/20110424 Thunderbird/3.1.10 MIME-Version: 1.0 To: freebsd-fs@freebsd.org References: <4DEE4654.6060404@hotplug.ru> <20110607160406.GC43075@tolstoy.tols.org> In-Reply-To: <20110607160406.GC43075@tolstoy.tols.org> Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 7bit X-OriginalArrivalTime: 08 Jun 2011 07:46:36.0295 (UTC) FILETIME=[3190D970:01CC25B0] Subject: Re: zfs snapshot management X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 08 Jun 2011 07:46:43 -0000 >>> We've had plenty of good experiences with sysutils/zfs-snapshot-mgmt >> I have had problems with this tool regarding daylight saving time >> changes and snapshot aging calculation. I wrote to the author but didn't >> get any response. >> I used zfSnap (/usr/ports/sysutils/zfsnap) as a very good alternative >> for quite a long time, but now it's broken for v28 since zfs -r behavior >> for snapshots has changed. Hope there will be a fix for it someday. > Isn't it so that cron is aware of daylight savings time changes, and > will not run the same cronjob twice if on a given day hour x occurs > twice? Or can be configured to be aware of that? > > An alternative I'm using at the moment is to have root be in UTC. > Downside is that your snapshot names will be in UTC time. > No, cron is fine. The problem was in the script itself. As far as I remember it calculates snapshot age by checking if snapshot age in minutes is even to 24h, which is definitely not true after daylight shift. So such snapshots were not purged properly. 
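The robust way around that whole class of bug is to age snapshots by raw epoch seconds, which never jump with daylight saving. A minimal sketch in plain sh (not the code of any tool named here; the one-week cutoff and the echo dry-run are arbitrary choices), relying on zfs get -p printing the creation property as seconds since the epoch:

    #!/bin/sh
    # List the destroy commands for snapshots older than a week,
    # comparing raw epoch seconds so DST shifts cannot skew the age.
    maxage=$((7 * 86400))
    now=$(date +%s)
    zfs list -H -t snapshot -o name | while read snap; do
        created=$(zfs get -Hp -o value creation "$snap")
        [ $((now - created)) -gt $maxage ] && echo zfs destroy "$snap"
    done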
From owner-freebsd-fs@FreeBSD.ORG Wed Jun 8 07:55:28 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 52A41106566B for ; Wed, 8 Jun 2011 07:55:28 +0000 (UTC) (envelope-from jdc@koitsu.dyndns.org) Received: from qmta12.emeryville.ca.mail.comcast.net (qmta12.emeryville.ca.mail.comcast.net [76.96.27.227]) by mx1.freebsd.org (Postfix) with ESMTP id 39BB48FC13 for ; Wed, 8 Jun 2011 07:55:27 +0000 (UTC) Received: from omta21.emeryville.ca.mail.comcast.net ([76.96.30.88]) by qmta12.emeryville.ca.mail.comcast.net with comcast id tKv41g0011u4NiLACKvS5C; Wed, 08 Jun 2011 07:55:26 +0000 Received: from koitsu.dyndns.org ([67.180.84.87]) by omta21.emeryville.ca.mail.comcast.net with comcast id tKvE1g00B1t3BNj8hKvFah; Wed, 08 Jun 2011 07:55:15 +0000 Received: by icarus.home.lan (Postfix, from userid 1000) id 779DD102C37; Wed, 8 Jun 2011 00:55:26 -0700 (PDT) Date: Wed, 8 Jun 2011 00:55:26 -0700 From: Jeremy Chadwick To: Robert Simmons Message-ID: <20110608075526.GA85577@icarus.home.lan> References: MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.5.21 (2010-09-15) Cc: freebsd-fs@freebsd.org Subject: Re: GPT and disk alignment X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 08 Jun 2011 07:55:28 -0000 On Wed, Jun 08, 2011 at 01:29:43AM -0400, Robert Simmons wrote: > On Wed, Jun 8, 2011 at 12:29 AM, Mark Felder wrote: > > On Tue, 07 Jun 2011 22:27:24 -0500, Robert Simmons > > wrote: > >> Do all HDDs that have 4KB per LBA present themselves to the OS as > >> having 512 bytes per LBA? > > > > No > > Ok, but can I assume that all HDDs of this type expand each of the 4K > sectors so that physically they take up the same space as eight 512 > byte LBAs? AFAIK, the new 4K LBA has a smaller ECC area than the sum > of 8 ECC areas in 512 byte LBAs, so if the data area was _not_ > expanded slightly, you would never really be aligned except every x > LBAs as the shifting approaches an LBA boundary, right? > > For any HDDs, do I need to worry about cylinder boundaries at all? > Has the reported "disk geometry" become divorced from the physical > reality in modern disks? If I do still need to worry about cylinder > boundaries, should I basically ignore every reported geometry (BIOS, > OS) and use what is written on the sticker on the drive? > > >> What about SSDs that have 1024 bytes per LBA? > > > > Not sure, but I do know that not all flash media have the same bytes per LBA > > internally. Some are 1K, some 4K, some even 8K. GPT is definitely the way to > > go if you want to make sure you're aligned. > > Ok, is there some way to tell gpart(8) what the LBA size is, or do I > have to calculate the offset of each partition manually? In Linux it > would be "fdisk -b 1024" for the example of SSDs or "fdisk -b 4096" > for 4K HDDs. I would think you'd just use "gpart -b" to specify the base offset. For example, on an Intel 320-series SSD (which uses a NAND flash cell size of 8192 bytes), "gpart -b 8" should end up at byte 65536 within the flash itself. 
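A unit check may help here, since gpart add's -b counts logical blocks rather than bytes (plain arithmetic, assuming 512-byte logical sectors):

    # echo $((8 * 512))
    4096
    # echo $((16 * 512))
    8192
    # echo $((65536 / 512))
    128

On those assumptions "-b 8" lands at byte 4096 rather than 65536; a start LBA divisible by 16 is what hits 8192-byte cell boundaries, and "-b 128" would be the start that sits at byte 65536.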
I'm not sure if using 8 is correct though -- that is to say, I believe there is other space near the beginning of the drive which is used for things like the boot loader (I don't mean boot0, I mean boot2/loader and friends), or for the GPT loader or GPT ZFS loader. I could be wrong on this part -- need someone to correct me. All these different loaders and GPT support on FreeBSD seriously makes my head spin. Anyway back to SSDs: I have yet to see anyone list off all the *actual* NAND flash cell sizes of SSDs. For example, everyone said "4KBytes" for Intel SSDs, but come to find out it's actually 8KBytes. Don't confuse NAND flash cell size with NAND erase page size. They're two different things. Multiple cells make up (fit into) a single erase page. The alignment issue only applies to the cell part, not the erase page size. (I just had a discussion with an end-user on Intel's forum about this; someone had lead him to believe the erase page size was what he should align to). For example, on Intel 320-series drives, the NAND erase page size is 256 cells, thus 256*8192 = 2097152, or 2MBytes. Just a technical FYI bit for those curious. -- | Jeremy Chadwick jdc@parodius.com | | Parodius Networking http://www.parodius.com/ | | UNIX Systems Administrator Mountain View, CA, US | | Making life hard for others since 1977. PGP 4BD6C0CB | From owner-freebsd-fs@FreeBSD.ORG Wed Jun 8 08:08:22 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 7B01E1065672 for ; Wed, 8 Jun 2011 08:08:22 +0000 (UTC) (envelope-from alexander@leidinger.net) Received: from mail.ebusiness-leidinger.de (mail.ebusiness-leidinger.de [217.11.53.44]) by mx1.freebsd.org (Postfix) with ESMTP id 239F28FC13 for ; Wed, 8 Jun 2011 08:08:21 +0000 (UTC) Received: from outgoing.leidinger.net (p4FC41CD9.dip.t-dialin.net [79.196.28.217]) by mail.ebusiness-leidinger.de (Postfix) with ESMTPSA id 692FC844015; Wed, 8 Jun 2011 10:08:08 +0200 (CEST) Received: from webmail.leidinger.net (webmail.Leidinger.net [IPv6:fd73:10c7:2053:1::3:102]) by outgoing.leidinger.net (Postfix) with ESMTP id B2DAC3AEA; Wed, 8 Jun 2011 10:08:05 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=Leidinger.net; s=outgoing-alex; t=1307520485; bh=44h8etKJYDfBsL71LwyoDuvforKKZlTXgqHDgw20fDw=; h=Message-ID:Date:From:To:Cc:Subject:References:In-Reply-To: MIME-Version:Content-Type:Content-Transfer-Encoding; b=JlLqxS8a8bq6TUFIHB/AQRkCxFBayug0AnPFrEwa4pSTcLyrCbxrXwXS2sZL+z+lK pP3oWBBFhn8IlfZL24aiYrygkQ/imBiDJKpUgPeQjWkmKz7jxm2zANJC/T2pdNb4EZ CVHGVM5fphOJJE5TsqzNIrh0mocH6BQAd9OgeLl9glpvJqtbOzVc5l6anTkmkjBGAL orjU38DC6l78+DfsgCdyAifS9Pxp10K2g81+veRj/GR/HbIVhmsEZlnY/mR2clUiwP Bs4g1oXlrgGFw44NoF8Kf0ircXZzh/CxXefxlVsuRN9itfXTus7clJSiMf78dYy+jv gO25pe+uK/3/Q== Received: (from www@localhost) by webmail.leidinger.net (8.14.4/8.14.4/Submit) id p58885mP060331; Wed, 8 Jun 2011 10:08:05 +0200 (CEST) (envelope-from Alexander@Leidinger.net) X-Authentication-Warning: webmail.leidinger.net: www set sender to Alexander@Leidinger.net using -f Received: from pslux.ec.europa.eu (pslux.ec.europa.eu [158.169.9.14]) by webmail.leidinger.net (Horde Framework) with HTTP; Wed, 08 Jun 2011 10:08:05 +0200 Message-ID: <20110608100805.71572h6edc04klid@webmail.leidinger.net> Date: Wed, 08 Jun 2011 10:08:05 +0200 From: Alexander Leidinger To: Robert Simmons References: In-Reply-To: MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8; DelSp="Yes"; format="flowed" 
Content-Disposition: inline Content-Transfer-Encoding: 7bit User-Agent: Dynamic Internet Messaging Program (DIMP) H3 (1.1.6) X-EBL-MailScanner-Information: Please contact the ISP for more information X-EBL-MailScanner-ID: 692FC844015.A1514 X-EBL-MailScanner: Found to be clean X-EBL-MailScanner-SpamCheck: not spam, spamhaus-ZEN, SpamAssassin (not cached, score=-0.023, required 6, autolearn=disabled, DKIM_SIGNED 0.10, DKIM_VALID -0.10, DKIM_VALID_AU -0.10, TW_ZF 0.08) X-EBL-MailScanner-From: alexander@leidinger.net X-EBL-MailScanner-Watermark: 1308125289.41489@TCjAGigVu4PbtEXkky1WWA X-EBL-Spam-Status: No Cc: freebsd-fs@freebsd.org Subject: Re: GPT and disk alignment X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 08 Jun 2011 08:08:22 -0000 Quoting Robert Simmons (from Wed, 8 Jun 2011 01:29:43 -0400): > Ok, is there some way to tell gpart(8) what the LBA size is, or do I > have to calculate the offset of each partition manually? In Linux it > would be "fdisk -b 1024" for the example of SSDs or "fdisk -b 4096" > for 4K HDDs. Here is what I did to align to 4k sectors with gpart: http://www.leidinger.net/blog/2011/05/03/another-root-on-zfs-howto-optimized-for-4k-sector-drives/ Bye, Alexander. -- I love dogs, but I hate Chihuahuas. A Chihuahua isn't a dog. It's a rat with a thyroid problem. http://www.Leidinger.net Alexander @ Leidinger.net: PGP ID = B0063FE7 http://www.FreeBSD.org netchild @ FreeBSD.org : PGP ID = 72077137 From owner-freebsd-fs@FreeBSD.ORG Wed Jun 8 08:13:38 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 434A71065670 for ; Wed, 8 Jun 2011 08:13:38 +0000 (UTC) (envelope-from tdb@carrick.bishnet.net) Received: from carrick.bishnet.net (carrick.bishnet.net [IPv6:2a01:348:132:1::1]) by mx1.freebsd.org (Postfix) with ESMTP id 05B768FC08 for ; Wed, 8 Jun 2011 08:13:37 +0000 (UTC) Received: from [2a01:348:132:51::10] (helo=carrick-users) by carrick.bishnet.net with esmtps (TLSv1:AES256-SHA:256) (Exim 4.76 (FreeBSD)) (envelope-from ) id 1QUDu2-0002mh-BM; Wed, 08 Jun 2011 09:14:14 +0100 Received: (from tdb@localhost) by carrick-users (8.14.4/8.14.4/Submit) id p588EDsR010702; Wed, 8 Jun 2011 09:14:14 +0100 (BST) (envelope-from tdb) Date: Wed, 8 Jun 2011 09:14:13 +0100 From: Tim Bishop To: Emil Muratov Message-ID: <20110608081413.GI81872@carrick-users.bishnet.net> References: <4DEE4654.6060404@hotplug.ru> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <4DEE4654.6060404@hotplug.ru> X-PGP-Key: 0x5AE7D984, http://www.bishnet.net/tim/tim-bishnet-net.asc X-PGP-Fingerprint: 1453 086E 9376 1A50 ECF6 AE05 7DCE D659 5AE7 D984 User-Agent: Mutt/1.5.21 (2010-09-15) Cc: freebsd-fs@freebsd.org Subject: Re: zfs snapshot management X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 08 Jun 2011 08:13:38 -0000 On Tue, Jun 07, 2011 at 07:40:04PM +0400, Emil Muratov wrote: > I used zfSnap (/usr/ports/sysutils/zfsnap) as a very good alternative > for quite a long time, but now it's broken for v28 since zfs -r behavior > for snapshots has changed. Hope there will be a fix for it someday. A fix went in a few days ago. 
You should use the -zpool28fix flag when running zfsnap. See this page for more information: http://wiki.bsdroot.lv/zfsnap#zpool_v28_zfs_destroy_-r_bug Tim. -- Tim Bishop http://www.bishnet.net/tim/ PGP Key: 0x5AE7D984 From owner-freebsd-fs@FreeBSD.ORG Wed Jun 8 12:30:14 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 52DF7106566B; Wed, 8 Jun 2011 12:30:14 +0000 (UTC) (envelope-from to.my.trociny@gmail.com) Received: from mail-fx0-f54.google.com (mail-fx0-f54.google.com [209.85.161.54]) by mx1.freebsd.org (Postfix) with ESMTP id A31738FC0C; Wed, 8 Jun 2011 12:30:13 +0000 (UTC) Received: by fxm11 with SMTP id 11so412331fxm.13 for ; Wed, 08 Jun 2011 05:30:12 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:from:to:cc:subject:references:x-comment-to :sender:date:in-reply-to:message-id:user-agent:mime-version :content-type; bh=jM5TBuXo1vf8ha0SGMRU8S+ePfy7PhjPyRoOFxXhya4=; b=rYp13DhEjZPFOqtzXDzTKCZKRgpiyK8WOL0+GIyLv1MQO4XZfKMCarVRWyvp87q1/P VLov4uE4zESRUzeUaXNd3UC9QDv9g8ZKKLKdS/7ZFRKwto3hfkHqKmkBnzmgfc3dkokj Q1xG5Kq2yL6OtckCelG31OReYD46P9tswrHQM= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=from:to:cc:subject:references:x-comment-to:sender:date:in-reply-to :message-id:user-agent:mime-version:content-type; b=IEjpyL3OzXMSwIvgLHcBXzAXPmAMaehi6KE4GVXAvXXAEerqNGhNRuENttJLlTXOYo N7n0C5AgJn/4jqZW+QTBptaYyKhBnKl1kTTe7P/lmKKXZ1054bxmTpO7qqla6HlkR5Ps eLDHswqjarCCni/Tt0xcbn2id6ejfKmifav2o= Received: by 10.223.17.141 with SMTP id s13mr3977892faa.23.1307536212647; Wed, 08 Jun 2011 05:30:12 -0700 (PDT) Received: from localhost ([95.69.172.154]) by mx.google.com with ESMTPS id l26sm210749fam.45.2011.06.08.05.30.10 (version=TLSv1/SSLv3 cipher=OTHER); Wed, 08 Jun 2011 05:30:11 -0700 (PDT) From: Mikolaj Golub To: Yurius Radomskyi References: X-Comment-To: Yurius Radomskyi Sender: Mikolaj Golub Date: Wed, 08 Jun 2011 15:30:09 +0300 In-Reply-To: (Yurius Radomskyi's message of "Thu, 2 Jun 2011 11:47:26 +0300") Message-ID: <86r574ijzi.fsf@kopusha.home.net> User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/23.2 (berkeley-unix) MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Cc: freebsd-fs@freebsd.org, freebsd-stable@freebsd.org Subject: Re: hast syncronization speed issue X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 08 Jun 2011 12:30:14 -0000 On Thu, 2 Jun 2011 11:47:26 +0300 Yurius Radomskyi wrote: YR> Hi, YR> I have a HAST device set up between two systems. I experience very low YR> speed with dirty blocks synchronization after split-brain condition YR> been recovered: it's 200KB/s average on 1Gbit link. On the other side, YR> when i copy a big file to the zfs partition that is created on top of YR> the hast device the synchronization speed between the host is 50MB/s YR> (wich is not too high for 1Gbit link, but acceptable.) Could you please try the patch (the kernel needs rebuilding)? http://people.freebsd.org/~trociny/uipc_socket.c.patch The patch was committed to current (r222454) and is going to be MFCed after some time. 
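For anyone wanting to try that, the usual drill for testing such a kernel patch is roughly the following (a sketch of the standard procedure only, assuming sources in /usr/src and a GENERIC kernel config; adjust KERNCONF to your own):

    # cd /usr/src
    # fetch http://people.freebsd.org/~trociny/uipc_socket.c.patch
    # patch < uipc_socket.c.patch
    # make buildkernel KERNCONF=GENERIC && make installkernel KERNCONF=GENERIC
    # shutdown -r now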
-- Mikolaj Golub
From owner-freebsd-fs@FreeBSD.ORG Wed Jun 8 14:56:03 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 7DEC2106566B for ; Wed, 8 Jun 2011 14:56:03 +0000 (UTC) (envelope-from feld@feld.me) Received: from mwi1.coffeenet.org (mwi1.coffeenet.org [66.170.3.2]) by mx1.freebsd.org (Postfix) with ESMTP id 302CE8FC0A for ; Wed, 8 Jun 2011 14:56:02 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=feld.me; s=blargle; h=In-Reply-To:Message-Id:Date:From:Content-Transfer-Encoding:Mime-Version:References:Subject:To:Content-Type; bh=SmyJT7YNXI8m95YTNli7ElQ7nO+aRrCnaEej9NajpGs=; b=f4nqZTF61ZUdFWOXjORs3VH2BsCfBOR3UsJU8T9UcW4cjO/UtzxCJYmaSdUfoSeFCLa85Ee73HohYawciACHCEN+tXFeLGvDlE9sbxG/DRlBJKl6fK+xg9+uzqDLkJ2Z; Received: from localhost ([127.0.0.1] helo=mwi1.coffeenet.org) by mwi1.coffeenet.org with esmtp (Exim 4.76 (FreeBSD)) (envelope-from ) id 1QUKB9-00064O-8s for freebsd-fs@freebsd.org; Wed, 08 Jun 2011 09:56:19 -0500 Received: from feld@feld.me by mwi1.coffeenet.org (Archiveopteryx 3.1.3) with esmtpsa id 1307544973-36980-36979/7/6; Wed, 8 Jun 2011 14:56:13 +0000 Content-Type: text/plain; charset=utf-8; format=flowed; delsp=yes To: freebsd-fs@freebsd.org References: <20110608100805.71572h6edc04klid@webmail.leidinger.net> Mime-Version: 1.0 Content-Transfer-Encoding: quoted-printable From: Mark Felder Date: Wed, 8 Jun 2011 09:54:41 -0500 Message-Id: In-Reply-To: <20110608100805.71572h6edc04klid@webmail.leidinger.net> User-Agent: Opera Mail/11.50 (FreeBSD) Subject: Re: GPT and disk alignment X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 08 Jun 2011 14:56:03 -0000 On Wed, 08 Jun 2011 03:08:05 -0500, Alexander Leidinger wrote: > Here is what I did to align to 4k sectors with gpart: > http://www.leidinger.net/blog/2011/05/03/another-root-on-zfs-howto-optimized-for-4k-sector-drives/ This is exactly what I do as well.
Regards, Mark From owner-freebsd-fs@FreeBSD.ORG Wed Jun 8 21:29:23 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id F3009106564A; Wed, 8 Jun 2011 21:29:22 +0000 (UTC) (envelope-from marius@alchemy.franken.de) Received: from alchemy.franken.de (alchemy.franken.de [194.94.249.214]) by mx1.freebsd.org (Postfix) with ESMTP id 73D698FC0A; Wed, 8 Jun 2011 21:29:22 +0000 (UTC) Received: from alchemy.franken.de (localhost [127.0.0.1]) by alchemy.franken.de (8.14.4/8.14.4/ALCHEMY.FRANKEN.DE) with ESMTP id p58LC3BI035488; Wed, 8 Jun 2011 23:12:03 +0200 (CEST) (envelope-from marius@alchemy.franken.de) Received: (from marius@localhost) by alchemy.franken.de (8.14.4/8.14.4/Submit) id p58LC30b035487; Wed, 8 Jun 2011 23:12:03 +0200 (CEST) (envelope-from marius) Date: Wed, 8 Jun 2011 23:12:03 +0200 From: Marius Strobl To: Martin Matuska , ppc@freebsd.org, sparc64@freebsd.org Message-ID: <20110608211203.GA35440@alchemy.franken.de> References: <4DECB197.8020102@FreeBSD.org> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <4DECB197.8020102@FreeBSD.org> User-Agent: Mutt/1.4.2.3i Cc: freebsd-fs@freebsd.org, freebsd-stable@freebsd.org Subject: Re: HEADS UP: ZFS v28 merged to 8-STABLE X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 08 Jun 2011 21:29:23 -0000 On Mon, Jun 06, 2011 at 12:53:11PM +0200, Martin Matuska wrote: > Hi, > > I have merged ZFS version 28 to 8-STABLE (revision 222741) > > New major features: > > - data deduplication > - triple parity RAIDZ (RAIDZ3) > - zfs diff > - zpool split > - snapshot holds > - zpool import -F. Allows to rewind corrupted pool to earlier > transaction group > - possibility to import pool in read-only mode > > For updating, there is a compatibility layer so that in the update phase > most functionality of the new zfs binaries can be used with the old > kernel module and old zfs binaries with the new kernel module. Beware that the compatibility layer is known broken on big-endian architectures, i.e. powerpc64 and sparc64. > > If upgrading your boot pool to version 28, please don't forget to read > UPDATING and properly update your boot code. > > Thanks to everyone working on the ZFS port, especially to > Pawel Jakub Dawidek (pjd) for doing most of the work! 
> Marius
From owner-freebsd-fs@FreeBSD.ORG Wed Jun 8 22:34:08 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 87EDE106566C for ; Wed, 8 Jun 2011 22:34:08 +0000 (UTC) (envelope-from rsimmons0@gmail.com) Received: from mail-yi0-f54.google.com (mail-yi0-f54.google.com [209.85.218.54]) by mx1.freebsd.org (Postfix) with ESMTP id 42B588FC08 for ; Wed, 8 Jun 2011 22:34:07 +0000 (UTC) Received: by yie13 with SMTP id 13so686481yie.13 for ; Wed, 08 Jun 2011 15:34:07 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:in-reply-to:references:date :message-id:subject:from:to:content-type:content-transfer-encoding; bh=hhpHQiEdEcG6DU4CGxuyA9LReNcycWROrfbYfV8BbxQ=; b=tqg4hSOEhfzZE0JcyW1cybfiHCIonnjZh59+nUTW2EdEus8Adtu7prlGkDYIoyQixI eZ55nHSJR8PHK+eFFGoXKwU1PjXZlXwTa1OmHdDfMv6daA5W2a2Oy7k4lTcAUOMQrVEC puP9t5QCqjVqWR62qyZvtVISMTZ7uL0Cqabeo= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :content-type:content-transfer-encoding; b=EE0KNnt1alGEKHCyQdTa6ZJouehWpkrIREinMKtkQmmqFFMld1CVTyT1BY9zcbQblF elk4AMVeErEme4RxAdQyrhVby7venUOyxwgdyhsq2KfEWNSsHdVnso1SMIBXpUO4ww4b /j/O+AbVh7003I6OnVcNfBjBeD2As35bdYPuA= MIME-Version: 1.0 Received: by 10.100.51.8 with SMTP id y8mr1424788any.111.1307572447290; Wed, 08 Jun 2011 15:34:07 -0700 (PDT) Received: by 10.100.243.35 with HTTP; Wed, 8 Jun 2011 15:34:07 -0700 (PDT) In-Reply-To: <20110608075526.GA85577@icarus.home.lan> References: <20110608075526.GA85577@icarus.home.lan> Date: Wed, 8 Jun 2011 18:34:07 -0400 Message-ID: From: Robert Simmons To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable Subject: Re: GPT and disk alignment X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 08 Jun 2011 22:34:08 -0000 On Wed, Jun 8, 2011 at 3:55 AM, Jeremy Chadwick wrote: > On Wed, Jun 08, 2011 at 01:29:43AM -0400, Robert Simmons wrote: >> On Wed, Jun 8, 2011 at 12:29 AM, Mark Felder wrote: >> > On Tue, 07 Jun 2011 22:27:24 -0500, Robert Simmons >> > wrote: >> >> Do all HDDs that have 4KB per LBA present themselves to the OS as >> >> having 512 bytes per LBA? >> > >> > No >> >> Ok, but can I assume that all HDDs of this type expand each of the 4K >> sectors so that physically they take up the same space as eight 512 >> byte LBAs? AFAIK, the new 4K LBA has a smaller ECC area than the sum >> of 8 ECC areas in 512 byte LBAs, so if the data area was _not_ >> expanded slightly, you would never really be aligned except every x >> LBAs as the shifting approaches an LBA boundary, right? >> >> For any HDDs, do I need to worry about cylinder boundaries at all? >> Has the reported "disk geometry" become divorced from the physical >> reality in modern disks? If I do still need to worry about cylinder >> boundaries, should I basically ignore every reported geometry (BIOS, >> OS) and use what is written on the sticker on the drive? >> >> >> What about SSDs that have 1024 bytes per LBA? >> > >> > Not sure, but I do know that not all flash media have the same bytes per LBA >> > internally. Some are 1K, some 4K, some even 8K. GPT is definitely the way to >> > go if you want to make sure you're aligned.
>> >> Ok, is there some way to tell gpart(8) what the LBA size is, or do I >> have to calculate the offset of each partition manually? In Linux it >> would be "fdisk -b 1024" for the example of SSDs or "fdisk -b 4096" >> for 4K HDDs. > > I would think you'd just use "gpart -b" to specify the base offset. > For example, on an Intel 320-series SSD (which uses a NAND flash cell > size of 8192 bytes), "gpart -b 8" should end up at byte 65536 within the > flash itself. > > I'm not sure if using 8 is correct though -- that is to say, I believe > there is other space near the beginning of the drive which is used for > things like the boot loader (I don't mean boot0, I mean boot2/loader and > friends), or for the GPT loader or GPT ZFS loader. I could be wrong on > this part -- need someone to correct me. All these different loaders > and GPT support on FreeBSD seriously makes my head spin. > > Anyway back to SSDs: > > I have yet to see anyone list off all the *actual* NAND flash cell sizes > of SSDs. For example, everyone said "4KBytes" for Intel SSDs, but come > to find out it's actually 8KBytes. > > Don't confuse NAND flash cell size with NAND erase page size. They're > two different things. Multiple cells make up (fit into) a single erase > page. The alignment issue only applies to the cell part, not the erase > page size. (I just had a discussion with an end-user on Intel's forum > about this; someone had lead him to believe the erase page size was what > he should align to). > > For example, on Intel 320-series drives, the NAND erase page size is 256 > cells, thus 256*8192 = 2097152, or 2MBytes. Just a technical FYI bit > for those curious. Thanks for the info. I have an OCZ vertex 2 drive. After looking on the OCZ forums and reading reams and reams of conflicting and uninformed posts, I decided to just call OCZ's support. I asked what the flash cell size is, and what the erase page size is (the guy I talked to didn't know much of what I was talking about and had to go talk to a superior). He came back and said that he was told that he's not allowed to give out that information, but that as long as I begin the first partition at LBA 64 everything will align properly. And that subsequent partitions should start at an LBA divisible by 64. So, if I do that, won't it also be aligned properly if the flash cell size is 4K or 8K, since 32K (LBA 64) is divisible by 4 & 8? Does a 32K flash cell size sound like nonsense? Could that be the erase page size for this drive?
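For what it's worth, the arithmetic behind that advice does check out as a catch-all (plain shell arithmetic; nothing here is vendor-confirmed):

    # echo $((64 * 512))
    32768
    # echo $((32768 % 4096)) $((32768 % 8192))
    0 0

A first partition at LBA 64 starts at byte 32768, and any LBA divisible by 64 keeps that property, so the layout stays aligned whether the real cell size is 4K or 8K; whether 32K is itself a cell size, an erase-page figure, or just a safely large multiple is exactly the open question.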
From owner-freebsd-fs@FreeBSD.ORG Wed Jun 8 22:44:31 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 5085F106566B for ; Wed, 8 Jun 2011 22:44:31 +0000 (UTC) (envelope-from feld@feld.me) Received: from mwi1.coffeenet.org (mwi1.coffeenet.org [66.170.3.2]) by mx1.freebsd.org (Postfix) with ESMTP id 27B388FC08 for ; Wed, 8 Jun 2011 22:44:30 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=feld.me; s=blargle; h=In-Reply-To:Message-Id:From:Mime-Version:Date:References:Subject:To:Content-Type; bh=5g83oUex8jPlGQAkO+BAZ1+Jetu6bMAbKQWf0/+huIo=; b=DeFFL4wKwgmo7pOoSqjzlU7JvLyBuXCMvS+jS0wMfwk/2PZh6NcGJXeBrwI51Go5qnssz5w6O74syYiOqPDbVjOtQD5PyeQLdJQDKBnDyFOVS/xXdHI5Imaz45swVdDf; Received: from localhost ([127.0.0.1] helo=mwi1.coffeenet.org) by mwi1.coffeenet.org with esmtp (Exim 4.76 (FreeBSD)) (envelope-from ) id 1QURUV-000N1R-AM for freebsd-fs@freebsd.org; Wed, 08 Jun 2011 17:44:47 -0500 Received: from feld@feld.me by mwi1.coffeenet.org (Archiveopteryx 3.1.3) with esmtpsa id 1307573081-47978-47977/7/1; Wed, 8 Jun 2011 22:44:41 +0000 Content-Type: text/plain; charset=utf-8; format=flowed; delsp=yes To: freebsd-fs@freebsd.org References: <20110608075526.GA85577@icarus.home.lan> Date: Wed, 8 Jun 2011 17:43:04 -0500 Mime-Version: 1.0 From: Mark Felder Message-Id: In-Reply-To: User-Agent: Opera Mail/11.11 (FreeBSD) Subject: Re: GPT and disk alignment X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 08 Jun 2011 22:44:31 -0000 On Wed, 08 Jun 2011 17:34:07 -0500, Robert Simmons wrote: > He came back and said that he was told that he's > not allowed to give out that information, but that as long as I begin > the first partition at LBA 64 everything will align properly. I know who I won't be buying from any time soon.... From owner-freebsd-fs@FreeBSD.ORG Wed Jun 8 22:44:44 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 18844106566C for ; Wed, 8 Jun 2011 22:44:44 +0000 (UTC) (envelope-from mirror176@hotmail.com) Received: from snt0-omc4-s43.snt0.hotmail.com (snt0-omc4-s43.snt0.hotmail.com [65.54.51.94]) by mx1.freebsd.org (Postfix) with ESMTP id E01668FC08 for ; Wed, 8 Jun 2011 22:44:43 +0000 (UTC) Received: from SNT105-W34 ([65.55.90.201]) by snt0-omc4-s43.snt0.hotmail.com with Microsoft SMTPSVC(6.0.3790.4675); Wed, 8 Jun 2011 15:32:43 -0700 Message-ID: X-Originating-IP: [24.56.42.84] From: Edward Sutton To: Date: Wed, 8 Jun 2011 15:32:42 -0700 Importance: Normal MIME-Version: 1.0 X-OriginalArrivalTime: 08 Jun 2011 22:32:43.0074 (UTC) FILETIME=[FB6F8220:01CC262B] Content-Type: text/plain; charset="iso-8859-1" Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Subject: zfs mirror: 1 disk lost, corrupted other disk. 
crashes zfs tools and panics system X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 08 Jun 2011 22:44:44 -0000 As a disk started clicking but running slow, then died out, trying to reboot the system ended up in a crash early in the boot sequence; not sure if it was zfs related or not. I managed to get back most data from the pool. It ends up with problems for some filesystems created under /var. Created per instructions on http://wiki.freebsd.org/RootOnZFS/GPTZFSBoot/Mirror I then cloned the disk for further & safer play which appears as: pool: zroot state: FAULTED status: An intent log record could not be read. Waiting for adminstrator intervention to fix the faulted pool. action: Either restore the affected device(s) and run 'zpool online', or ignore the intent log records by running 'zpool clear'. see: http://www.sun.com/msg/ZFS-8000-K4 scrub: none requested config: NAME STATE READ WRITE CKSUM zroot FAULTED 0 0 0 bad intent log mirror DEGRADED 0 0 0 gpt/disk0 ONLINE 0 0 0 gpt/disk1 UNAVAIL 0 0 0 cannot open Going by my memory here: It fails to boot with "ROOT MOUNT ERROR". I had luck with `zpool clear`, removing disk1, and mounting all pool partitions. Data could be copied off but some filesystems were missing and attempts to list all snapshots segfault. Every export/import requires a zpool clear. Attempting a scrub led to a panic on every import on the pool thereafter. Tried a -current snapshot (FreeBSD-9.0-CURRENT-201105-amd64-dvd1.iso) livecd mode with similar panic results to 8.2-release. Seems the disk already imported after last restore from its dd backup. Steps I heard of to get to more data involve using zdb to locate IDs (from -dddd?) but more than just `zdb` ends in an error such as the following run. zdb zroot version=13 name='zroot' state=0 txg=2181411 pool_guid=8313451715893809405 hostid=1510368511 hostname='' vdev_tree type='root' id=0 guid=8313451715893809405 children[0] type='mirror' id=0 guid=14331193696332474730 whole_disk=0 metaslab_array=23 metaslab_shift=31 ashift=9 asize=1996098895872 is_log=0 children[0] type='disk' id=0 guid=2590809366000450414 path='/dev/gpt/disk0' whole_disk=0 DTL=29 children[1] type='disk' id=1 guid=13294596440516983442 path='/dev/gpt/disk1' whole_disk=0 DTL=631 Assertion failed: (doi.doi_type == DMU_OT_DSL_DIR (0x8 == 0xc)), file /usr/src/cddl/lib/libzpool/../../../sys/cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_dir.c, line 93. Abort (core dumped) Any suggestions to try to get the data back? If I can provide any useful details to get panics eliminated, I'd be glad to provide them but do not know how to start. I do have some crash dumps from May 25-31. Not sure which of the dumps contain useful data now or that they are complete; attempted scrub+dump on boot was going very slow so I used ctrl + C at least once.
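One avenue that may be worth trying before deeper zdb surgery, sketched here with no guarantee it survives this particular corruption: the v28 bits (mentioned in the merge announcement earlier in this digest) can import a pool read-only, which skips replaying the bad intent log and writes nothing while data is copied off. /mnt and /backup are placeholders:

    # zpool import -o readonly=on -f -R /mnt zroot
    # zpool status zroot
    # tar -C /mnt -cf - . | tar -C /backup -xf -

The -R confines the pool's mountpoints under /mnt rather than their recorded paths; if even a read-only import panics, zdb against the cloned disk really is the next stop.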
= From owner-freebsd-fs@FreeBSD.ORG Wed Jun 8 23:08:47 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id CF33B1065670 for ; Wed, 8 Jun 2011 23:08:47 +0000 (UTC) (envelope-from rsimmons0@gmail.com) Received: from mail-gx0-f182.google.com (mail-gx0-f182.google.com [209.85.161.182]) by mx1.freebsd.org (Postfix) with ESMTP id 8A9098FC12 for ; Wed, 8 Jun 2011 23:08:47 +0000 (UTC) Received: by gxk28 with SMTP id 28so650573gxk.13 for ; Wed, 08 Jun 2011 16:08:46 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:in-reply-to:references:date :message-id:subject:from:to:cc:content-type; bh=7j/uZAJZ0z4bs3ttrA4bFY6EJB9Zk32wanVg5cA7luE=; b=SPo2gJwFTBzqVxdvHSrjfyv34G9NyC19G88arg5iFJeT8ShHX9vdHVCtCpjXVkjsP2 1JYj+X1hpKX7Ag7P9TkBeZjoVbKsqf0Phra7m+y8zPArwOknYRVGBvLl7FmnI3h3t9a2 qppjKoH4np5cFaJwx6qvfSbMtctFIQtJuU1aw= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; b=xDKurOFFuj3bGJmkfx35r+O/v0MpOBzELP149p7BuEdmCK1EUoGWY9QSanJdH5x8z+ 7+IyxrXe0qVasFdxfAEr7sfmK6pjvPIRwHKZOGd9dAUMTLSwEIFJBRnd7X4iJGFTsKQd xNiJat6+rhTubKjNpUqLB+67gE6OEt5gBJyMo= MIME-Version: 1.0 Received: by 10.100.87.4 with SMTP id k4mr27413anb.67.1307574526721; Wed, 08 Jun 2011 16:08:46 -0700 (PDT) Received: by 10.100.243.35 with HTTP; Wed, 8 Jun 2011 16:08:46 -0700 (PDT) In-Reply-To: References: <20110608075526.GA85577@icarus.home.lan> Date: Wed, 8 Jun 2011 19:08:46 -0400 Message-ID: From: Robert Simmons To: Mark Felder Content-Type: text/plain; charset=ISO-8859-1 Cc: freebsd-fs@freebsd.org Subject: Re: GPT and disk alignment X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 08 Jun 2011 23:08:47 -0000 On Wed, Jun 8, 2011 at 6:43 PM, Mark Felder wrote: > I know who I won't be buying from any time soon.... Too late for me, but again: if I align the first partition to 64 and make sure that all subsequent partitions are aligned to a sector divisible by 64, I will kill all birds with one stone, no? From owner-freebsd-fs@FreeBSD.ORG Thu Jun 9 02:03:04 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: by hub.freebsd.org (Postfix, from userid 821) id CE6D91065672; Thu, 9 Jun 2011 02:03:04 +0000 (UTC) Date: Thu, 9 Jun 2011 02:03:04 +0000 From: John To: freebsd-fs@freebsd.org Message-ID: <20110609020304.GA3986@FreeBSD.org> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline User-Agent: Mutt/1.4.2.1i Subject: New NFS server stress test hang X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 09 Jun 2011 02:03:04 -0000 Hi, We've been running some stress tests of the new nfs server. 
The system is at r222531 (head), 9 clients, two mounts each to the server: mount_nfs -o udp,nfsv3,rsize=32768,wsize=32768,noatime,nolockd,acregmin=1,acregmax=2,acdirmin=1,acdirmax=2,negnametimeo=2 ${servera}:/vol/datsrc /c/$servera/vol/datsrc mount_nfs -o udp,nfsv3,rsize=32768,wsize=32768,noatime,nolockd,acregmin=1,acregmax=2,acdirmin=1,acdirmax=2,negnametimeo=0 ${servera}:/vol/datgen /c/$servera/vol/datgen The system is still up & responsive, simply no nfs services are working. All (200) threads appear to be active, but not doing anything. The debugger is not compiled into this kernel. We can run any other tracing commands desired. We can also rebuild the kernel with the debugger enabled for any kernel debugging needed. While things are running correctly, sysctl & top will for instance show the following for nfsd (threads collapsed): vfs.nfsd.minthreads: 4 vfs.nfsd.maxthreads: 200 vfs.nfsd.threads: 60 vfs.nfsrv.minthreads: 1 vfs.nfsrv.maxthreads: 200 vfs.nfsrv.threads: 0 last pid: 35073; load averages: 6.74, 4.94, 4.56 up 6+22:17:25 16:16:25 111 processes: 13 running, 98 sleeping Mem: 18M Active, 1048M Inact, 64G Wired, 8652K Cache, 9837M Buf, 28G Free PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND 2049 root 61 49 0 10052K 1608K CPU2 0 49:43 1116.70% nfsd Please let us know what we can do to help debug this. Thanks! John The output of the following commands is below: uname -a top -d 1 -b head -n 7 /usr/src/.svn/entries sysctl -a | grep nfsd sysctl -a | grep nfs | grep -v nfsd nfsstat -sW ps -auxww netstat -i # All nfs data traffic is via 10G chelsio cards. Amusing thing to note is the negative numbers in the nfsstat output :-) FreeBSD bb99za2a.unx.sas.com 9.0-CURRENT FreeBSD 9.0-CURRENT #6: Wed Jun 1 14:50:21 EDT 2011 maint1@bb99za2a.unx.sas.com:/usr/obj/usr/src/sys/ZFS amd64 last pid: 53625; load averages: 0.15, 0.07, 0.02 up 7+22:02:05 16:01:05 251 processes: 1 running, 250 sleeping Mem: 3584K Active, 1066M Inact, 87G Wired, 5844K Cache, 9837M Buf, 5426M Free Swap: PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND 2049 root 200 52 0 10052K 3472K nfsrc 1 102:27 0.00% nfsd 22696 root 1 20 0 18660K 1260K select 0 2:21 0.00% bwm-ng 2373 maint1 1 20 0 68140K 3776K select 1 0:29 0.00% sshd 22683 root 1 20 0 12184K 736K select 6 0:13 0.00% rlogind 16215 maint1 1 20 0 68140K 3296K select 11 0:08 0.00% sshd 2219 root 1 20 0 20508K 1732K select 6 0:05 0.00% sendmail 2230 root 1 20 0 14260K 672K nanslp 6 0:02 0.00% cron 1919 root 1 20 0 12312K 680K select 8 0:02 0.00% syslogd 1680 root 1 20 0 6276K 360K select 2 0:01 0.00% devd 2039 root 1 20 0 12308K 728K select 8 0:01 0.00% mountd 1943 root 1 20 0 14392K 724K select 0 0:00 0.00% rpcbind 2448 maint1 1 20 0 68140K 2200K select 3 0:00 0.00% sshd 2223 smmsp 1 20 0 20508K 1388K pause 3 0:00 0.00% sendmail 16220 root 1 20 0 17664K 3004K pause 1 0:00 0.00% csh 2378 root 1 20 0 17664K 1376K ttyin 2 0:00 0.00% csh 16219 maint1 1 27 0 41428K 1176K wait 1 0:00 0.00% su 2283 root 1 20 0 16344K 644K select 7 0:00 0.00% inetd 17046 root 1 20 0 17664K 2076K ttyin 7 0:00 0.00% csh 10 dir 222531 svn://svn.freebsd.org/base/head svn://svn.freebsd.org/base kern.features.nfsd: 1 vfs.nfsd.server_max_nfsvers: 4 vfs.nfsd.server_min_nfsvers: 2 vfs.nfsd.nfs_privport: 0 vfs.nfsd.enable_locallocks: 0 vfs.nfsd.issue_delegations: 0 vfs.nfsd.commit_miss: 0 vfs.nfsd.commit_blks: 17396119 vfs.nfsd.mirrormnt: 1 vfs.nfsd.minthreads: 4 vfs.nfsd.maxthreads: 200 vfs.nfsd.threads: 200 vfs.nfsd.request_space_used: 632932 vfs.nfsd.request_space_used_highest: 
1044128 vfs.nfsd.request_space_high: 47185920 vfs.nfsd.request_space_low: 31457280 vfs.nfsd.request_space_throttled: 0 vfs.nfsd.request_space_throttle_count: 0 vfs.nfsrv.fha.max_nfsds_per_fh: 8 vfs.nfsrv.fha.max_reqs_per_nfsd: 4 kern.features.nfscl: 1 kern.features.nfsserver: 1 vfs.nfs.downdelayinitial: 12 vfs.nfs.downdelayinterval: 30 vfs.nfs.keytab_enctype: 1 vfs.nfs.skip_wcc_data_onerr: 1 vfs.nfs.nfs3_jukebox_delay: 10 vfs.nfs.reconnects: 0 vfs.nfs.bufpackets: 4 vfs.nfs.callback_addr: vfs.nfs.realign_count: 0 vfs.nfs.realign_test: 0 vfs.nfs.nfs_directio_allow_mmap: 1 vfs.nfs.nfs_directio_enable: 0 vfs.nfs.clean_pages_on_close: 1 vfs.nfs.commit_on_close: 0 vfs.nfs.prime_access_cache: 0 vfs.nfs.access_cache_timeout: 60 vfs.nfs.diskless_rootpath: vfs.nfs.diskless_valid: 0 vfs.nfs.nfs_ip_paranoia: 1 vfs.nfs.defect: 0 vfs.nfs.iodmax: 20 vfs.nfs.iodmin: 0 vfs.nfs.iodmaxidle: 120 vfs.acl_nfs4_old_semantics: 0 vfs.nfs_common.realign_count: 0 vfs.nfs_common.realign_test: 0 vfs.nfsrv.nfs_privport: 0 vfs.nfsrv.fha.bin_shift: 18 vfs.nfsrv.fha.fhe_stats: No file handle entries. vfs.nfsrv.commit_miss: 0 vfs.nfsrv.commit_blks: 0 vfs.nfsrv.async: 0 vfs.nfsrv.gatherdelay_v3: 0 vfs.nfsrv.gatherdelay: 10000 vfs.nfsrv.minthreads: 1 vfs.nfsrv.maxthreads: 200 vfs.nfsrv.threads: 0 vfs.nfsrv.request_space_used: 0 vfs.nfsrv.request_space_used_highest: 0 vfs.nfsrv.request_space_high: 47185920 vfs.nfsrv.request_space_low: 31457280 vfs.nfsrv.request_space_throttled: 0 vfs.nfsrv.request_space_throttle_count: 0 Server Info: Getattr Setattr Lookup Readlink Read Write Create Remove 0 0 4859875 16546194 0 0 0 0 Rename Link Symlink Mkdir Rmdir Readdir RdirPlus Access 0 -1523364522 0 990131252 0 0 0 0 Mknod Fsstat Fsinfo PathConf Commit 0 0 0 0 0 Server Ret-Failed 0 Server Faults 0 Server Cache Stats: Inprog Idem Non-idem Misses 189710 0 154619 -14704992 Server Write Gathering: WriteOps WriteRPC Opsaved 0 0 0 USER PID %CPU %MEM VSZ RSS TT STAT STARTED TIME COMMAND root 11 1180.6 0.0 0 192 ?? RL 1Jun11 130918:59.20 [idle] root 0 0.0 0.0 0 5488 ?? DLs 1Jun11 476:54.70 [kernel] root 1 0.0 0.0 6276 136 ?? ILs 1Jun11 0:00.03 /sbin/init -- root 2 0.0 0.0 0 16 ?? DL 1Jun11 0:00.00 [ciss_notify0] root 3 0.0 0.0 0 16 ?? DL 1Jun11 0:00.00 [ciss_notify1] root 4 0.0 0.0 0 16 ?? DL 1Jun11 0:00.00 [ciss_notify2] root 5 0.0 0.0 0 16 ?? DL 1Jun11 0:00.00 [sctp_iterator] root 6 0.0 0.0 0 16 ?? DL 1Jun11 0:00.00 [xpt_thrd] root 7 0.0 0.0 0 16 ?? DL 1Jun11 0:12.17 [g_mp_kt] root 8 0.0 0.0 0 16 ?? DL 1Jun11 0:22.25 [pagedaemon] root 9 0.0 0.0 0 16 ?? DL 1Jun11 0:00.00 [vmdaemon] root 10 0.0 0.0 0 16 ?? DL 1Jun11 0:00.00 [audit] root 12 0.0 0.0 0 656 ?? WL 1Jun11 208:26.93 [intr] root 13 0.0 0.0 0 48 ?? DL 1Jun11 35:45.18 [geom] root 14 0.0 0.0 0 16 ?? DL 1Jun11 2:29.63 [yarrow] root 15 0.0 0.0 0 384 ?? DL 1Jun11 0:12.44 [usb] root 16 0.0 0.0 0 16 ?? DL 1Jun11 0:02.43 [acpi_thermal] root 17 0.0 0.0 0 16 ?? DL 1Jun11 0:00.25 [acpi_cooling0] root 18 0.0 0.0 0 16 ?? DL 1Jun11 0:00.00 [pagezero] root 19 0.0 0.0 0 16 ?? DL 1Jun11 0:01.48 [bufdaemon] root 20 0.0 0.0 0 16 ?? DL 1Jun11 51:24.22 [syncer] root 21 0.0 0.0 0 16 ?? DL 1Jun11 0:02.15 [vnlru] root 22 0.0 0.0 0 16 ?? DL 1Jun11 0:31.61 [softdepflush] root 1624 0.0 0.0 14364 324 ?? Is 1Jun11 0:00.00 /usr/sbin/moused -p /dev/ums0 -t auto -I /var/run/moused.ums0.pid root 1648 0.0 0.0 14364 512 ?? Is 1Jun11 0:00.00 /usr/sbin/moused -p /dev/ums1 -t auto -I /var/run/moused.ums1.pid root 1680 0.0 0.0 6276 360 ?? Is 1Jun11 0:00.90 /sbin/devd root 1919 0.0 0.0 12312 680 ?? 
Is 1Jun11 0:01.56 /usr/sbin/syslogd -s root 1943 0.0 0.0 14392 724 ?? Is 1Jun11 0:00.32 /usr/sbin/rpcbind root 2039 0.0 0.0 12308 728 ?? Is 1Jun11 0:00.58 /usr/sbin/mountd /etc/exports /etc/zfs/exports root 2048 0.0 0.0 10052 340 ?? Is 1Jun11 0:00.02 nfsd: master (nfsd) root 2049 0.0 0.0 10052 3472 ?? D 1Jun11 4953:44.73 nfsd: server (nfsd) root 2211 0.0 0.0 47000 1600 ?? Is 1Jun11 0:00.00 /usr/sbin/sshd root 2219 0.0 0.0 20508 1732 ?? Ss 1Jun11 0:05.04 sendmail: accepting connections (sendmail) smmsp 2223 0.0 0.0 20508 1388 ?? Is 1Jun11 0:00.12 sendmail: Queue runner@00:30:00 for /var/spool/clientmqueue (sendmail) root 2230 0.0 0.0 14260 672 ?? Ss 1Jun11 0:02.44 /usr/sbin/cron -s root 2283 0.0 0.0 16344 644 ?? Is 1Jun11 0:00.03 /usr/sbin/inetd -wW -C 60 root 2371 0.0 0.0 68140 1444 ?? Is 1Jun11 0:00.02 sshd: maint1 [priv] (sshd) maint1 2373 0.0 0.0 68140 3776 ?? I 1Jun11 0:29.10 sshd: maint1@pts/0 (sshd) root 2383 0.0 0.0 0 128 ?? DL 1Jun11 60:18.89 [zfskern] root 2446 0.0 0.0 68140 1460 ?? Is 1Jun11 0:00.01 sshd: maint1 [priv] (sshd) maint1 2448 0.0 0.0 68140 2200 ?? I 1Jun11 0:00.25 sshd: maint1@pts/2 (sshd) root 16213 0.0 0.0 68140 2900 ?? Is Thu04PM 0:00.01 sshd: maint1 [priv] (sshd) maint1 16215 0.0 0.0 68140 3296 ?? S Thu04PM 0:07.96 sshd: maint1@pts/1 (sshd) root 22683 0.0 0.0 12184 736 ?? Ss Sat05PM 0:13.37 rlogind root 33240 0.0 0.0 68140 2740 ?? Is Wed12PM 0:00.01 sshd: maint1 [priv] (sshd) maint1 33242 0.0 0.0 68140 2780 ?? I Wed12PM 0:00.00 sshd: maint1@pts/4 (sshd) root 33279 0.0 0.0 0 16 ?? DL Wed12PM 36:13.14 [fct0-worker] root 33281 0.0 0.0 0 16 ?? DL Wed12PM 2:09.48 [fct1-worker] root 33283 0.0 0.0 0 16 ?? DL Wed12PM 2:05.68 [fioa-data-groom] root 33284 0.0 0.0 0 16 ?? DL Wed12PM 10:48.29 [fio0-bio-submit] root 33285 0.0 0.0 0 16 ?? DL Wed12PM 0:27.01 [fiob-data-groom] root 33286 0.0 0.0 0 16 ?? DL Wed12PM 0:03.72 [fio1-bio-submit] root 33689 0.0 0.0 0 16 ?? DL Wed12PM 0:00.00 [md0] root 33691 0.0 0.0 0 16 ?? DL Wed12PM 0:00.00 [md1] root 33693 0.0 0.0 0 16 ?? DL Wed12PM 0:00.00 [md2] root 33695 0.0 0.0 0 16 ?? DL Wed12PM 0:00.00 [md3] root 35749 0.0 0.0 12184 572 ?? Is 5:05PM 0:00.01 rlogind root 52810 0.0 0.0 12184 724 ?? 
Is 1:18PM 0:00.00 rlogind root 2326 0.0 0.0 41300 984 v0 Is 1Jun11 0:00.01 login [pam] (login) root 34215 0.0 0.0 17664 2076 v0 I+ Wed01PM 0:00.01 -csh (csh) root 2327 0.0 0.0 12184 300 v1 Is+ 1Jun11 0:00.00 /usr/libexec/getty Pc ttyv1 root 2328 0.0 0.0 12184 300 v2 Is+ 1Jun11 0:00.00 /usr/libexec/getty Pc ttyv2 root 2329 0.0 0.0 12184 300 v3 Is+ 1Jun11 0:00.00 /usr/libexec/getty Pc ttyv3 root 2330 0.0 0.0 12184 300 v4 Is+ 1Jun11 0:00.00 /usr/libexec/getty Pc ttyv4 root 2331 0.0 0.0 12184 300 v5 Is+ 1Jun11 0:00.00 /usr/libexec/getty Pc ttyv5 root 2332 0.0 0.0 12184 300 v6 Is+ 1Jun11 0:00.00 /usr/libexec/getty Pc ttyv6 root 2333 0.0 0.0 12184 300 v7 Is+ 1Jun11 0:00.00 /usr/libexec/getty Pc ttyv7 maint1 2374 0.0 0.0 14636 384 0 Is 1Jun11 0:00.00 -sh (sh) root 2377 0.0 0.0 41428 568 0 I 1Jun11 0:00.00 su root 2378 0.0 0.0 17664 1376 0 I+ 1Jun11 0:00.04 _su (csh) maint1 16216 0.0 0.0 14636 888 1 Is Thu04PM 0:00.00 -sh (sh) root 16219 0.0 0.0 41428 1176 1 I Thu04PM 0:00.04 su root 16220 0.0 0.0 17664 3004 1 S Thu04PM 0:00.09 _su (csh) root 53623 0.0 0.0 14636 1640 1 S+ 4:01PM 0:00.00 /bin/sh ./nfsdebug.sh root 53633 0.0 0.0 14328 1304 1 R+ 4:01PM 0:00.00 ps -auxww maint1 2449 0.0 0.0 14636 636 2 Is 1Jun11 0:00.01 -sh (sh) root 17045 0.0 0.0 41428 1172 2 I Thu05PM 0:00.00 su root 17046 0.0 0.0 17664 2076 2 I+ Thu05PM 0:00.03 _su (csh) root 22684 0.0 0.0 41428 1240 3 Is Sat05PM 0:00.00 login [pam] (login) root 22685 0.0 0.0 17664 1420 3 I Sat05PM 0:00.02 -csh (csh) root 22696 0.0 0.0 18660 1260 3 S+ Sat05PM 2:20.85 bwm-ng maint1 33243 0.0 0.0 14636 880 4 Is+ Wed12PM 0:00.00 -sh (sh) root 35750 0.0 0.0 41428 984 5 Is 5:05PM 0:00.00 login [pam] (login) root 35751 0.0 0.0 17664 1320 5 I+ 5:05PM 0:00.01 -csh (csh) root 52811 0.0 0.0 41428 1152 6 Is 1:18PM 0:00.00 login [pam] (login) root 52812 0.0 0.0 17664 1820 6 I+ 1:18PM 0:00.01 -csh (csh) # netstat -i Name Mtu Network Address Ipkts Ierrs Idrop Opkts Oerrs Coll bce0 1500 00:10:18:8d:d0:a4 18340277 26 0 2512640 0 0 bce0 1500 10.24.0.0 bb99za2a 12939843 - - 2511543 - - bce0 1500 fe80::210:18f fe80::210:18ff:fe 0 - - 3 - - bce1* 1500 00:10:18:8d:d0:a6 0 0 0 0 0 0 cxgb0 9000 00:07:43:07:33:f8 4464851870 0 0 4378199683 0 0 cxgb0 9000 172.21.21.0 172.21.21.83 4464472961 - - 4378064187 - - cxgb0 9000 fe80::207:43f fe80::207:43ff:fe 0 - - 3 - - cxgb1 1500 00:07:43:07:33:f9 0 0 0 0 0 0 usbus 0 0 0 0 0 0 0 usbus 0 0 0 0 0 0 0 usbus 0 0 0 0 0 0 0 usbus 0 0 0 0 0 0 0 usbus 0 0 0 0 0 0 0 usbus 0 0 0 0 0 0 0 lo0 16384 701 0 0 701 0 0 lo0 16384 your-net localhost 645 - - 645 - - lo0 16384 localhost ::1 56 - - 56 - - lo0 16384 fe80::1%lo0 fe80::1 0 - - 0 - - From owner-freebsd-fs@FreeBSD.ORG Thu Jun 9 02:40:41 2011 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id B0A231065674; Thu, 9 Jun 2011 02:40:41 +0000 (UTC) (envelope-from linimon@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 8874A8FC20; Thu, 9 Jun 2011 02:40:41 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id p592efPA021840; Thu, 9 Jun 2011 02:40:41 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id p592efaw021828; Thu, 9 Jun 2011 02:40:41 GMT (envelope-from linimon) Date: Thu, 9 Jun 2011 02:40:41 GMT Message-Id: 
<201106090240.p592efaw021828@freefall.freebsd.org>
To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org
From: linimon@FreeBSD.org
Cc:
Subject: Re: kern/157684: [nfs] NFSv4 ignoring "-ro" option in exports file
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: Filesystems
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Thu, 09 Jun 2011 02:40:41 -0000

Old Synopsis: NFSv4 ignoring "-ro" option in exports file
New Synopsis: [nfs] NFSv4 ignoring "-ro" option in exports file

Responsible-Changed-From-To: freebsd-bugs->freebsd-fs
Responsible-Changed-By: linimon
Responsible-Changed-When: Thu Jun 9 02:40:15 UTC 2011
Responsible-Changed-Why: Fix synopsis and assign.

http://www.freebsd.org/cgi/query-pr.cgi?pr=157684

From owner-freebsd-fs@FreeBSD.ORG Thu Jun 9 09:41:25 2011
Return-Path:
Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 4CF32106566C for ; Thu, 9 Jun 2011 09:41:25 +0000 (UTC) (envelope-from daniel@digsys.bg)
Received: from smtp-sofia.digsys.bg (smtp-sofia.digsys.bg [193.68.3.230]) by mx1.freebsd.org (Postfix) with ESMTP id CF6C88FC14 for ; Thu, 9 Jun 2011 09:41:24 +0000 (UTC)
Received: from dcave.digsys.bg (dcave.digsys.bg [192.92.129.5]) (authenticated bits=0) by smtp-sofia.digsys.bg (8.14.4/8.14.4) with ESMTP id p599fEdg096304 (version=TLSv1/SSLv3 cipher=DHE-RSA-CAMELLIA256-SHA bits=256 verify=NO) for ; Thu, 9 Jun 2011 12:41:20 +0300 (EEST) (envelope-from daniel@digsys.bg)
Message-ID: <4DF0953A.9030002@digsys.bg>
Date: Thu, 09 Jun 2011 12:41:14 +0300
From: Daniel Kalchev
User-Agent: Mozilla/5.0 (X11; U; FreeBSD amd64; en-US; rv:1.9.2.17) Gecko/20110519 Thunderbird/3.1.10
MIME-Version: 1.0
To: freebsd-fs@freebsd.org
References:
In-Reply-To:
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Subject: Re: zfs mirror: 1 disk lost, corrupted other disk. crashes zfs tools and panics system
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: Filesystems
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Thu, 09 Jun 2011 09:41:25 -0000

On 09.06.11 01:32, Edward Sutton wrote:
[...]
>   pool: zroot
>  state: FAULTED
> status: An intent log record could not be read.
>         Waiting for adminstrator intervention to fix the faulted pool.
> action: Either restore the affected device(s) and run 'zpool online',
>         or ignore the intent log records by running 'zpool clear'.
>    see: http://www.sun.com/msg/ZFS-8000-K4
>  scrub: none requested
> config:
>
>         NAME           STATE     READ WRITE CKSUM
>         zroot          FAULTED      0     0     0  bad intent log
>           mirror       DEGRADED     0     0     0
>             gpt/disk0  ONLINE       0     0     0
>             gpt/disk1  UNAVAIL      0     0     0  cannot open
>
> Going by my memory here: It fails to boot with "ROOT MOUNT ERROR". I had luck with `zpool clear`, removing disk1, and mounting all pool partitions. Data could be copied off, but some filesystems were missing, and attempts to list all snapshots segfault. Every export/import requires a zpool clear. Attempting a scrub led to a panic on every import on the pool thereafter. Tried a -current snapshot (FreeBSD-9.0-CURRENT-201105-amd64-dvd1.iso) livecd mode with similar panic results to 8.2-release.
>

Perhaps you should try to use zpool import -F zroot from a ZFS v28 system, such as the FreeBSD-9.0-CURRENT you mentioned?
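For example, booted from that 9.0-CURRENT livecd (a minimal sketch, not from the original thread: the pool name zroot is taken from the status output above, and -n is the dry-run form of -F that only reports whether a rewind to an earlier consistent transaction group would succeed):

# zpool import -F -n zroot
# zpool import -F zroot
# zpool status -v zroot

Keep in mind that -F recovers by discarding the most recent transactions, so the last few seconds of writes before the failure may be lost.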
Daniel

From owner-freebsd-fs@FreeBSD.ORG Thu Jun 9 09:44:46 2011
Return-Path:
Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 768D81065670 for ; Thu, 9 Jun 2011 09:44:46 +0000 (UTC) (envelope-from bu7cher@yandex.ru)
Received: from forward5.mail.yandex.net (forward5.mail.yandex.net [77.88.46.21]) by mx1.freebsd.org (Postfix) with ESMTP id 036A58FC1C for ; Thu, 9 Jun 2011 09:44:45 +0000 (UTC)
Received: from smtp3.mail.yandex.net (smtp3.mail.yandex.net [77.88.46.103]) by forward5.mail.yandex.net (Yandex) with ESMTP id DDB7F1203BB2; Thu, 9 Jun 2011 13:29:04 +0400 (MSD)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yandex.ru; s=mail; t=1307611744; bh=LcXgw5G8YWQpUomfZLXyxVAkfxJZe0zcQ8400QAFuLI=; h=Message-ID:Date:From:MIME-Version:To:CC:Subject:References:In-Reply-To:Content-Type; b=CT2h3grGJ9LC4FglGtyyMqOkuJs0RfQtTwKRvXB3w2c8oyrRwvqz5ilaPv7BgxK0OUEDhObtP4sucpeHeWVXHzKXXtat9QHXUTrd3xu5Vh+mDZvKoomt9ejm2cJ1uzxfALWI080VUw+Lk2ES9jqQn/s+x1nUfQ6lP1JZ/fx0YmY=
Received: from [127.0.0.1] (ns.kirov.so-ups.ru [77.72.136.145]) by smtp3.mail.yandex.net (Yandex) with ESMTPSA id 9ACAF6980066; Thu, 9 Jun 2011 13:29:04 +0400 (MSD)
Message-ID: <4DF0925C.5050705@yandex.ru>
Date: Thu, 09 Jun 2011 13:29:00 +0400
From: "Andrey V. Elsukov"
User-Agent: Mozilla Thunderbird 1.5 (FreeBSD/20051231)
MIME-Version: 1.0
To: Robert Simmons
References:
In-Reply-To:
X-Enigmail-Version: 1.1.1
Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="------------enig5F660C16768F851386837BF5"
X-Yandex-Spam: 1
Cc: freebsd-fs@freebsd.org
Subject: Re: GPT and disk alignment
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: Filesystems
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Thu, 09 Jun 2011 09:44:46 -0000

On 08.06.2011 7:27, Robert Simmons wrote:
> Do all HDDs that have 4KB per LBA present themselves to the OS as
> having 512 bytes per LBA?

Recently I added an automatic alignment feature to gpart(8) in head/. I plan to merge it to stable/8 in the next week. Also there were some changes in the ada(4) driver, and now you can add a quirk for your drive to let the system know it is a 4k drive.

The easiest way to know whether your partition is aligned or not is to use diskinfo(8). Example:

# diskinfo -v ada0p3
ada0p3
        512             # sectorsize
        75731098112     # mediasize in bytes (70G)
        147912301       # mediasize in sectors
        4096            # stripesize
        1024            # stripeoffset
        146738          # Cylinders according to firmware.
        16              # Heads according to firmware.
        63              # Sectors according to firmware.
        S0DEJ1NL817767  # Disk ident.

If `echo $stripeoffset % 4096 | bc` is not zero, your partition is not aligned (where $stripeoffset is the value from the diskinfo(8) output).

About gpart(8). The current implementation of some partitioning schemes does not allow arbitrary offset values for partitions (MBR, EBR, VTOC8), but GPT does. So, for GPT, to align a new partition you can use this command:

# gpart add -t freebsd-zfs -s 10G -a 4k ada0

and gpart will try to align it to 4k boundaries.

About ada(4) quirks.
You can add to your loader.conf this line:

kern.cam.ada.0.quirks="1"

After reboot your disk will report a 4k stripesize:

# geom disk list ada0
Geom name: ada0
Providers:
1. Name: ada0
   Mediasize: 80026361856 (74G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r2w2e4
   descr: SAMSUNG HD080HJ/P
   ident: S0DEJ1NL817767
   fwsectors: 63
   fwheads: 16

And gpart(8) will use this information for automatic alignment. But this will work only in 9.0-CURRENT.

-- 
WBR, Andrey V. Elsukov

From owner-freebsd-fs@FreeBSD.ORG Thu Jun 9 12:59:05 2011
Return-Path:
Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 4CA47106566B for ; Thu, 9 Jun 2011 12:59:05 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca)
Received: from esa-annu.mail.uoguelph.ca (esa-annu.mail.uoguelph.ca [131.104.91.36]) by mx1.freebsd.org (Postfix) with ESMTP id BE3128FC14 for ; Thu, 9 Jun 2011 12:59:04 +0000 (UTC)
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: Ap8EALnC8E2DaFvO/2dsb2JhbABOAQSESaJotlGRCIErggkBgWSBCgSRKo9u
X-IronPort-AV: E=Sophos;i="4.65,341,1304308800"; d="scan'208";a="123439710"
Received: from erie.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.206]) by esa-annu-pri.mail.uoguelph.ca with ESMTP; 09 Jun 2011 08:59:03 -0400
Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id 87342B3E96; Thu, 9 Jun 2011 08:59:03 -0400 (EDT)
Date: Thu, 9 Jun 2011 08:59:03 -0400 (EDT)
From: Rick Macklem
To: John
Message-ID: <795803957.322936.1307624343538.JavaMail.root@erie.cs.uoguelph.ca>
In-Reply-To: <20110609020304.GA3986@FreeBSD.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
X-Originating-IP: [172.17.91.202]
X-Mailer: Zimbra 6.0.10_GA_2692 (ZimbraWebClient - IE7 (Win)/6.0.10_GA_2692)
Cc: freebsd-fs@freebsd.org
Subject: Re: New NFS server stress test hang
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: Filesystems
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Thu, 09 Jun 2011 12:59:05 -0000

John De wrote:
> Hi,
>
> We've been running some stress tests of the new nfs server.
> The system is at r222531 (head), 9 clients, two mounts each
> to the server:
>
> mount_nfs -o
> udp,nfsv3,rsize=32768,wsize=32768,noatime,nolockd,acregmin=1,acregmax=2,acdirmin=1,acdirmax=2,negnametimeo=2
> ${servera}:/vol/datsrc /c/$servera/vol/datsrc
> mount_nfs -o
> udp,nfsv3,rsize=32768,wsize=32768,noatime,nolockd,acregmin=1,acregmax=2,acdirmin=1,acdirmax=2,negnametimeo=0
> ${servera}:/vol/datgen /c/$servera/vol/datgen
>
>
> The system is still up & responsive, simply no nfs services
> are working.
All (200) threads appear to be active, but not > doing anything. The debugger is not compiled into this kernel. > We can run any other tracing commands desired. We can also > rebuild the kernel with the debugger enabled for any kernel > debugging needed. > > > While things are running correctly, sysctl & top will for > instance show the following for nfsd (threads collapsed): > > vfs.nfsd.minthreads: 4 > vfs.nfsd.maxthreads: 200 > vfs.nfsd.threads: 60 > vfs.nfsrv.minthreads: 1 > vfs.nfsrv.maxthreads: 200 > vfs.nfsrv.threads: 0 > last pid: 35073; load averages: 6.74, 4.94, 4.56 up 6+22:17:25 > 16:16:25 > 111 processes: 13 running, 98 sleeping > Mem: 18M Active, 1048M Inact, 64G Wired, 8652K Cache, 9837M Buf, 28G > Free > > PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND > 2049 root 61 49 0 10052K 1608K CPU2 0 49:43 1116.70% nfsd > > > Please let us know what we can do to help debug this. > > Thanks! > John > > > The output of the following commands is below: > > > uname -a > top -d 1 -b > head -n 7 /usr/src/.svn/entries > sysctl -a | grep nfsd > sysctl -a | grep nfs | grep -v nfsd > nfsstat -sW > ps -auxww > netstat -i # All nfs data traffic is via 10G chelsio cards. > > > Amusing thing to note is the negative numbers in the nfsstat > output :-) > > > FreeBSD bb99za2a.unx.sas.com 9.0-CURRENT FreeBSD 9.0-CURRENT #6: Wed > Jun 1 14:50:21 EDT 2011 > maint1@bb99za2a.unx.sas.com:/usr/obj/usr/src/sys/ZFS amd64 > last pid: 53625; load averages: 0.15, 0.07, 0.02 up 7+22:02:05 > 16:01:05 > 251 processes: 1 running, 250 sleeping > > Mem: 3584K Active, 1066M Inact, 87G Wired, 5844K Cache, 9837M Buf, > 5426M Free > Swap: > > > PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND > > > 2049 root 200 52 0 10052K 3472K nfsrc 1 102:27 0.00% nfsd > 22696 root 1 20 0 18660K 1260K select 0 2:21 0.00% bwm-ng > 2373 maint1 1 20 0 68140K 3776K select 1 0:29 0.00% sshd > 22683 root 1 20 0 12184K 736K select 6 0:13 0.00% rlogind > 16215 maint1 1 20 0 68140K 3296K select 11 0:08 0.00% sshd > 2219 root 1 20 0 20508K 1732K select 6 0:05 0.00% sendmail > 2230 root 1 20 0 14260K 672K nanslp 6 0:02 0.00% cron > 1919 root 1 20 0 12312K 680K select 8 0:02 0.00% syslogd > 1680 root 1 20 0 6276K 360K select 2 0:01 0.00% devd > 2039 root 1 20 0 12308K 728K select 8 0:01 0.00% mountd > 1943 root 1 20 0 14392K 724K select 0 0:00 0.00% rpcbind > 2448 maint1 1 20 0 68140K 2200K select 3 0:00 0.00% sshd > 2223 smmsp 1 20 0 20508K 1388K pause 3 0:00 0.00% sendmail > 16220 root 1 20 0 17664K 3004K pause 1 0:00 0.00% csh > 2378 root 1 20 0 17664K 1376K ttyin 2 0:00 0.00% csh > 16219 maint1 1 27 0 41428K 1176K wait 1 0:00 0.00% su > 2283 root 1 20 0 16344K 644K select 7 0:00 0.00% inetd > 17046 root 1 20 0 17664K 2076K ttyin 7 0:00 0.00% csh > > 10 > > dir > 222531 > svn://svn.freebsd.org/base/head > svn://svn.freebsd.org/base > > kern.features.nfsd: 1 > vfs.nfsd.server_max_nfsvers: 4 > vfs.nfsd.server_min_nfsvers: 2 > vfs.nfsd.nfs_privport: 0 > vfs.nfsd.enable_locallocks: 0 > vfs.nfsd.issue_delegations: 0 > vfs.nfsd.commit_miss: 0 > vfs.nfsd.commit_blks: 17396119 > vfs.nfsd.mirrormnt: 1 > vfs.nfsd.minthreads: 4 > vfs.nfsd.maxthreads: 200 > vfs.nfsd.threads: 200 > vfs.nfsd.request_space_used: 632932 > vfs.nfsd.request_space_used_highest: 1044128 > vfs.nfsd.request_space_high: 47185920 > vfs.nfsd.request_space_low: 31457280 > vfs.nfsd.request_space_throttled: 0 > vfs.nfsd.request_space_throttle_count: 0 > vfs.nfsrv.fha.max_nfsds_per_fh: 8 > vfs.nfsrv.fha.max_reqs_per_nfsd: 4 > kern.features.nfscl: 1 > 
kern.features.nfsserver: 1 > vfs.nfs.downdelayinitial: 12 > vfs.nfs.downdelayinterval: 30 > vfs.nfs.keytab_enctype: 1 > vfs.nfs.skip_wcc_data_onerr: 1 > vfs.nfs.nfs3_jukebox_delay: 10 > vfs.nfs.reconnects: 0 > vfs.nfs.bufpackets: 4 > vfs.nfs.callback_addr: > vfs.nfs.realign_count: 0 > vfs.nfs.realign_test: 0 > vfs.nfs.nfs_directio_allow_mmap: 1 > vfs.nfs.nfs_directio_enable: 0 > vfs.nfs.clean_pages_on_close: 1 > vfs.nfs.commit_on_close: 0 > vfs.nfs.prime_access_cache: 0 > vfs.nfs.access_cache_timeout: 60 > vfs.nfs.diskless_rootpath: > vfs.nfs.diskless_valid: 0 > vfs.nfs.nfs_ip_paranoia: 1 > vfs.nfs.defect: 0 > vfs.nfs.iodmax: 20 > vfs.nfs.iodmin: 0 > vfs.nfs.iodmaxidle: 120 > vfs.acl_nfs4_old_semantics: 0 > vfs.nfs_common.realign_count: 0 > vfs.nfs_common.realign_test: 0 > vfs.nfsrv.nfs_privport: 0 > vfs.nfsrv.fha.bin_shift: 18 > vfs.nfsrv.fha.fhe_stats: No file handle entries. > vfs.nfsrv.commit_miss: 0 > vfs.nfsrv.commit_blks: 0 > vfs.nfsrv.async: 0 > vfs.nfsrv.gatherdelay_v3: 0 > vfs.nfsrv.gatherdelay: 10000 > vfs.nfsrv.minthreads: 1 > vfs.nfsrv.maxthreads: 200 > vfs.nfsrv.threads: 0 > vfs.nfsrv.request_space_used: 0 > vfs.nfsrv.request_space_used_highest: 0 > vfs.nfsrv.request_space_high: 47185920 > vfs.nfsrv.request_space_low: 31457280 > vfs.nfsrv.request_space_throttled: 0 > vfs.nfsrv.request_space_throttle_count: 0 > > Server Info: > Getattr Setattr Lookup Readlink Read Write Create Remove > 0 0 4859875 16546194 0 0 0 0 > Rename Link Symlink Mkdir Rmdir Readdir RdirPlus Access > 0 -1523364522 0 990131252 0 0 0 0 > Mknod Fsstat Fsinfo PathConf Commit > 0 0 0 0 0 > Server Ret-Failed > 0 > Server Faults > 0 > Server Cache Stats: > Inprog Idem Non-idem Misses > 189710 0 154619 -14704992 > Server Write Gathering: > WriteOps WriteRPC Opsaved > 0 0 0 > > USER PID %CPU %MEM VSZ RSS TT STAT STARTED TIME COMMAND > root 11 1180.6 0.0 0 192 ?? RL 1Jun11 130918:59.20 [idle] > root 0 0.0 0.0 0 5488 ?? DLs 1Jun11 476:54.70 [kernel] > root 1 0.0 0.0 6276 136 ?? ILs 1Jun11 0:00.03 /sbin/init -- > root 2 0.0 0.0 0 16 ?? DL 1Jun11 0:00.00 [ciss_notify0] > root 3 0.0 0.0 0 16 ?? DL 1Jun11 0:00.00 [ciss_notify1] > root 4 0.0 0.0 0 16 ?? DL 1Jun11 0:00.00 [ciss_notify2] > root 5 0.0 0.0 0 16 ?? DL 1Jun11 0:00.00 [sctp_iterator] > root 6 0.0 0.0 0 16 ?? DL 1Jun11 0:00.00 [xpt_thrd] > root 7 0.0 0.0 0 16 ?? DL 1Jun11 0:12.17 [g_mp_kt] > root 8 0.0 0.0 0 16 ?? DL 1Jun11 0:22.25 [pagedaemon] > root 9 0.0 0.0 0 16 ?? DL 1Jun11 0:00.00 [vmdaemon] > root 10 0.0 0.0 0 16 ?? DL 1Jun11 0:00.00 [audit] > root 12 0.0 0.0 0 656 ?? WL 1Jun11 208:26.93 [intr] > root 13 0.0 0.0 0 48 ?? DL 1Jun11 35:45.18 [geom] > root 14 0.0 0.0 0 16 ?? DL 1Jun11 2:29.63 [yarrow] > root 15 0.0 0.0 0 384 ?? DL 1Jun11 0:12.44 [usb] > root 16 0.0 0.0 0 16 ?? DL 1Jun11 0:02.43 [acpi_thermal] > root 17 0.0 0.0 0 16 ?? DL 1Jun11 0:00.25 [acpi_cooling0] > root 18 0.0 0.0 0 16 ?? DL 1Jun11 0:00.00 [pagezero] > root 19 0.0 0.0 0 16 ?? DL 1Jun11 0:01.48 [bufdaemon] > root 20 0.0 0.0 0 16 ?? DL 1Jun11 51:24.22 [syncer] > root 21 0.0 0.0 0 16 ?? DL 1Jun11 0:02.15 [vnlru] > root 22 0.0 0.0 0 16 ?? DL 1Jun11 0:31.61 [softdepflush] > root 1624 0.0 0.0 14364 324 ?? Is 1Jun11 0:00.00 /usr/sbin/moused -p > /dev/ums0 -t auto -I /var/run/moused.ums0.pid > root 1648 0.0 0.0 14364 512 ?? Is 1Jun11 0:00.00 /usr/sbin/moused -p > /dev/ums1 -t auto -I /var/run/moused.ums1.pid > root 1680 0.0 0.0 6276 360 ?? Is 1Jun11 0:00.90 /sbin/devd > root 1919 0.0 0.0 12312 680 ?? Is 1Jun11 0:01.56 /usr/sbin/syslogd -s > root 1943 0.0 0.0 14392 724 ?? 
Is 1Jun11 0:00.32 /usr/sbin/rpcbind > root 2039 0.0 0.0 12308 728 ?? Is 1Jun11 0:00.58 /usr/sbin/mountd > /etc/exports /etc/zfs/exports > root 2048 0.0 0.0 10052 340 ?? Is 1Jun11 0:00.02 nfsd: master (nfsd) > root 2049 0.0 0.0 10052 3472 ?? D 1Jun11 4953:44.73 nfsd: server > (nfsd) > root 2211 0.0 0.0 47000 1600 ?? Is 1Jun11 0:00.00 /usr/sbin/sshd > root 2219 0.0 0.0 20508 1732 ?? Ss 1Jun11 0:05.04 sendmail: accepting > connections (sendmail) > smmsp 2223 0.0 0.0 20508 1388 ?? Is 1Jun11 0:00.12 sendmail: Queue > runner@00:30:00 for /var/spool/clientmqueue (sendmail) > root 2230 0.0 0.0 14260 672 ?? Ss 1Jun11 0:02.44 /usr/sbin/cron -s > root 2283 0.0 0.0 16344 644 ?? Is 1Jun11 0:00.03 /usr/sbin/inetd -wW > -C 60 > root 2371 0.0 0.0 68140 1444 ?? Is 1Jun11 0:00.02 sshd: maint1 [priv] > (sshd) > maint1 2373 0.0 0.0 68140 3776 ?? I 1Jun11 0:29.10 sshd: maint1@pts/0 > (sshd) > root 2383 0.0 0.0 0 128 ?? DL 1Jun11 60:18.89 [zfskern] > root 2446 0.0 0.0 68140 1460 ?? Is 1Jun11 0:00.01 sshd: maint1 [priv] > (sshd) > maint1 2448 0.0 0.0 68140 2200 ?? I 1Jun11 0:00.25 sshd: maint1@pts/2 > (sshd) > root 16213 0.0 0.0 68140 2900 ?? Is Thu04PM 0:00.01 sshd: maint1 > [priv] (sshd) > maint1 16215 0.0 0.0 68140 3296 ?? S Thu04PM 0:07.96 sshd: > maint1@pts/1 (sshd) > root 22683 0.0 0.0 12184 736 ?? Ss Sat05PM 0:13.37 rlogind > root 33240 0.0 0.0 68140 2740 ?? Is Wed12PM 0:00.01 sshd: maint1 > [priv] (sshd) > maint1 33242 0.0 0.0 68140 2780 ?? I Wed12PM 0:00.00 sshd: > maint1@pts/4 (sshd) > root 33279 0.0 0.0 0 16 ?? DL Wed12PM 36:13.14 [fct0-worker] > root 33281 0.0 0.0 0 16 ?? DL Wed12PM 2:09.48 [fct1-worker] > root 33283 0.0 0.0 0 16 ?? DL Wed12PM 2:05.68 [fioa-data-groom] > root 33284 0.0 0.0 0 16 ?? DL Wed12PM 10:48.29 [fio0-bio-submit] > root 33285 0.0 0.0 0 16 ?? DL Wed12PM 0:27.01 [fiob-data-groom] > root 33286 0.0 0.0 0 16 ?? DL Wed12PM 0:03.72 [fio1-bio-submit] > root 33689 0.0 0.0 0 16 ?? DL Wed12PM 0:00.00 [md0] > root 33691 0.0 0.0 0 16 ?? DL Wed12PM 0:00.00 [md1] > root 33693 0.0 0.0 0 16 ?? DL Wed12PM 0:00.00 [md2] > root 33695 0.0 0.0 0 16 ?? DL Wed12PM 0:00.00 [md3] > root 35749 0.0 0.0 12184 572 ?? Is 5:05PM 0:00.01 rlogind > root 52810 0.0 0.0 12184 724 ?? 
Is 1:18PM 0:00.00 rlogind > root 2326 0.0 0.0 41300 984 v0 Is 1Jun11 0:00.01 login [pam] (login) > root 34215 0.0 0.0 17664 2076 v0 I+ Wed01PM 0:00.01 -csh (csh) > root 2327 0.0 0.0 12184 300 v1 Is+ 1Jun11 0:00.00 /usr/libexec/getty > Pc ttyv1 > root 2328 0.0 0.0 12184 300 v2 Is+ 1Jun11 0:00.00 /usr/libexec/getty > Pc ttyv2 > root 2329 0.0 0.0 12184 300 v3 Is+ 1Jun11 0:00.00 /usr/libexec/getty > Pc ttyv3 > root 2330 0.0 0.0 12184 300 v4 Is+ 1Jun11 0:00.00 /usr/libexec/getty > Pc ttyv4 > root 2331 0.0 0.0 12184 300 v5 Is+ 1Jun11 0:00.00 /usr/libexec/getty > Pc ttyv5 > root 2332 0.0 0.0 12184 300 v6 Is+ 1Jun11 0:00.00 /usr/libexec/getty > Pc ttyv6 > root 2333 0.0 0.0 12184 300 v7 Is+ 1Jun11 0:00.00 /usr/libexec/getty > Pc ttyv7 > maint1 2374 0.0 0.0 14636 384 0 Is 1Jun11 0:00.00 -sh (sh) > root 2377 0.0 0.0 41428 568 0 I 1Jun11 0:00.00 su > root 2378 0.0 0.0 17664 1376 0 I+ 1Jun11 0:00.04 _su (csh) > maint1 16216 0.0 0.0 14636 888 1 Is Thu04PM 0:00.00 -sh (sh) > root 16219 0.0 0.0 41428 1176 1 I Thu04PM 0:00.04 su > root 16220 0.0 0.0 17664 3004 1 S Thu04PM 0:00.09 _su (csh) > root 53623 0.0 0.0 14636 1640 1 S+ 4:01PM 0:00.00 /bin/sh > ./nfsdebug.sh > root 53633 0.0 0.0 14328 1304 1 R+ 4:01PM 0:00.00 ps -auxww > maint1 2449 0.0 0.0 14636 636 2 Is 1Jun11 0:00.01 -sh (sh) > root 17045 0.0 0.0 41428 1172 2 I Thu05PM 0:00.00 su > root 17046 0.0 0.0 17664 2076 2 I+ Thu05PM 0:00.03 _su (csh) > root 22684 0.0 0.0 41428 1240 3 Is Sat05PM 0:00.00 login [pam] (login) > root 22685 0.0 0.0 17664 1420 3 I Sat05PM 0:00.02 -csh (csh) > root 22696 0.0 0.0 18660 1260 3 S+ Sat05PM 2:20.85 bwm-ng > maint1 33243 0.0 0.0 14636 880 4 Is+ Wed12PM 0:00.00 -sh (sh) > root 35750 0.0 0.0 41428 984 5 Is 5:05PM 0:00.00 login [pam] (login) > root 35751 0.0 0.0 17664 1320 5 I+ 5:05PM 0:00.01 -csh (csh) > root 52811 0.0 0.0 41428 1152 6 Is 1:18PM 0:00.00 login [pam] (login) > root 52812 0.0 0.0 17664 1820 6 I+ 1:18PM 0:00.01 -csh (csh) > > # netstat -i > Name Mtu Network Address Ipkts Ierrs Idrop Opkts Oerrs Coll > bce0 1500 00:10:18:8d:d0:a4 18340277 26 0 2512640 0 0 > bce0 1500 10.24.0.0 bb99za2a 12939843 - - 2511543 - - > bce0 1500 fe80::210:18f fe80::210:18ff:fe 0 - - 3 - - > bce1* 1500 00:10:18:8d:d0:a6 0 0 0 0 0 0 > cxgb0 9000 00:07:43:07:33:f8 4464851870 0 0 4378199683 0 0 > cxgb0 9000 172.21.21.0 172.21.21.83 4464472961 - - 4378064187 - - > cxgb0 9000 fe80::207:43f fe80::207:43ff:fe 0 - - 3 - - > cxgb1 1500 00:07:43:07:33:f9 0 0 0 0 0 0 > usbus 0 0 0 0 0 0 0 > usbus 0 0 0 0 0 0 0 > usbus 0 0 0 0 0 0 0 > usbus 0 0 0 0 0 0 0 > usbus 0 0 0 0 0 0 0 > usbus 0 0 0 0 0 0 0 > lo0 16384 701 0 0 701 0 0 > lo0 16384 your-net localhost 645 - - 645 - - > lo0 16384 localhost ::1 56 - - 56 - - > lo0 16384 fe80::1%lo0 fe80::1 0 - - 0 - - > How about a: ps axHlww <-- With the "H" we'll see what the nfsd server threads are up to procstat -kka Oh, and a couple of nfsstats a few seconds apart. It's what the counts are changing by that might tell us what is going on. (You can use "-z" to zero them out, if you have an nfsstat built from recent sources.) Also, does a new NFS mount attempt against the server do anything? 
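For instance, something like the following run on the server (a sketch, not verbatim from the thread; the 10-second gap is arbitrary, and -z needs an nfsstat built from recent sources):

# ps axHlww
# procstat -kka
# nfsstat -s -z ; sleep 10 ; nfsstat -s

It's the change between the two nfsstat snapshots, not the absolute counters (which have already wrapped negative here), that shows whether any RPCs are still completing.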
Thanks in advance for help with this, rick From owner-freebsd-fs@FreeBSD.ORG Thu Jun 9 13:37:41 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 38539106566B for ; Thu, 9 Jun 2011 13:37:41 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) Received: from esa-jnhn.mail.uoguelph.ca (esa-jnhn.mail.uoguelph.ca [131.104.91.44]) by mx1.freebsd.org (Postfix) with ESMTP id A35178FC19 for ; Thu, 9 Jun 2011 13:37:40 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: Ap8EANrL8E2DaFvO/2dsb2JhbABOAQSESaJoiHGtbJEDgSuCCQGBZIEKBI82gXSPbg X-IronPort-AV: E=Sophos;i="4.65,341,1304308800"; d="scan'208";a="127355299" Received: from erie.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.206]) by esa-jnhn-pri.mail.uoguelph.ca with ESMTP; 09 Jun 2011 09:37:39 -0400 Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id 542B5B3F25; Thu, 9 Jun 2011 09:37:39 -0400 (EDT) Date: Thu, 9 Jun 2011 09:37:39 -0400 (EDT) From: Rick Macklem To: John Message-ID: <76853920.326059.1307626659333.JavaMail.root@erie.cs.uoguelph.ca> In-Reply-To: <20110609020304.GA3986@FreeBSD.org> MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [172.17.91.201] X-Mailer: Zimbra 6.0.10_GA_2692 (ZimbraWebClient - IE7 (Win)/6.0.10_GA_2692) Cc: freebsd-fs@freebsd.org Subject: Re: New NFS server stress test hang X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 09 Jun 2011 13:37:41 -0000 John De wrote: > Hi, > > We've been running some stress tests of the new nfs server. > The system is at r222531 (head), 9 clients, two mounts each > to the server: > > mount_nfs -o > udp,nfsv3,rsize=32768,wsize=32768,noatime,nolockd,acregmin=1,acregmax=2,acdirmin=1,acdirmax=2,negnametimeo=2 > ${servera}:/vol/datsrc /c/$servera/vol/datsrc > mount_nfs -o > udp,nfsv3,rsize=32768,wsize=32768,noatime,nolockd,acregmin=1,acregmax=2,acdirmin=1,acdirmax=2,negnametimeo=0 > ${servera}:/vol/datgen /c/$servera/vol/datgen > Oh, and just as an aside (ie. I'd still like to resolve what caused it to wedge.), I wouldn't recommend using UDP. Among other things, it doesn't provide backpressure (feedback to the client) when the server gets slower to respond due to heavy load. Since you are using UDP, you should do something like: "netstat -s | grep fragments" and look to the count of "ip fragments dropped due to timeout". (If that # is greater than 0, you'll never get good perf over UDP.) If you take "udp,rsize=32768,wsize=32768" off the mount commands, a recent FreeBSD client will use TCP and the largest rsize/wsize that's supported by the client and server, which should normally work better. > > The system is still up & responsive, simply no nfs services > are working. All (200) threads appear to be active, but not > doing anything. The debugger is not compiled into this kernel. > We can run any other tracing commands desired. We can also > rebuild the kernel with the debugger enabled for any kernel > debugging needed. 
> > > While things are running correctly, sysctl & top will for > instance show the following for nfsd (threads collapsed): > > vfs.nfsd.minthreads: 4 > vfs.nfsd.maxthreads: 200 > vfs.nfsd.threads: 60 > vfs.nfsrv.minthreads: 1 > vfs.nfsrv.maxthreads: 200 > vfs.nfsrv.threads: 0 > last pid: 35073; load averages: 6.74, 4.94, 4.56 up 6+22:17:25 > 16:16:25 > 111 processes: 13 running, 98 sleeping > Mem: 18M Active, 1048M Inact, 64G Wired, 8652K Cache, 9837M Buf, 28G > Free > > PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND > 2049 root 61 49 0 10052K 1608K CPU2 0 49:43 1116.70% nfsd > > > Please let us know what we can do to help debug this. > > Thanks! > John > > > The output of the following commands is below: > > > uname -a > top -d 1 -b > head -n 7 /usr/src/.svn/entries > sysctl -a | grep nfsd > sysctl -a | grep nfs | grep -v nfsd > nfsstat -sW > ps -auxww > netstat -i # All nfs data traffic is via 10G chelsio cards. > > > Amusing thing to note is the negative numbers in the nfsstat > output :-) > > > FreeBSD bb99za2a.unx.sas.com 9.0-CURRENT FreeBSD 9.0-CURRENT #6: Wed > Jun 1 14:50:21 EDT 2011 > maint1@bb99za2a.unx.sas.com:/usr/obj/usr/src/sys/ZFS amd64 > last pid: 53625; load averages: 0.15, 0.07, 0.02 up 7+22:02:05 > 16:01:05 > 251 processes: 1 running, 250 sleeping > > Mem: 3584K Active, 1066M Inact, 87G Wired, 5844K Cache, 9837M Buf, > 5426M Free > Swap: > > > PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND > > > 2049 root 200 52 0 10052K 3472K nfsrc 1 102:27 0.00% nfsd > 22696 root 1 20 0 18660K 1260K select 0 2:21 0.00% bwm-ng > 2373 maint1 1 20 0 68140K 3776K select 1 0:29 0.00% sshd > 22683 root 1 20 0 12184K 736K select 6 0:13 0.00% rlogind > 16215 maint1 1 20 0 68140K 3296K select 11 0:08 0.00% sshd > 2219 root 1 20 0 20508K 1732K select 6 0:05 0.00% sendmail > 2230 root 1 20 0 14260K 672K nanslp 6 0:02 0.00% cron > 1919 root 1 20 0 12312K 680K select 8 0:02 0.00% syslogd > 1680 root 1 20 0 6276K 360K select 2 0:01 0.00% devd > 2039 root 1 20 0 12308K 728K select 8 0:01 0.00% mountd > 1943 root 1 20 0 14392K 724K select 0 0:00 0.00% rpcbind > 2448 maint1 1 20 0 68140K 2200K select 3 0:00 0.00% sshd > 2223 smmsp 1 20 0 20508K 1388K pause 3 0:00 0.00% sendmail > 16220 root 1 20 0 17664K 3004K pause 1 0:00 0.00% csh > 2378 root 1 20 0 17664K 1376K ttyin 2 0:00 0.00% csh > 16219 maint1 1 27 0 41428K 1176K wait 1 0:00 0.00% su > 2283 root 1 20 0 16344K 644K select 7 0:00 0.00% inetd > 17046 root 1 20 0 17664K 2076K ttyin 7 0:00 0.00% csh > > 10 > > dir > 222531 > svn://svn.freebsd.org/base/head > svn://svn.freebsd.org/base > > kern.features.nfsd: 1 > vfs.nfsd.server_max_nfsvers: 4 > vfs.nfsd.server_min_nfsvers: 2 > vfs.nfsd.nfs_privport: 0 > vfs.nfsd.enable_locallocks: 0 > vfs.nfsd.issue_delegations: 0 > vfs.nfsd.commit_miss: 0 > vfs.nfsd.commit_blks: 17396119 > vfs.nfsd.mirrormnt: 1 > vfs.nfsd.minthreads: 4 > vfs.nfsd.maxthreads: 200 > vfs.nfsd.threads: 200 > vfs.nfsd.request_space_used: 632932 > vfs.nfsd.request_space_used_highest: 1044128 > vfs.nfsd.request_space_high: 47185920 > vfs.nfsd.request_space_low: 31457280 > vfs.nfsd.request_space_throttled: 0 > vfs.nfsd.request_space_throttle_count: 0 > vfs.nfsrv.fha.max_nfsds_per_fh: 8 > vfs.nfsrv.fha.max_reqs_per_nfsd: 4 > kern.features.nfscl: 1 > kern.features.nfsserver: 1 > vfs.nfs.downdelayinitial: 12 > vfs.nfs.downdelayinterval: 30 > vfs.nfs.keytab_enctype: 1 > vfs.nfs.skip_wcc_data_onerr: 1 > vfs.nfs.nfs3_jukebox_delay: 10 > vfs.nfs.reconnects: 0 > vfs.nfs.bufpackets: 4 > vfs.nfs.callback_addr: > 
vfs.nfs.realign_count: 0 > vfs.nfs.realign_test: 0 > vfs.nfs.nfs_directio_allow_mmap: 1 > vfs.nfs.nfs_directio_enable: 0 > vfs.nfs.clean_pages_on_close: 1 > vfs.nfs.commit_on_close: 0 > vfs.nfs.prime_access_cache: 0 > vfs.nfs.access_cache_timeout: 60 > vfs.nfs.diskless_rootpath: > vfs.nfs.diskless_valid: 0 > vfs.nfs.nfs_ip_paranoia: 1 > vfs.nfs.defect: 0 > vfs.nfs.iodmax: 20 > vfs.nfs.iodmin: 0 > vfs.nfs.iodmaxidle: 120 > vfs.acl_nfs4_old_semantics: 0 > vfs.nfs_common.realign_count: 0 > vfs.nfs_common.realign_test: 0 > vfs.nfsrv.nfs_privport: 0 > vfs.nfsrv.fha.bin_shift: 18 > vfs.nfsrv.fha.fhe_stats: No file handle entries. > vfs.nfsrv.commit_miss: 0 > vfs.nfsrv.commit_blks: 0 > vfs.nfsrv.async: 0 > vfs.nfsrv.gatherdelay_v3: 0 > vfs.nfsrv.gatherdelay: 10000 > vfs.nfsrv.minthreads: 1 > vfs.nfsrv.maxthreads: 200 > vfs.nfsrv.threads: 0 > vfs.nfsrv.request_space_used: 0 > vfs.nfsrv.request_space_used_highest: 0 > vfs.nfsrv.request_space_high: 47185920 > vfs.nfsrv.request_space_low: 31457280 > vfs.nfsrv.request_space_throttled: 0 > vfs.nfsrv.request_space_throttle_count: 0 > > Server Info: > Getattr Setattr Lookup Readlink Read Write Create Remove > 0 0 4859875 16546194 0 0 0 0 > Rename Link Symlink Mkdir Rmdir Readdir RdirPlus Access > 0 -1523364522 0 990131252 0 0 0 0 > Mknod Fsstat Fsinfo PathConf Commit > 0 0 0 0 0 > Server Ret-Failed > 0 > Server Faults > 0 > Server Cache Stats: > Inprog Idem Non-idem Misses > 189710 0 154619 -14704992 > Server Write Gathering: > WriteOps WriteRPC Opsaved > 0 0 0 > > USER PID %CPU %MEM VSZ RSS TT STAT STARTED TIME COMMAND > root 11 1180.6 0.0 0 192 ?? RL 1Jun11 130918:59.20 [idle] > root 0 0.0 0.0 0 5488 ?? DLs 1Jun11 476:54.70 [kernel] > root 1 0.0 0.0 6276 136 ?? ILs 1Jun11 0:00.03 /sbin/init -- > root 2 0.0 0.0 0 16 ?? DL 1Jun11 0:00.00 [ciss_notify0] > root 3 0.0 0.0 0 16 ?? DL 1Jun11 0:00.00 [ciss_notify1] > root 4 0.0 0.0 0 16 ?? DL 1Jun11 0:00.00 [ciss_notify2] > root 5 0.0 0.0 0 16 ?? DL 1Jun11 0:00.00 [sctp_iterator] > root 6 0.0 0.0 0 16 ?? DL 1Jun11 0:00.00 [xpt_thrd] > root 7 0.0 0.0 0 16 ?? DL 1Jun11 0:12.17 [g_mp_kt] > root 8 0.0 0.0 0 16 ?? DL 1Jun11 0:22.25 [pagedaemon] > root 9 0.0 0.0 0 16 ?? DL 1Jun11 0:00.00 [vmdaemon] > root 10 0.0 0.0 0 16 ?? DL 1Jun11 0:00.00 [audit] > root 12 0.0 0.0 0 656 ?? WL 1Jun11 208:26.93 [intr] > root 13 0.0 0.0 0 48 ?? DL 1Jun11 35:45.18 [geom] > root 14 0.0 0.0 0 16 ?? DL 1Jun11 2:29.63 [yarrow] > root 15 0.0 0.0 0 384 ?? DL 1Jun11 0:12.44 [usb] > root 16 0.0 0.0 0 16 ?? DL 1Jun11 0:02.43 [acpi_thermal] > root 17 0.0 0.0 0 16 ?? DL 1Jun11 0:00.25 [acpi_cooling0] > root 18 0.0 0.0 0 16 ?? DL 1Jun11 0:00.00 [pagezero] > root 19 0.0 0.0 0 16 ?? DL 1Jun11 0:01.48 [bufdaemon] > root 20 0.0 0.0 0 16 ?? DL 1Jun11 51:24.22 [syncer] > root 21 0.0 0.0 0 16 ?? DL 1Jun11 0:02.15 [vnlru] > root 22 0.0 0.0 0 16 ?? DL 1Jun11 0:31.61 [softdepflush] > root 1624 0.0 0.0 14364 324 ?? Is 1Jun11 0:00.00 /usr/sbin/moused -p > /dev/ums0 -t auto -I /var/run/moused.ums0.pid > root 1648 0.0 0.0 14364 512 ?? Is 1Jun11 0:00.00 /usr/sbin/moused -p > /dev/ums1 -t auto -I /var/run/moused.ums1.pid > root 1680 0.0 0.0 6276 360 ?? Is 1Jun11 0:00.90 /sbin/devd > root 1919 0.0 0.0 12312 680 ?? Is 1Jun11 0:01.56 /usr/sbin/syslogd -s > root 1943 0.0 0.0 14392 724 ?? Is 1Jun11 0:00.32 /usr/sbin/rpcbind > root 2039 0.0 0.0 12308 728 ?? Is 1Jun11 0:00.58 /usr/sbin/mountd > /etc/exports /etc/zfs/exports > root 2048 0.0 0.0 10052 340 ?? Is 1Jun11 0:00.02 nfsd: master (nfsd) > root 2049 0.0 0.0 10052 3472 ?? 
D 1Jun11 4953:44.73 nfsd: server > (nfsd) > root 2211 0.0 0.0 47000 1600 ?? Is 1Jun11 0:00.00 /usr/sbin/sshd > root 2219 0.0 0.0 20508 1732 ?? Ss 1Jun11 0:05.04 sendmail: accepting > connections (sendmail) > smmsp 2223 0.0 0.0 20508 1388 ?? Is 1Jun11 0:00.12 sendmail: Queue > runner@00:30:00 for /var/spool/clientmqueue (sendmail) > root 2230 0.0 0.0 14260 672 ?? Ss 1Jun11 0:02.44 /usr/sbin/cron -s > root 2283 0.0 0.0 16344 644 ?? Is 1Jun11 0:00.03 /usr/sbin/inetd -wW > -C 60 > root 2371 0.0 0.0 68140 1444 ?? Is 1Jun11 0:00.02 sshd: maint1 [priv] > (sshd) > maint1 2373 0.0 0.0 68140 3776 ?? I 1Jun11 0:29.10 sshd: maint1@pts/0 > (sshd) > root 2383 0.0 0.0 0 128 ?? DL 1Jun11 60:18.89 [zfskern] > root 2446 0.0 0.0 68140 1460 ?? Is 1Jun11 0:00.01 sshd: maint1 [priv] > (sshd) > maint1 2448 0.0 0.0 68140 2200 ?? I 1Jun11 0:00.25 sshd: maint1@pts/2 > (sshd) > root 16213 0.0 0.0 68140 2900 ?? Is Thu04PM 0:00.01 sshd: maint1 > [priv] (sshd) > maint1 16215 0.0 0.0 68140 3296 ?? S Thu04PM 0:07.96 sshd: > maint1@pts/1 (sshd) > root 22683 0.0 0.0 12184 736 ?? Ss Sat05PM 0:13.37 rlogind > root 33240 0.0 0.0 68140 2740 ?? Is Wed12PM 0:00.01 sshd: maint1 > [priv] (sshd) > maint1 33242 0.0 0.0 68140 2780 ?? I Wed12PM 0:00.00 sshd: > maint1@pts/4 (sshd) > root 33279 0.0 0.0 0 16 ?? DL Wed12PM 36:13.14 [fct0-worker] > root 33281 0.0 0.0 0 16 ?? DL Wed12PM 2:09.48 [fct1-worker] > root 33283 0.0 0.0 0 16 ?? DL Wed12PM 2:05.68 [fioa-data-groom] > root 33284 0.0 0.0 0 16 ?? DL Wed12PM 10:48.29 [fio0-bio-submit] > root 33285 0.0 0.0 0 16 ?? DL Wed12PM 0:27.01 [fiob-data-groom] > root 33286 0.0 0.0 0 16 ?? DL Wed12PM 0:03.72 [fio1-bio-submit] > root 33689 0.0 0.0 0 16 ?? DL Wed12PM 0:00.00 [md0] > root 33691 0.0 0.0 0 16 ?? DL Wed12PM 0:00.00 [md1] > root 33693 0.0 0.0 0 16 ?? DL Wed12PM 0:00.00 [md2] > root 33695 0.0 0.0 0 16 ?? DL Wed12PM 0:00.00 [md3] > root 35749 0.0 0.0 12184 572 ?? Is 5:05PM 0:00.01 rlogind > root 52810 0.0 0.0 12184 724 ?? 
Is 1:18PM 0:00.00 rlogind > root 2326 0.0 0.0 41300 984 v0 Is 1Jun11 0:00.01 login [pam] (login) > root 34215 0.0 0.0 17664 2076 v0 I+ Wed01PM 0:00.01 -csh (csh) > root 2327 0.0 0.0 12184 300 v1 Is+ 1Jun11 0:00.00 /usr/libexec/getty > Pc ttyv1 > root 2328 0.0 0.0 12184 300 v2 Is+ 1Jun11 0:00.00 /usr/libexec/getty > Pc ttyv2 > root 2329 0.0 0.0 12184 300 v3 Is+ 1Jun11 0:00.00 /usr/libexec/getty > Pc ttyv3 > root 2330 0.0 0.0 12184 300 v4 Is+ 1Jun11 0:00.00 /usr/libexec/getty > Pc ttyv4 > root 2331 0.0 0.0 12184 300 v5 Is+ 1Jun11 0:00.00 /usr/libexec/getty > Pc ttyv5 > root 2332 0.0 0.0 12184 300 v6 Is+ 1Jun11 0:00.00 /usr/libexec/getty > Pc ttyv6 > root 2333 0.0 0.0 12184 300 v7 Is+ 1Jun11 0:00.00 /usr/libexec/getty > Pc ttyv7 > maint1 2374 0.0 0.0 14636 384 0 Is 1Jun11 0:00.00 -sh (sh) > root 2377 0.0 0.0 41428 568 0 I 1Jun11 0:00.00 su > root 2378 0.0 0.0 17664 1376 0 I+ 1Jun11 0:00.04 _su (csh) > maint1 16216 0.0 0.0 14636 888 1 Is Thu04PM 0:00.00 -sh (sh) > root 16219 0.0 0.0 41428 1176 1 I Thu04PM 0:00.04 su > root 16220 0.0 0.0 17664 3004 1 S Thu04PM 0:00.09 _su (csh) > root 53623 0.0 0.0 14636 1640 1 S+ 4:01PM 0:00.00 /bin/sh > ./nfsdebug.sh > root 53633 0.0 0.0 14328 1304 1 R+ 4:01PM 0:00.00 ps -auxww > maint1 2449 0.0 0.0 14636 636 2 Is 1Jun11 0:00.01 -sh (sh) > root 17045 0.0 0.0 41428 1172 2 I Thu05PM 0:00.00 su > root 17046 0.0 0.0 17664 2076 2 I+ Thu05PM 0:00.03 _su (csh) > root 22684 0.0 0.0 41428 1240 3 Is Sat05PM 0:00.00 login [pam] (login) > root 22685 0.0 0.0 17664 1420 3 I Sat05PM 0:00.02 -csh (csh) > root 22696 0.0 0.0 18660 1260 3 S+ Sat05PM 2:20.85 bwm-ng > maint1 33243 0.0 0.0 14636 880 4 Is+ Wed12PM 0:00.00 -sh (sh) > root 35750 0.0 0.0 41428 984 5 Is 5:05PM 0:00.00 login [pam] (login) > root 35751 0.0 0.0 17664 1320 5 I+ 5:05PM 0:00.01 -csh (csh) > root 52811 0.0 0.0 41428 1152 6 Is 1:18PM 0:00.00 login [pam] (login) > root 52812 0.0 0.0 17664 1820 6 I+ 1:18PM 0:00.01 -csh (csh) > > # netstat -i > Name Mtu Network Address Ipkts Ierrs Idrop Opkts Oerrs Coll > bce0 1500 00:10:18:8d:d0:a4 18340277 26 0 2512640 0 0 > bce0 1500 10.24.0.0 bb99za2a 12939843 - - 2511543 - - > bce0 1500 fe80::210:18f fe80::210:18ff:fe 0 - - 3 - - > bce1* 1500 00:10:18:8d:d0:a6 0 0 0 0 0 0 > cxgb0 9000 00:07:43:07:33:f8 4464851870 0 0 4378199683 0 0 > cxgb0 9000 172.21.21.0 172.21.21.83 4464472961 - - 4378064187 - - > cxgb0 9000 fe80::207:43f fe80::207:43ff:fe 0 - - 3 - - > cxgb1 1500 00:07:43:07:33:f9 0 0 0 0 0 0 > usbus 0 0 0 0 0 0 0 > usbus 0 0 0 0 0 0 0 > usbus 0 0 0 0 0 0 0 > usbus 0 0 0 0 0 0 0 > usbus 0 0 0 0 0 0 0 > usbus 0 0 0 0 0 0 0 > lo0 16384 701 0 0 701 0 0 > lo0 16384 your-net localhost 645 - - 645 - - > lo0 16384 localhost ::1 56 - - 56 - - > lo0 16384 fe80::1%lo0 fe80::1 0 - - 0 - - > > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Thu Jun 9 13:38:05 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: by hub.freebsd.org (Postfix, from userid 821) id B8EC9106566B; Thu, 9 Jun 2011 13:38:05 +0000 (UTC) Date: Thu, 9 Jun 2011 13:38:05 +0000 From: John To: Rick Macklem Message-ID: <20110609133805.GA78874@FreeBSD.org> References: <20110609020304.GA3986@FreeBSD.org> <795803957.322936.1307624343538.JavaMail.root@erie.cs.uoguelph.ca> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: 
<795803957.322936.1307624343538.JavaMail.root@erie.cs.uoguelph.ca> User-Agent: Mutt/1.4.2.1i Cc: freebsd-fs@freebsd.org Subject: Re: New NFS server stress test hang X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 09 Jun 2011 13:38:05 -0000 ----- Rick Macklem's Original Message ----- > John De wrote: > > Hi, > > > > We've been running some stress tests of the new nfs server. > > The system is at r222531 (head), 9 clients, two mounts each > > to the server: > > > > mount_nfs -o > > udp,nfsv3,rsize=32768,wsize=32768,noatime,nolockd,acregmin=1,acregmax=2,acdirmin=1,acdirmax=2,negnametimeo=2 > > ${servera}:/vol/datsrc /c/$servera/vol/datsrc > > mount_nfs -o > > udp,nfsv3,rsize=32768,wsize=32768,noatime,nolockd,acregmin=1,acregmax=2,acdirmin=1,acdirmax=2,negnametimeo=0 > > ${servera}:/vol/datgen /c/$servera/vol/datgen > > > > > > The system is still up & responsive, simply no nfs services > > are working. All (200) threads appear to be active, but not > > doing anything. The debugger is not compiled into this kernel. > > We can run any other tracing commands desired. We can also > > rebuild the kernel with the debugger enabled for any kernel > > debugging needed. > > > > --- long logs deleted --- > > How about a: > ps axHlww <-- With the "H" we'll see what the nfsd server threads are up to > procstat -kka > > Oh, and a couple of nfsstats a few seconds apart. It's what the counts > are changing by that might tell us what is going on. (You can use "-z" > to zero them out, if you have an nfsstat built from recent sources.) > > Also, does a new NFS mount attempt against the server do anything? > > Thanks in advance for help with this, rick Hi Rick, Here's the output. In general, the nfsd processes appear to be in either nfsrvd_getcache(35 instances) or nfsrvd_updatecache(164) sleeping on "nfssrc". The server numbers don't appear to be moving. A showmount from a client system works, but a mount does not (see below). The underlying zfs filesystem seems to be working fine: cd /vol/datsrc /usr/bin/time find . -type f | wc -l 1.82 real 0.29 user 1.52 sys 354429 cd /vol/datgen /usr/bin/time find . -type f | wc -l 1.73 real 0.09 user 1.64 sys 153050 Is there a way to tell what cache block or file the servers are trying to process? Thanks! John servera# nfsstat -s Server Info: Getattr Setattr Lookup Readlink Read Write Create Remove 0 0 4859875 16546194 0 0 0 0 Rename Link Symlink Mkdir Rmdir Readdir RdirPlus Access 0 -1523364522 0 990131252 0 0 0 0 Mknod Fsstat Fsinfo PathConf Commit 0 0 0 0 0 Server Ret-Failed 0 Server Faults 0 Server Cache Stats: Inprog Idem Non-idem Misses 189710 0 154619 -14704992 Server Write Gathering: WriteOps WriteRPC Opsaved 0 0 0 servera# ps axHlww UID PID PPID CPU PRI NI VSZ RSS MWCHAN STAT TT TIME COMMAND 0 0 0 0 -16 0 0 5488 sched DLs ?? 3:14.67 [kernel] 0 0 0 0 8 0 0 5488 - DLs ?? 0:00.00 [kernel] 0 0 0 0 8 0 0 5488 - DLs ?? 0:00.00 [kernel] 0 0 0 0 8 0 0 5488 - DLs ?? 0:00.00 [kernel] 0 0 0 0 8 0 0 5488 - DLs ?? 0:00.00 [kernel] 0 0 0 0 8 0 0 5488 - DLs ?? 0:00.00 [kernel] 0 0 0 0 8 0 0 5488 - DLs ?? 0:00.00 [kernel] 0 0 0 0 8 0 0 5488 - DLs ?? 0:00.16 [kernel] 0 0 0 0 -92 0 0 5488 - DLs ?? 0:25.05 [kernel] 0 0 0 0 -52 0 0 5488 - DLs ?? 0:00.00 [kernel] 0 0 0 0 -100 0 0 5488 - DLs ?? 0:01.54 [kernel] 0 0 0 0 -100 0 0 5488 - DLs ?? 0:00.00 [kernel] 0 0 0 0 -100 0 0 5488 - DLs ?? 0:00.00 [kernel] 0 0 0 0 -100 0 0 5488 - DLs ?? 
0:00.00 [kernel]
[ 300-odd further [kernel] threads elided: every one is UID 0 / PID 0, state
  DLs on tty ??, priority -100, -16 or -8, RSS 5488, with accumulated CPU
  times ranging from 0:00.00 up to 19:01.10 ]
0     1     0  0  46 0  6276  136 wait   ILs ??     0:00.03 /sbin/init --
0     2     0  0 -16 0     0   16 idle   DL  ??     0:00.00 [ciss_notify0]
0     3     0  0 -16 0     0   16 idle   DL  ??     0:00.00 [ciss_notify1]
0     4     0  0 -16 0     0   16 idle   DL  ??     0:00.00 [ciss_notify2]
0     5     0  0 -16 0     0   16 waitin DL  ??     0:00.00 [sctp_iterator]
0     6     0  0 -16 0     0   16 ccb_sc DL  ??     0:02.28 [xpt_thrd]
0     7     0  0 -16 0     0   16 gkt:wa DL  ??     0:13.28 [g_mp_kt]
0     8     0  0 -16 0     0   16 psleep DL  ??     0:23.19 [pagedaemon]
0     9     0  0 -16 0     0   16 psleep DL  ??     0:00.00 [vmdaemon]
0    10     0  0 -16 0     0   16 audit_ DL  ??     0:00.00 [audit]
0    11     0  0 155 0     0  192 -      RL  ?? 11904:17.58 [idle]
[ 11 more [idle] threads elided (one per CPU, 12 in all), each with between
  11736 and 12058 minutes of accumulated idle time ]
0    12     0  0 -60 0     0  656 -      WL  ??     2:03.90 [intr]
[ ~40 more [intr] threads at priorities -52 through -92 elided; most are
  near 0:00.00, the two busiest show 82:56.38 and 82:38.72 ]
0    13     0  0  -8 0     0   48 -      DL  ??     0:12.24 [geom]
0    13     0  0  -8 0     0   48 -      DL  ??    15:41.87 [geom]
0    13     0  0  -8 0     0   48 -      DL  ??    19:52.32 [geom]
0    14     0  0 -16 0     0   16 -      DL  ??     2:38.12 [yarrow]
[ 24 [usb] threads (PID 15, priorities -68/-72) elided, nearly all 0:00.00 ]
0    16     0  0 -16 0     0   16 tzpoll DL  ??     0:02.64 [acpi_thermal]
0    17     0  0 -16 0     0   16 coolin DL  ??     0:00.27 [acpi_cooling0]
0    18     0  0 155 0     0   16 pgzero DL  ??     0:00.01 [pagezero]
0    19     0  0 -16 0     0   16 psleep DL  ??     0:01.61 [bufdaemon]
0    20     0  0  16 0     0   16 syncer DL  ??    56:18.96 [syncer]
0    21     0  0 -16 0     0   16 vlruwt DL  ??     0:02.32 [vnlru]
0    22     0  0 -16 0     0   16 sdflus DL  ??     0:35.36 [softdepflush]
0  1624     1  0  52 0 14364  296 select Is  ??     0:00.00 /usr/sbin/moused -p /dev/ums0 -t auto -I /var/run/moused.ums0.pid
0  1648     1  0  20 0 14364  468 select Is  ??     0:00.00 /usr/sbin/moused -p /dev/ums1 -t auto -I /var/run/moused.ums1.pid
0  1680     1  0  20 0  6276  504 select Is  ??     0:00.90 /sbin/devd
0  1919     1  0  20 0 12312  632 select Ss  ??     0:01.62 /usr/sbin/syslogd -s
0  1943     1  0  20 0 14392  776 select Ss  ??     0:00.35 /usr/sbin/rpcbind
0  2039     1  0  20 0 12308  756 select Is  ??     0:00.58 /usr/sbin/mountd /etc/exports /etc/zfs/exports
0  2048     1  0  20 0 10052  324 select Is  ??     0:00.02 nfsd: master (nfsd)
0  2049  2048  0  20 0 10052 3444 nfsrc  D   ??    60:36.36 nfsd: server (nfsd)
[ ~200 further "nfsd: server (nfsd)" threads of PID 2049 elided: every one
  of them is in state D on the same wait channel, "nfsrc", with CPU times
  between 0:00.05 and 3:26.99 ]
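The striking pattern above is that every nfsd service thread is blocked in
uninterruptible sleep on one wait channel, "nfsrc" - which, as far as I can
tell, is the new NFS server's request-cache (DRC) code - while only the
master process is still in select. To pull that summary out of a fresh
listing without reading it line by line, something like this should do (a
rough sh one-liner; it assumes the `ps axHl` column order shown here, with
MWCHAN as the ninth field):

    # count nfsd service threads per wait channel, busiest channel first
    ps axHl | grep 'nfsd: server' | awk '{ print $9 }' | sort | uniq -c | sort -rn

When the threads are piling up on a single lock, essentially the whole
thread count lands in one bucket.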
0  2211     1  0  20 0 47000 1572 select Is  ??     0:00.00 /usr/sbin/sshd
0  2219     1  0  20 0 20508 1732 select Ss  ??     0:05.53 sendmail: accepting connections (sendmail)
25 2223     1  0  20 0 20508 1260 pause  Is  ??     0:00.13 sendmail: Queue runner@00:30:00 for /var/spool/clientmqueue (sendmail)
0  2230     1  0  20 0 14260  640 nanslp Is  ??     0:02.51 /usr/sbin/cron -s
0  2283     1  0  20 0 16344  708 select Is  ??     0:00.03 /usr/sbin/inetd -wW -C 60
0  2383     0  0  -8 0     0  128 arc_re DL  ??     0:21.04 [zfskern]
0  2383     0  0  -8 0     0  128 l2arc_ DL  ??    49:07.10 [zfskern]
[ 4 more [zfskern] threads elided, all sleeping on "tx->tx" with times
  between 0:00.19 and 13:05.43 ]
0 33279     0  0  -8 0     0   16 fio_wo DL  ??    43:49.52 [fct0-worker]
0 33281     0  0  -8 0     0   16 fio_wo DL  ??     3:19.89 [fct1-worker]
0 33283     0  0  -8 0     0   16 fio_gr DL  ??     2:22.46 [fioa-data-groom]
0 33284     0  0  -8 0     0   16 fio_su DL  ??    10:48.41 [fio0-bio-submit]
0 33285     0  0  -8 0     0   16 fio_gr DL  ??     0:41.60 [fiob-data-groom]
0 33286     0  0  -8 0     0   16 fio_su DL  ??     0:03.84 [fio1-bio-submit]
[ the remainder of the listing elided: [md0]-[md3] (mdwait, 0:00.00), four
  sshd: nihard [priv]/pts session pairs, four rlogind sessions, the gettys
  on ttyv1-ttyv7, assorted login/sh/csh/su shells, a bwm-ng monitor and the
  script /tmp/tmplog session that captured this output; the final entry is
  the ps command itself: ]
0 55717 55712  0  20 0 14328 2040 -      R+   8     0:00.00 ps axHlww
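The procstat dump that follows is dominated by identical idle stacks, so it
is easier to aggregate it than to read it linearly. A rough way to histogram
the distinct kernel stacks (plain sh again; it assumes, as in the output
below, that every sleeping thread's stack begins at mi_switch):

    # collapse each line to its stack and count duplicates, biggest first
    procstat -kka | sed 's/.*mi_switch/mi_switch/' | sort | uniq -c | sort -rn | head

Lines for running threads do not match the pattern and simply stay unique;
the entries worth looking at are whatever large buckets remain besides the
taskqueue idle loop.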
servera# procstat -kka
  PID    TID COMM             TDNAME           KSTACK
    0 100000 kernel           swapper          mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 scheduler+0x34a mi_startup+0x77 btext+0x2c
    0 100032 kernel           firmware taskq   mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe
    0 100035 kernel           acpi_task_0      mi_switch+0x174 sleepq_wait+0x42 msleep_spin+0x1a2 taskqueue_thread_loop+0x67 fork_exit+0x11f fork_trampoline+0xe
[ a few hundred further kernel taskqueue threads elided -- kqueue taskq,
  acpi_task_1/2, ffs_trim taskq, thread taskq, cxgbc0 taskq, mca taskq,
  system_taskq_0-11, zfs_vn_rele_task, zil_clean and the full set of ZFS
  zio worker pools (zio_null/read/write/free/claim/ioctl, both issue and
  intr variants) -- every one of them idle in the same stack:
  mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc
  fork_exit+0x11f fork_trampoline+0xe ]
    0 103400 kernel
zio_free_issue_1 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103404 kernel zio_free_issue_1 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103405 kernel zio_free_issue_1 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103406 kernel zio_free_issue_2 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103410 kernel zio_free_issue_2 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103412 kernel zio_free_issue_2 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103414 kernel zio_free_issue_2 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103416 kernel zio_free_issue_2 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103417 kernel zio_free_issue_2 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103418 kernel zio_free_issue_2 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103421 kernel zio_free_issue_2 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103425 kernel zil_clean mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103428 kernel zio_free_issue_2 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103432 kernel zio_free_issue_2 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103942 kernel zio_free_issue_3 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103943 kernel zio_free_issue_3 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103944 kernel zio_free_issue_3 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103945 kernel zio_free_issue_3 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103946 kernel zio_free_issue_3 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103947 kernel zio_free_issue_3 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103948 kernel zio_free_issue_3 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103949 kernel zio_free_issue_3 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103950 kernel zio_free_issue_3 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103951 kernel zio_free_issue_3 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103952 kernel zio_free_issue_4 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103953 kernel 
zio_free_issue_4 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103954 kernel zio_free_issue_4 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103955 kernel zio_free_issue_4 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103956 kernel zio_free_issue_4 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103957 kernel zio_free_issue_4 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103958 kernel zio_free_issue_4 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103959 kernel zio_free_issue_4 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103960 kernel zio_free_issue_4 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103961 kernel zio_free_issue_4 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103962 kernel zio_free_issue_5 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103963 kernel zio_free_issue_5 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103964 kernel zio_free_issue_5 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103965 kernel zio_free_issue_5 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103966 kernel zio_free_issue_5 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103967 kernel zio_free_issue_5 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103968 kernel zio_free_issue_5 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103969 kernel zio_free_issue_5 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103970 kernel zio_free_issue_5 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103971 kernel zio_free_issue_5 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103972 kernel zio_free_issue_6 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103973 kernel zio_free_issue_6 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103974 kernel zio_free_issue_6 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103975 kernel zio_free_issue_6 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103976 kernel zio_free_issue_6 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103977 kernel zio_free_issue_6 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103978 kernel 
zio_free_issue_6 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103979 kernel zio_free_issue_6 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103980 kernel zio_free_issue_6 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103981 kernel zio_free_issue_6 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103982 kernel zio_free_issue_7 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103983 kernel zio_free_issue_7 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103984 kernel zio_free_issue_7 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103985 kernel zio_free_issue_7 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103986 kernel zio_free_issue_7 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103987 kernel zio_free_issue_7 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103988 kernel zio_free_issue_7 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103989 kernel zio_free_issue_7 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103990 kernel zio_free_issue_7 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103991 kernel zio_free_issue_7 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103992 kernel zio_free_issue_8 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103993 kernel zio_free_issue_8 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103994 kernel zio_free_issue_8 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103995 kernel zio_free_issue_8 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103996 kernel zio_free_issue_8 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103997 kernel zio_free_issue_8 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103998 kernel zio_free_issue_8 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 103999 kernel zio_free_issue_8 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 104000 kernel zio_free_issue_8 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 104001 kernel zio_free_issue_8 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 104002 kernel zio_free_issue_9 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 104003 kernel 
zio_free_issue_9 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 104004 kernel zio_free_issue_9 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 104005 kernel zio_free_issue_9 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 104006 kernel zio_free_issue_9 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 104007 kernel zio_free_issue_9 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 104008 kernel zio_free_issue_9 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 104009 kernel zio_free_issue_9 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 104010 kernel zio_free_issue_9 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 104011 kernel zio_free_issue_9 mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 104012 kernel zio_free_intr mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 104013 kernel zio_claim_issue mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 104014 kernel zio_claim_intr mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 104015 kernel zio_ioctl_issue mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 104016 kernel zio_ioctl_intr mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 104017 kernel zfs_vn_rele_task mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 0 104020 kernel zil_clean mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe 1 100002 init - mi_switch+0x174 sleepq_catch_signals+0x2f4 sleepq_wait_sig+0x16 _sleep+0x269 kern_wait+0x6fd wait4+0x35 syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd 2 100048 ciss_notify0 - mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 ciss_notify_thread+0x2a5 fork_exit+0x11f fork_trampoline+0xe 3 100050 ciss_notify1 - mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 ciss_notify_thread+0x2a5 fork_exit+0x11f fork_trampoline+0xe 4 100081 ciss_notify2 - mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 ciss_notify_thread+0x2a5 fork_exit+0x11f fork_trampoline+0xe 5 100086 sctp_iterator - mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 sctp_iterator_thread+0x54 fork_exit+0x11f fork_trampoline+0xe 6 100087 xpt_thrd - mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 xpt_scanner_thread+0xfa fork_exit+0x11f fork_trampoline+0xe 7 100088 g_mp_kt - mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 g_multipath_kt+0x23d fork_exit+0x11f fork_trampoline+0xe 8 100089 pagedaemon - mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 vm_pageout+0x9ca fork_exit+0x11f fork_trampoline+0xe 9 100090 vmdaemon - mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 vm_daemon+0x6a fork_exit+0x11f fork_trampoline+0xe 10 100001 audit - mi_switch+0x174 sleepq_wait+0x42 _cv_wait+0x129 audit_worker+0x384 fork_exit+0x11f fork_trampoline+0xe 
11 100003 idle idle: cpu0
11 100004 idle idle: cpu1
11 100005 idle idle: cpu2
11 100006 idle idle: cpu3
11 100007 idle idle: cpu4
11 100008 idle idle: cpu5
11 100009 idle idle: cpu6 mi_switch+0x174 critical_exit+0x9b sched_idletd+0x280 fork_exit+0x11f fork_trampoline+0xe
11 100010 idle idle: cpu7
11 100011 idle idle: cpu8
11 100012 idle idle: cpu9
11 100013 idle idle: cpu10
11 100014 idle idle: cpu11
12 100015 intr swi4: clock mi_switch+0x174 ithread_loop+0x216 fork_exit+0x11f fork_trampoline+0xe
12 100016 intr swi4: clock mi_switch+0x174 ithread_loop+0x216 fork_exit+0x11f fork_trampoline+0xe
12 100017 intr swi4: clock mi_switch+0x174 ithread_loop+0x216 fork_exit+0x11f fork_trampoline+0xe
12 100018 intr swi4: clock mi_switch+0x174 ithread_loop+0x216 fork_exit+0x11f fork_trampoline+0xe
12 100019 intr swi4: clock mi_switch+0x174 ithread_loop+0x216 fork_exit+0x11f fork_trampoline+0xe
12 100020 intr swi4: clock mi_switch+0x174 ithread_loop+0x216 fork_exit+0x11f fork_trampoline+0xe
12 100021 intr swi4: clock mi_switch+0x174 ithread_loop+0x216 fork_exit+0x11f fork_trampoline+0xe
12 100022 intr swi4: clock mi_switch+0x174 ithread_loop+0x216 fork_exit+0x11f fork_trampoline+0xe
12 100023 intr swi4: clock mi_switch+0x174 ithread_loop+0x216 fork_exit+0x11f fork_trampoline+0xe
12 100024 intr swi4: clock mi_switch+0x174 ithread_loop+0x216 fork_exit+0x11f fork_trampoline+0xe
12 100025 intr swi4: clock mi_switch+0x174 ithread_loop+0x216 fork_exit+0x11f fork_trampoline+0xe
12 100026 intr swi4: clock mi_switch+0x174 ithread_loop+0x216 fork_exit+0x11f fork_trampoline+0xe
12 100027 intr swi1: netisr 0 mi_switch+0x174 ithread_loop+0x216 fork_exit+0x11f fork_trampoline+0xe
12 100028 intr swi3: vm
12 100038 intr swi5: +
12 100041 intr swi6: Giant task mi_switch+0x174 ithread_loop+0x216 fork_exit+0x11f fork_trampoline+0xe
12 100042 intr swi6: task queue mi_switch+0x174 ithread_loop+0x216 fork_exit+0x11f fork_trampoline+0xe
12 100043 intr swi2: cambio mi_switch+0x174 ithread_loop+0x216 fork_exit+0x11f fork_trampoline+0xe
12 100044 intr irq256: bce0 mi_switch+0x174 ithread_loop+0x216 fork_exit+0x11f fork_trampoline+0xe
12 100045 intr irq257: bce1
12 100047 intr irq267: ciss0 mi_switch+0x174 ithread_loop+0x216 fork_exit+0x11f fork_trampoline+0xe
12 100049 intr irq268: ciss1 mi_switch+0x174 ithread_loop+0x216 fork_exit+0x11f fork_trampoline+0xe
12 100051 intr irq269: siis0 mi_switch+0x174 ithread_loop+0x216 fork_exit+0x11f fork_trampoline+0xe
12 100052 intr irq20: uhci0 ehc mi_switch+0x174 ithread_loop+0x216 fork_exit+0x11f fork_trampoline+0xe
12 100057 intr irq23: uhci1 uhc
12 100062 intr irq22: uhci2 uhc mi_switch+0x174 ithread_loop+0x216 fork_exit+0x11f fork_trampoline+0xe
12 100079 intr irq17: atapci0 mi_switch+0x174 ithread_loop+0x216 fork_exit+0x11f fork_trampoline+0xe
12 100080 intr irq270: ciss2 mi_switch+0x174 ithread_loop+0x216 fork_exit+0x11f fork_trampoline+0xe
12 100084 intr irq1: atkbd0 mi_switch+0x174 ithread_loop+0x216 fork_exit+0x11f fork_trampoline+0xe
12 100085 intr swi0: uart uart
12 100135 intr irq258: cxgbc0 mi_switch+0x174 ithread_loop+0x216 fork_exit+0x11f fork_trampoline+0xe
12 100136 intr irq259: cxgbc0 mi_switch+0x174 ithread_loop+0x216 fork_exit+0x11f fork_trampoline+0xe
12 100137 intr irq260: cxgbc0
12 100138 intr irq261: cxgbc0 mi_switch+0x174 ithread_loop+0x216 fork_exit+0x11f fork_trampoline+0xe
12 100139 intr irq262: cxgbc0 mi_switch+0x174 ithread_loop+0x216 fork_exit+0x11f fork_trampoline+0xe
12 100140 intr irq263: cxgbc0
12 100141 intr irq264: cxgbc0
12 100142 intr irq265: cxgbc0
12 100143 intr irq266: cxgbc0
12 100359 intr irq58: fct0 mi_switch+0x174 ithread_loop+0x216 fork_exit+0x11f fork_trampoline+0xe
12 100360 intr irq59: fct1 mi_switch+0x174 ithread_loop+0x216 fork_exit+0x11f fork_trampoline+0xe
13 100029 geom g_event mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 g_run_events+0x407 fork_exit+0x11f fork_trampoline+0xe
13 100030 geom g_up mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 g_io_schedule_up+0xd8 g_up_procbody+0x5c fork_exit+0x11f fork_trampoline+0xe
13 100031 geom g_down mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 g_io_schedule_down+0x20e g_down_procbody+0x5c fork_exit+0x11f fork_trampoline+0xe
14 100033 yarrow - mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 random_kthread+0x1e2 fork_exit+0x11f fork_trampoline+0xe
15 100053 usb usbus0 mi_switch+0x174 sleepq_wait+0x42 _cv_wait+0x129 usb_process+0x18b fork_exit+0x11f fork_trampoline+0xe
15 100054 usb usbus0 mi_switch+0x174 sleepq_wait+0x42 _cv_wait+0x129 usb_process+0x18b fork_exit+0x11f fork_trampoline+0xe
15 100055 usb usbus0 mi_switch+0x174 sleepq_wait+0x42 _cv_wait+0x129 usb_process+0x18b fork_exit+0x11f fork_trampoline+0xe
15 100056 usb usbus0 mi_switch+0x174 sleepq_wait+0x42 _cv_wait+0x129 usb_process+0x18b fork_exit+0x11f fork_trampoline+0xe
15 100058 usb usbus1 mi_switch+0x174 sleepq_wait+0x42 _cv_wait+0x129 usb_process+0x18b fork_exit+0x11f fork_trampoline+0xe
15 100059 usb usbus1 mi_switch+0x174 sleepq_wait+0x42 _cv_wait+0x129 usb_process+0x18b fork_exit+0x11f fork_trampoline+0xe
15 100060 usb usbus1 mi_switch+0x174 sleepq_wait+0x42 _cv_wait+0x129 usb_process+0x18b fork_exit+0x11f fork_trampoline+0xe
15 100061 usb usbus1 mi_switch+0x174 sleepq_wait+0x42 _cv_wait+0x129 usb_process+0x18b fork_exit+0x11f fork_trampoline+0xe
15 100063 usb usbus2 mi_switch+0x174 sleepq_wait+0x42 _cv_wait+0x129 usb_process+0x18b fork_exit+0x11f fork_trampoline+0xe
15 100064 usb usbus2 mi_switch+0x174 sleepq_wait+0x42 _cv_wait+0x129 usb_process+0x18b fork_exit+0x11f fork_trampoline+0xe
15 100065 usb usbus2 mi_switch+0x174 sleepq_wait+0x42 _cv_wait+0x129 usb_process+0x18b fork_exit+0x11f fork_trampoline+0xe
15 100066 usb usbus2 mi_switch+0x174 sleepq_wait+0x42 _cv_wait+0x129 usb_process+0x18b fork_exit+0x11f fork_trampoline+0xe
15 100067 usb usbus3 mi_switch+0x174 sleepq_wait+0x42 _cv_wait+0x129 usb_process+0x18b fork_exit+0x11f fork_trampoline+0xe
15 100068 usb usbus3 mi_switch+0x174 sleepq_wait+0x42 _cv_wait+0x129 usb_process+0x18b fork_exit+0x11f fork_trampoline+0xe
15 100069 usb usbus3 mi_switch+0x174 sleepq_wait+0x42 _cv_wait+0x129 usb_process+0x18b fork_exit+0x11f fork_trampoline+0xe
15 100070 usb usbus3 mi_switch+0x174 sleepq_wait+0x42 _cv_wait+0x129 usb_process+0x18b fork_exit+0x11f fork_trampoline+0xe
15 100071 usb usbus4 mi_switch+0x174 sleepq_wait+0x42 _cv_wait+0x129 usb_process+0x18b fork_exit+0x11f fork_trampoline+0xe
15 100072 usb usbus4 mi_switch+0x174 sleepq_wait+0x42 _cv_wait+0x129 usb_process+0x18b fork_exit+0x11f fork_trampoline+0xe
15 100073 usb usbus4 mi_switch+0x174 sleepq_wait+0x42 _cv_wait+0x129 usb_process+0x18b fork_exit+0x11f fork_trampoline+0xe
15 100074 usb usbus4 mi_switch+0x174 sleepq_wait+0x42 _cv_wait+0x129 usb_process+0x18b fork_exit+0x11f fork_trampoline+0xe
15 100075 usb usbus5 mi_switch+0x174 sleepq_wait+0x42 _cv_wait+0x129 usb_process+0x18b fork_exit+0x11f fork_trampoline+0xe
15 100076 usb usbus5 mi_switch+0x174 sleepq_wait+0x42 _cv_wait+0x129 usb_process+0x18b fork_exit+0x11f fork_trampoline+0xe
15 100077 usb usbus5 mi_switch+0x174 sleepq_wait+0x42 _cv_wait+0x129 usb_process+0x18b fork_exit+0x11f fork_trampoline+0xe
15 100078 usb usbus5 mi_switch+0x174 sleepq_wait+0x42 _cv_wait+0x129 usb_process+0x18b fork_exit+0x11f fork_trampoline+0xe
16 100082 acpi_thermal - mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 acpi_tz_thread+0x229 fork_exit+0x11f fork_trampoline+0xe
17 100083 acpi_cooling0 - mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 acpi_tz_cooling_thread+0xdb fork_exit+0x11f fork_trampoline+0xe
18 100091 pagezero - mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 vm_pagezero+0x83 fork_exit+0x11f fork_trampoline+0xe
19 100092 bufdaemon - mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 buf_daemon+0x1cb fork_exit+0x11f fork_trampoline+0xe
20 100093 syncer - mi_switch+0x174 sleepq_timedwait+0x42 _cv_timedwait+0x134 sched_sync+0x520 fork_exit+0x11f fork_trampoline+0xe
21 100094 vnlru - mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 vnlru_proc+0x636 fork_exit+0x11f fork_trampoline+0xe
22 100095 softdepflush - mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 softdep_flush+0x35f fork_exit+0x11f fork_trampoline+0xe
1624 100104 moused - mi_switch+0x174 sleepq_catch_signals+0x2f4 sleepq_wait_sig+0x16 _cv_wait_sig+0x128 seltdwait+0x110 kern_select+0x64d select+0x5d syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd
1648 100106 moused - mi_switch+0x174 sleepq_catch_signals+0x2f4 sleepq_wait_sig+0x16 _cv_wait_sig+0x128 seltdwait+0x110 kern_select+0x64d select+0x5d syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd
1680 100102 devd - mi_switch+0x174 sleepq_catch_signals+0x2f4 sleepq_wait_sig+0x16 _cv_wait_sig+0x128 seltdwait+0x110 kern_select+0x64d select+0x5d syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd
1919 100105 syslogd - mi_switch+0x174 sleepq_catch_signals+0x2f4 sleepq_wait_sig+0x16 _cv_wait_sig+0x128 seltdwait+0x110 kern_select+0x64d select+0x5d syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd
1943 100158 rpcbind - mi_switch+0x174 sleepq_catch_signals+0x2f4 sleepq_timedwait_sig+0x19 _cv_timedwait_sig+0x134 seltdwait+0x98 poll+0x478 syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd
2039 100164 mountd - mi_switch+0x174 sleepq_catch_signals+0x2f4 sleepq_wait_sig+0x16 _cv_wait_sig+0x128 seltdwait+0x110 kern_select+0x64d select+0x5d syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd
2048 100129 nfsd - mi_switch+0x174 sleepq_catch_signals+0x2f4 sleepq_wait_sig+0x16 _cv_wait_sig+0x128 seltdwait+0x110 kern_select+0x64d select+0x5d syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd
2049 100148 nfsd nfsd: master mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_run+0x8f nfsrvd_nfsd+0x92 nfssvc_nfsd+0x9b nfssvc+0x90 syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd
2049 101905 nfsd nfsd: service mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 nfsrvd_updatecache+0x75 nfssvc_program+0x464 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 102471 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 102554 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 103066 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 103521 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 104129 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 104274 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 104285 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 104290 nfsd nfsd: service mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 nfsrvd_updatecache+0x75 nfssvc_program+0x464 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 104298 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 104579 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 104675 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 104678 nfsd nfsd: service mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 nfsrvd_updatecache+0x75 nfssvc_program+0x464 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 104680 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 104682 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 104691 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 104743 nfsd nfsd: service mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 nfsrvd_updatecache+0x75 nfssvc_program+0x464 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 105134 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 105189 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 105278 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 105297 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 105485 nfsd nfsd: service mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 nfsrvd_updatecache+0x75 nfssvc_program+0x464 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 105488 nfsd nfsd: service mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 nfsrvd_updatecache+0x75 nfssvc_program+0x464 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 105630 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 105634 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 105635 nfsd nfsd: service mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 nfsrvd_updatecache+0x75 nfssvc_program+0x464 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 105639 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 105713 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 105798 nfsd nfsd: service mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 nfsrvd_updatecache+0x75 nfssvc_program+0x464 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 105817 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 105851 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 106248 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 106260 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 106279 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 106337 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 106351 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 106386 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 106462 nfsd nfsd: service mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 nfsrvd_updatecache+0x75 nfssvc_program+0x464 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 106826 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 106895 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 106898 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 106907 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 106919 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 106921 nfsd nfsd: service mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 nfsrvd_updatecache+0x75 nfssvc_program+0x464 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 107398 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 107435 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 107479 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 107503 nfsd nfsd: service mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 nfsrvd_updatecache+0x75 nfssvc_program+0x464 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 107520 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 107552 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 107731 nfsd nfsd: service mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 nfsrvd_updatecache+0x75 nfssvc_program+0x464 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 107835 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 108000 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 108004 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 108007 nfsd nfsd: service mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 nfsrvd_updatecache+0x75 nfssvc_program+0x464 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 108027 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 108029 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 108035 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 108083 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 108160 nfsd nfsd: service mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 nfsrvd_updatecache+0x75 nfssvc_program+0x464 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 108221 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 108235 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 108336 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 108376 nfsd nfsd: service mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 nfsrvd_updatecache+0x75 nfssvc_program+0x464 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 108488 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 108599 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 108605 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 108625 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 108741 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 108742 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 108748 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 108749 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 108769 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 108885 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 108901 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 108902 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 108908 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 108918 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 108920 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 109016 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 109033 nfsd nfsd: service mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 nfsrvd_updatecache+0x75 nfssvc_program+0x464 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 109038 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 109388 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 109508 nfsd nfsd: service mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 nfsrvd_updatecache+0x75 nfssvc_program+0x464 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 109877 nfsd nfsd: service mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 nfsrvd_updatecache+0x75 nfssvc_program+0x464 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 109934 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 109950 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 110451 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 110828 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 110984 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 111355 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 111615 nfsd nfsd: service mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 nfsrvd_updatecache+0x75 nfssvc_program+0x464 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 111887 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 111907 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 112216 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 112229 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 112428 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 112737 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 112776 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 112789 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 112933 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 112941 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 113251 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 113277 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 113284 nfsd nfsd: service mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 nfsrvd_updatecache+0x75 nfssvc_program+0x464 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 113290 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 113471 nfsd nfsd: service mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 nfsrvd_updatecache+0x75 nfssvc_program+0x464 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 113765 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 113819 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 114541 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 115084 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 115086 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 115327 nfsd nfsd: service mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 nfsrvd_updatecache+0x75 nfssvc_program+0x464 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 115331 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 115333 nfsd nfsd: service mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 nfsrvd_updatecache+0x75 nfssvc_program+0x464 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 115340 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 115342 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 115381 nfsd nfsd: service mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 nfsrvd_updatecache+0x75 nfssvc_program+0x464 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 115400 nfsd nfsd: service mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 nfsrvd_updatecache+0x75 nfssvc_program+0x464 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 115409 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 115640 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 115896 nfsd nfsd: service mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 nfsrvd_updatecache+0x75 nfssvc_program+0x464 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 116237 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 116789 nfsd nfsd: service mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 nfsrvd_updatecache+0x75 nfssvc_program+0x464 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 116890 nfsd nfsd: service mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 nfsrvd_updatecache+0x75 nfssvc_program+0x464 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 117299 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 117304 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 117319 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 117406 nfsd nfsd: service mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 nfsrvd_updatecache+0x75 nfssvc_program+0x464 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 117440 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 117446 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 117462 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 117473 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 117958 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 117988 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 117992 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 117994 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 117995 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 118016 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 118049 nfsd nfsd: service mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 nfsrvd_updatecache+0x75 nfssvc_program+0x464 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 118122 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 118299 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 118381 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 118480 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 118481 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 118489 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 118500 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 118683 nfsd nfsd: service mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 nfsrvd_updatecache+0x75 nfssvc_program+0x464 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 118918 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 118930 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 118931 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 118937 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 119081 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 119090 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 119119 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 119123 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 119124 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 119453 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 119470 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 119483 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 119567 nfsd nfsd: service mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 nfsrvd_updatecache+0x75 nfssvc_program+0x464 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 119619 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 119624 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 119635 nfsd nfsd: service mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 nfsrvd_updatecache+0x75 nfssvc_program+0x464 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 119803 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 120000 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 120012 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 120033 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 120044 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 120317 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 120377 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 120383 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 120487 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 120492 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 120503 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 120529 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 120530 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe
2049 120820 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec
nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe 2049 120846 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe 2049 120903 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe 2049 120908 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe 2049 120912 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe 2049 120914 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe 2049 121023 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe 2049 121025 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe 2049 121029 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe 2049 121031 nfsd nfsd: service mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 nfsrvd_updatecache+0x75 nfssvc_program+0x464 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe 2049 121032 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe 2049 121041 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe 2049 121042 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe 2049 121043 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe 2049 121044 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe 2049 121360 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe 2049 121361 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe 2049 121363 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb 
fork_exit+0x11f fork_trampoline+0xe 2049 121364 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe 2049 121365 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe 2049 121366 nfsd nfsd: service mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 nfsrvd_updatecache+0x75 nfssvc_program+0x464 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe 2049 121367 nfsd nfsd: service mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 nfsrvd_getcache+0x1ec nfssvc_program+0x423 svc_run_internal+0x6e9 svc_thread_start+0xb fork_exit+0x11f fork_trampoline+0xe 2211 100132 sshd - mi_switch+0x174 sleepq_catch_signals+0x2f4 sleepq_wait_sig+0x16 _cv_wait_sig+0x128 seltdwait+0x110 kern_select+0x64d select+0x5d syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd 2219 100153 sendmail - mi_switch+0x174 sleepq_catch_signals+0x2f4 sleepq_timedwait_sig+0x19 _cv_timedwait_sig+0x134 seltdwait+0x98 kern_select+0x64d select+0x5d syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd 2223 100157 sendmail - mi_switch+0x174 sleepq_catch_signals+0x2f4 sleepq_wait_sig+0x16 _sleep+0x269 kern_sigsuspend+0xbc sigsuspend+0x34 syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd 2230 100156 cron - mi_switch+0x174 sleepq_catch_signals+0x2f4 sleepq_timedwait_sig+0x19 _sleep+0x1b1 kern_nanosleep+0x118 nanosleep+0x6e syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd 2283 100099 inetd - mi_switch+0x174 sleepq_catch_signals+0x2f4 sleepq_wait_sig+0x16 _cv_wait_sig+0x128 seltdwait+0x110 kern_select+0x64d select+0x5d syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd 2326 100131 login - mi_switch+0x174 sleepq_catch_signals+0x2f4 sleepq_wait_sig+0x16 _sleep+0x269 kern_wait+0x6fd wait4+0x35 syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd 2327 100101 getty - mi_switch+0x174 sleepq_catch_signals+0x2f4 sleepq_wait_sig+0x16 _cv_wait_sig+0x128 tty_wait+0x25 ttydisc_read+0x2b1 ttydev_read+0x10f devfs_read_f+0x88 dofileread+0xa1 kern_readv+0x60 read+0x55 syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd 2328 100109 getty - mi_switch+0x174 sleepq_catch_signals+0x2f4 sleepq_wait_sig+0x16 _cv_wait_sig+0x128 tty_wait+0x25 ttydisc_read+0x2b1 ttydev_read+0x10f devfs_read_f+0x88 dofileread+0xa1 kern_readv+0x60 read+0x55 syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd 2329 100133 getty - mi_switch+0x174 sleepq_catch_signals+0x2f4 sleepq_wait_sig+0x16 _cv_wait_sig+0x128 tty_wait+0x25 ttydisc_read+0x2b1 ttydev_read+0x10f devfs_read_f+0x88 dofileread+0xa1 kern_readv+0x60 read+0x55 syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd 2330 100134 getty - mi_switch+0x174 sleepq_catch_signals+0x2f4 sleepq_wait_sig+0x16 _cv_wait_sig+0x128 tty_wait+0x25 ttydisc_read+0x2b1 ttydev_read+0x10f devfs_read_f+0x88 dofileread+0xa1 kern_readv+0x60 read+0x55 syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd 2331 100161 getty - mi_switch+0x174 sleepq_catch_signals+0x2f4 sleepq_wait_sig+0x16 _cv_wait_sig+0x128 tty_wait+0x25 ttydisc_read+0x2b1 ttydev_read+0x10f devfs_read_f+0x88 dofileread+0xa1 kern_readv+0x60 read+0x55 syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd 2332 100165 getty - mi_switch+0x174 sleepq_catch_signals+0x2f4 sleepq_wait_sig+0x16 _cv_wait_sig+0x128 tty_wait+0x25 ttydisc_read+0x2b1 ttydev_read+0x10f devfs_read_f+0x88 dofileread+0xa1 kern_readv+0x60 read+0x55 
syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd 2333 100130 getty - mi_switch+0x174 sleepq_catch_signals+0x2f4 sleepq_wait_sig+0x16 _cv_wait_sig+0x128 tty_wait+0x25 ttydisc_read+0x2b1 ttydev_read+0x10f devfs_read_f+0x88 dofileread+0xa1 kern_readv+0x60 read+0x55 syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd 2371 100152 sshd - mi_switch+0x174 sleepq_catch_signals+0x2f4 sleepq_wait_sig+0x16 _sleep+0x269 soreceive_generic+0x10f5 dofileread+0xa1 kern_readv+0x60 read+0x55 syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd 2373 100176 sshd - mi_switch+0x174 sleepq_catch_signals+0x2f4 sleepq_wait_sig+0x16 _cv_wait_sig+0x128 seltdwait+0x110 kern_select+0x64d select+0x5d syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd 2374 100163 sh - mi_switch+0x174 sleepq_catch_signals+0x2f4 sleepq_wait_sig+0x16 _sleep+0x269 kern_wait+0x6fd wait4+0x35 syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd 2377 100155 su - mi_switch+0x174 sleepq_catch_signals+0x2f4 sleepq_wait_sig+0x16 _sleep+0x269 kern_wait+0x6fd wait4+0x35 syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd 2378 100144 csh - mi_switch+0x174 sleepq_catch_signals+0x2f4 sleepq_wait_sig+0x16 _cv_wait_sig+0x128 tty_wait+0x25 ttydisc_read+0x2b1 ttydev_read+0x10f devfs_read_f+0x88 dofileread+0xa1 kern_readv+0x60 read+0x55 syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd 2383 100147 zfskern arc_reclaim_thre mi_switch+0x174 sleepq_timedwait+0x42 _cv_timedwait+0x134 arc_reclaim_thread+0x2a9 fork_exit+0x11f fork_trampoline+0xe 2383 100189 zfskern l2arc_feed_threa mi_switch+0x174 sleepq_timedwait+0x42 _cv_timedwait+0x134 l2arc_feed_thread+0x1ce fork_exit+0x11f fork_trampoline+0xe 2383 100664 zfskern txg_thread_enter mi_switch+0x174 sleepq_wait+0x42 _cv_wait+0x129 txg_thread_wait+0x79 txg_quiesce_thread+0xb5 fork_exit+0x11f fork_trampoline+0xe 2383 100665 zfskern txg_thread_enter mi_switch+0x174 sleepq_timedwait+0x42 _cv_timedwait+0x134 txg_thread_wait+0x3c txg_sync_thread+0x26e fork_exit+0x11f fork_trampoline+0xe 2383 104018 zfskern txg_thread_enter mi_switch+0x174 sleepq_wait+0x42 _cv_wait+0x129 txg_thread_wait+0x79 txg_quiesce_thread+0xb5 fork_exit+0x11f fork_trampoline+0xe 2383 104019 zfskern txg_thread_enter mi_switch+0x174 sleepq_timedwait+0x42 _cv_timedwait+0x134 txg_thread_wait+0x3c txg_sync_thread+0x26e fork_exit+0x11f fork_trampoline+0xe 2446 100151 sshd - mi_switch+0x174 sleepq_catch_signals+0x2f4 sleepq_wait_sig+0x16 _sleep+0x269 soreceive_generic+0x10f5 dofileread+0xa1 kern_readv+0x60 read+0x55 syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd 2448 100160 sshd - mi_switch+0x174 sleepq_catch_signals+0x2f4 sleepq_wait_sig+0x16 _cv_wait_sig+0x128 seltdwait+0x110 kern_select+0x64d select+0x5d syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd 2449 100097 sh - mi_switch+0x174 sleepq_catch_signals+0x2f4 sleepq_wait_sig+0x16 _sleep+0x269 kern_wait+0x6fd wait4+0x35 syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd 16213 100150 sshd - mi_switch+0x174 sleepq_catch_signals+0x2f4 sleepq_wait_sig+0x16 _sleep+0x269 soreceive_generic+0x10f5 dofileread+0xa1 kern_readv+0x60 read+0x55 syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd 16215 101688 sshd - mi_switch+0x174 sleepq_catch_signals+0x2f4 sleepq_wait_sig+0x16 _cv_wait_sig+0x128 seltdwait+0x110 kern_select+0x64d select+0x5d syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd 16216 100168 sh - mi_switch+0x174 sleepq_catch_signals+0x2f4 sleepq_wait_sig+0x16 _sleep+0x269 kern_wait+0x6fd wait4+0x35 syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd 16219 101474 su - mi_switch+0x174 
sleepq_catch_signals+0x2f4 sleepq_wait_sig+0x16 _sleep+0x269 kern_wait+0x6fd wait4+0x35 syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd 16220 101834 csh - mi_switch+0x174 sleepq_catch_signals+0x2f4 sleepq_wait_sig+0x16 _cv_wait_sig+0x128 tty_wait+0x25 ttydisc_read+0x2b1 ttydev_read+0x10f devfs_read_f+0x88 dofileread+0xa1 kern_readv+0x60 read+0x55 syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd 17045 100117 su - mi_switch+0x174 sleepq_catch_signals+0x2f4 sleepq_wait_sig+0x16 _sleep+0x269 kern_wait+0x6fd wait4+0x35 syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd 17046 100123 csh - mi_switch+0x174 sleepq_catch_signals+0x2f4 sleepq_wait_sig+0x16 _cv_wait_sig+0x128 tty_wait+0x25 ttydisc_read+0x2b1 ttydev_read+0x10f devfs_read_f+0x88 dofileread+0xa1 kern_readv+0x60 read+0x55 syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd 22683 101694 rlogind - mi_switch+0x174 sleepq_catch_signals+0x2f4 sleepq_wait_sig+0x16 _cv_wait_sig+0x128 seltdwait+0x110 kern_select+0x64d select+0x5d syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd 22684 100323 login - mi_switch+0x174 sleepq_catch_signals+0x2f4 sleepq_wait_sig+0x16 _sleep+0x269 kern_wait+0x6fd wait4+0x35 syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd 22685 102219 csh - mi_switch+0x174 sleepq_catch_signals+0x2f4 sleepq_wait_sig+0x16 _sleep+0x269 kern_sigsuspend+0xbc sigsuspend+0x34 syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd 22696 101809 bwm-ng - mi_switch+0x174 sleepq_catch_signals+0x2f4 sleepq_timedwait_sig+0x19 _cv_timedwait_sig+0x134 seltdwait+0x98 kern_select+0x64d select+0x5d syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd 33240 103853 sshd - mi_switch+0x174 sleepq_catch_signals+0x2f4 sleepq_wait_sig+0x16 _sleep+0x269 soreceive_generic+0x10f5 dofileread+0xa1 kern_readv+0x60 read+0x55 syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd 33242 101836 sshd - mi_switch+0x174 sleepq_catch_signals+0x2f4 sleepq_wait_sig+0x16 _cv_wait_sig+0x128 seltdwait+0x110 kern_select+0x64d select+0x5d syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd 33243 103735 sh - mi_switch+0x174 sleepq_catch_signals+0x2f4 sleepq_wait_sig+0x16 _cv_wait_sig+0x128 tty_wait+0x25 ttydisc_read+0x2b1 ttydev_read+0x10f devfs_read_f+0x88 dofileread+0xa1 kern_readv+0x60 read+0x55 syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd 33279 103833 fct0-worker - mi_switch+0x174 sleepq_timedwait+0x42 _cv_timedwait+0x134 _fusion_cv_wait_timeout+0x51 ifio_5fb65.a9c484151da25a9eb60ef9a6e7309d1a95f.2.2.3.76+0x90 fusion_kthread_wrapper+0x75 fork_exit+0x11f fork_trampoline+0xe 33281 103226 fct1-worker - mi_switch+0x174 sleepq_timedwait+0x42 _cv_timedwait+0x134 _fusion_cv_wait_timeout+0x51 ifio_5fb65.a9c484151da25a9eb60ef9a6e7309d1a95f.2.2.3.76+0x90 fusion_kthread_wrapper+0x75 fork_exit+0x11f fork_trampoline+0xe 33283 100172 fioa-data-groom - mi_switch+0x174 sleepq_timedwait+0x42 _cv_timedwait+0x134 _fusion_cv_wait_timeout+0x51 ifio_d838e.beb0c7e6bc48f6823d158232eb95367ecc8.2.2.3.76+0x85c fusion_kthread_wrapper+0x75 fork_exit+0x11f fork_trampoline+0xe 33284 101477 fio0-bio-submit - mi_switch+0x174 sleepq_timedwait+0x42 _cv_timedwait+0x134 _fusion_cv_wait_timeout+0x51 ifio_2dfe6.b17099ced75fb5a54fb23659d4191a15070.2.2.3.76+0xb9 fusion_kthread_wrapper+0x75 fork_exit+0x11f fork_trampoline+0xe 33285 102220 fiob-data-groom - mi_switch+0x174 sleepq_timedwait+0x42 _cv_timedwait+0x134 _fusion_cv_wait_timeout+0x51 ifio_d838e.beb0c7e6bc48f6823d158232eb95367ecc8.2.2.3.76+0x85c fusion_kthread_wrapper+0x75 fork_exit+0x11f fork_trampoline+0xe 33286 102218 fio1-bio-submit - mi_switch+0x174 
sleepq_timedwait+0x42 _cv_timedwait+0x134 _fusion_cv_wait_timeout+0x51 ifio_2dfe6.b17099ced75fb5a54fb23659d4191a15070.2.2.3.76+0xb9 fusion_kthread_wrapper+0x75 fork_exit+0x11f fork_trampoline+0xe 33689 103845 md0 - mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 md_kthread+0x215 fork_exit+0x11f fork_trampoline+0xe 33691 103843 md1 - mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 md_kthread+0x215 fork_exit+0x11f fork_trampoline+0xe 33693 101820 md2 - mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 md_kthread+0x215 fork_exit+0x11f fork_trampoline+0xe 33695 101797 md3 - mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 md_kthread+0x215 fork_exit+0x11f fork_trampoline+0xe 34215 100118 csh - mi_switch+0x174 sleepq_catch_signals+0x2f4 sleepq_wait_sig+0x16 _cv_wait_sig+0x128 tty_wait+0x25 ttydisc_read+0x2b1 ttydev_read+0x10f devfs_read_f+0x88 dofileread+0xa1 kern_readv+0x60 read+0x55 syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd 35749 101482 rlogind - mi_switch+0x174 sleepq_catch_signals+0x2f4 sleepq_wait_sig+0x16 _cv_wait_sig+0x128 seltdwait+0x110 kern_select+0x64d select+0x5d syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd 35750 101808 login - mi_switch+0x174 sleepq_catch_signals+0x2f4 sleepq_wait_sig+0x16 _sleep+0x269 kern_wait+0x6fd wait4+0x35 syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd 35751 101632 csh - mi_switch+0x174 sleepq_catch_signals+0x2f4 sleepq_wait_sig+0x16 _cv_wait_sig+0x128 tty_wait+0x25 ttydisc_read+0x2b1 ttydev_read+0x10f devfs_read_f+0x88 dofileread+0xa1 kern_readv+0x60 read+0x55 syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd 52810 101907 rlogind - mi_switch+0x174 sleepq_catch_signals+0x2f4 sleepq_wait_sig+0x16 _cv_wait_sig+0x128 seltdwait+0x110 kern_select+0x64d select+0x5d syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd 52811 101476 login - mi_switch+0x174 sleepq_catch_signals+0x2f4 sleepq_wait_sig+0x16 _sleep+0x269 kern_wait+0x6fd wait4+0x35 syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd 52812 101827 csh - mi_switch+0x174 sleepq_catch_signals+0x2f4 sleepq_wait_sig+0x16 _cv_wait_sig+0x128 tty_wait+0x25 ttydisc_read+0x2b1 ttydev_read+0x10f devfs_read_f+0x88 dofileread+0xa1 kern_readv+0x60 read+0x55 syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd 55688 101803 rlogind - mi_switch+0x174 sleepq_catch_signals+0x2f4 sleepq_wait_sig+0x16 _cv_wait_sig+0x128 seltdwait+0x110 kern_select+0x64d select+0x5d syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd 55689 101709 login - mi_switch+0x174 sleepq_catch_signals+0x2f4 sleepq_wait_sig+0x16 _sleep+0x269 kern_wait+0x6fd wait4+0x35 syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd 55690 101702 csh - mi_switch+0x174 sleepq_catch_signals+0x2f4 sleepq_wait_sig+0x16 _sleep+0x269 kern_sigsuspend+0xbc sigsuspend+0x34 syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd 55711 117466 script - mi_switch+0x174 sleepq_catch_signals+0x2f4 sleepq_timedwait_sig+0x19 _cv_timedwait_sig+0x134 seltdwait+0x98 kern_select+0x64d select+0x5d syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd 55712 100238 csh - mi_switch+0x174 sleepq_catch_signals+0x2f4 sleepq_wait_sig+0x16 _sleep+0x269 kern_sigsuspend+0xbc sigsuspend+0x34 syscallenter+0x2cf syscall+0x4b Xfast_syscall+0xdd 55718 106122 procstat -
servera#
servera# echo mount from other system attempted here
mount from other system attempted here
serverb# showmount -e servera
Exports list on servera:
/vol/datgen  172.21.21.0
/vol/datsrc  172.21.21.0
serverb# mount servera:/vol/datsrc /mnt
[tcp] servera:/vol/datsrc: NFSPROC_NULL: RPC: Timed out
^C
serverb#
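(Note what is failing here: the mount times out on NFSPROC_NULL, the no-op probe an NFS client sends before any real request, while showmount, which talks to mountd rather than to the nfsd threads, still answers. A quick client-side cross-check that the RPC layer itself is reachable, as a minimal sketch reusing the host names above:

# call procedure 0 of the NFS program over UDP; a responsive server
# lists its registered versions as "ready and waiting"
rpcinfo -u servera nfs

If this times out as well, the hang is in the nfsd service threads themselves rather than in rpcbind or the network path.)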
servera# nfsstat -s

Server Info:
Getattr Setattr Lookup Readlink Read Write Create Remove
0 0 4859875 16546194 0 0 0 0
Rename Link Symlink Mkdir Rmdir Readdir RdirPlus Access
0 -1523364522 0 990131252 0 0 0 0
Mknod Fsstat Fsinfo PathConf Commit
0 0 0 0 0
Server Ret-Failed
0
Server Faults
0
Server Cache Stats:
Inprog Idem Non-idem Misses
189710 0 154619 -14704992
Server Write Gathering:
WriteOps WriteRPC Opsaved
0 0 0
servera#

From owner-freebsd-fs@FreeBSD.ORG Thu Jun 9 14:11:35 2011
Date: Thu, 9 Jun 2011 10:11:29 -0400 (EDT)
From: Rick Macklem
To: John
Message-ID: <2125999069.328959.1307628689953.JavaMail.root@erie.cs.uoguelph.ca>
In-Reply-To: <20110609133805.GA78874@FreeBSD.org>
Cc: freebsd-fs@freebsd.org
Subject: Re: New NFS server stress test hang

John De wrote:
> ----- Rick Macklem's Original Message -----
> > John De wrote:
> > > Hi,
> > >
> > > We've been running some stress tests of the new nfs server.
> > > The system is at r222531 (head), 9 clients, two mounts each
> > > to the server:
> > >
> > > mount_nfs -o udp,nfsv3,rsize=32768,wsize=32768,noatime,nolockd,acregmin=1,acregmax=2,acdirmin=1,acdirmax=2,negnametimeo=2 ${servera}:/vol/datsrc /c/$servera/vol/datsrc
> > > mount_nfs -o udp,nfsv3,rsize=32768,wsize=32768,noatime,nolockd,acregmin=1,acregmax=2,acdirmin=1,acdirmax=2,negnametimeo=0 ${servera}:/vol/datgen /c/$servera/vol/datgen
> > >
> > > The system is still up & responsive, simply no nfs services
> > > are working. All (200) threads appear to be active, but not
> > > doing anything. The debugger is not compiled into this kernel.
> > > We can run any other tracing commands desired. We can also
> > > rebuild the kernel with the debugger enabled for any kernel
> > > debugging needed.
> > >
> > > --- long logs deleted ---
> >
> > How about a:
> > ps axHlww    <-- With the "H" we'll see what the nfsd server threads are up to
> > procstat -kka
> >
> > Oh, and a couple of nfsstats a few seconds apart. It's what the counts
> > are changing by that might tell us what is going on. (You can use "-z"
> > to zero them out, if you have an nfsstat built from recent sources.)
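(A minimal way to take those samples, assuming an nfsstat new enough to have "-z":

# zero the server-side counters, let the clients run for a few seconds,
# then dump them again; what matters is which counts moved in between
nfsstat -z
sleep 10
nfsstat -s

If nothing moves at all between the two samples, the service threads are wedged rather than merely slow.)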
> >
> > Also, does a new NFS mount attempt against the server do anything?
> >
> > Thanks in advance for help with this, rick
>
> Hi Rick,
>
> Here's the output. In general, the nfsd processes appear to be in
> either nfsrvd_getcache (35 instances) or nfsrvd_updatecache (164),
> sleeping on "nfsrc". The server numbers don't appear to be moving.
> A showmount from a client system works, but a mount does not (see below).

Ok, since all the nfsd threads are stuck sleeping on "nfsrc", I think it
means that there is a bug in the DRC code where an entry doesn't get
unlocked under some condition. I'll look into it and email you a patch
once I think I've figured it out. Although I can't be sure, I suspect it
is UDP specific.

Thanks for digging into this, rick

> The underlying zfs filesystem seems to be working fine:
>
> cd /vol/datsrc
> /usr/bin/time find . -type f | wc -l
> 1.82 real 0.29 user 1.52 sys
> 354429
>
> cd /vol/datgen
> /usr/bin/time find . -type f | wc -l
> 1.73 real 0.09 user 1.64 sys
> 153050
>
> Is there a way to tell what cache block or file the servers are
> trying to process?
>
> Thanks!
> John
>
> servera# nfsstat -s
>
> Server Info:
> Getattr Setattr Lookup Readlink Read Write Create Remove
> 0 0 4859875 16546194 0 0 0 0
> Rename Link Symlink Mkdir Rmdir Readdir RdirPlus Access
> 0 -1523364522 0 990131252 0 0 0 0
> Mknod Fsstat Fsinfo PathConf Commit
> 0 0 0 0 0
> Server Ret-Failed
> 0
> Server Faults
> 0
> Server Cache Stats:
> Inprog Idem Non-idem Misses
> 189710 0 154619 -14704992
> Server Write Gathering:
> WriteOps WriteRPC Opsaved
> 0 0 0
> servera# ps axHlww
> UID PID PPID CPU PRI NI VSZ RSS MWCHAN STAT TT TIME COMMAND
> 0 0 0 0 -16 0 0 5488 sched DLs ?? 3:14.67 [kernel] > 0 0 0 0 8 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 8 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 8 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 8 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 8 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 8 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 8 0 0 5488 - DLs ?? 0:00.16 [kernel] > 0 0 0 0 -92 0 0 5488 - DLs ?? 0:25.05 [kernel] > 0 0 0 0 -52 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -100 0 0 5488 - DLs ?? 0:01.54 [kernel] > 0 0 0 0 -100 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -100 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -100 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -100 0 0 5488 - DLs ?? 0:06.03 [kernel] > 0 0 0 0 -100 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -100 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -100 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -100 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -100 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -100 0 0 5488 - DLs ?? 0:02.17 [kernel] > 0 0 0 0 -100 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:32.59 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.04 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.01 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:01.95 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:01.97 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:01.96 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:01.98 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ??
0:02.47 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:01.95 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:01.95 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:02.07 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:01.95 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:01.95 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:01.95 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:01.95 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:02.04 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:02.10 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:02.02 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:01.86 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:01.95 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:01.98 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:02.01 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:02.12 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:01.97 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:02.01 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:02.03 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:02.07 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.34 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.35 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.33 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.36 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.33 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:01.09 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:01.10 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:01.10 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:01.10 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:01.10 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:01.10 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:01.10 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:01.10 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.20 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.20 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.19 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.20 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.19 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 
0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -8 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -8 0 0 5488 - DLs ?? 
0:00.00 [kernel] > 0 0 0 0 -8 0 0 5488 - DLs ?? 0:00.06 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:28.75 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 1:52.88 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 1:52.30 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 1:53.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 1:52.33 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 1:53.02 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 1:53.13 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 1:52.82 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 1:52.91 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 1:52.77 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 1:52.78 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 1:52.94 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 1:52.35 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 18:57.55 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 18:55.69 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 18:54.78 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 18:57.93 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 18:57.67 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 18:56.63 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 18:56.58 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 19:01.10 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 18:52.51 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 18:56.79 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 18:56.88 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 18:53.41 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 4:52.55 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 4:53.43 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 4:53.55 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 4:52.63 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 4:52.90 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 12:34.34 [kernel] > 0 0 0 0 -8 0 0 5488 - DLs ?? 0:06.63 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 12:34.55 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 12:33.56 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 12:34.88 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 12:33.80 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 12:33.34 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 12:35.15 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 12:33.78 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 8:03.35 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 8:02.70 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 8:02.64 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 8:03.57 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 8:02.73 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.01 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.17 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.96 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.31 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.57 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.23 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.47 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.12 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.59 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.57 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.70 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.85 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.21 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.21 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:08.71 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.55 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.21 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:08.91 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 
0:09.40 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:08.77 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:10.03 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.34 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:08.92 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.66 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:08.96 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.29 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.61 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:10.10 [kernel] > 0 0 0 0 -8 0 0 5488 - DLs ?? 0:01.29 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.25 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.67 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.39 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.66 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.14 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.19 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:08.89 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.30 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.35 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.37 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.16 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.08 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.45 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.84 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:08.86 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.39 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.71 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.38 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:08.83 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.24 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:08.74 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.43 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.28 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.02 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.32 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.48 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:10.30 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:08.79 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.23 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:08.80 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.10 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.18 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.12 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.14 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:08.89 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:08.71 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.11 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.20 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:10.07 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:08.96 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:08.46 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.33 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:10.11 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:08.99 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:08.79 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.80 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.02 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.25 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:08.75 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.33 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:08.92 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.44 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.78 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:08.79 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.42 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.21 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.23 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.37 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.57 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.05 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.33 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 
0:08.77 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.39 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.37 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.97 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.21 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.45 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.44 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:08.69 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.70 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.24 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.18 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 7:21.02 [kernel] > 0 0 0 0 -8 0 0 5488 - DLs ?? 0:02.23 [kernel] > 0 0 0 0 -8 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 1 0 0 46 0 6276 136 wait ILs ?? 0:00.03 /sbin/init -- > 0 2 0 0 -16 0 0 16 idle DL ?? 0:00.00 [ciss_notify0] > 0 3 0 0 -16 0 0 16 idle DL ?? 0:00.00 [ciss_notify1] > 0 4 0 0 -16 0 0 16 idle DL ?? 0:00.00 [ciss_notify2] > 0 5 0 0 -16 0 0 16 waitin DL ?? 0:00.00 [sctp_iterator] > 0 6 0 0 -16 0 0 16 ccb_sc DL ?? 0:02.28 [xpt_thrd] > 0 7 0 0 -16 0 0 16 gkt:wa DL ?? 0:13.28 [g_mp_kt] > 0 8 0 0 -16 0 0 16 psleep DL ?? 0:23.19 [pagedaemon] > 0 9 0 0 -16 0 0 16 psleep DL ?? 0:00.00 [vmdaemon] > 0 10 0 0 -16 0 0 16 audit_ DL ?? 0:00.00 [audit] > 0 11 0 0 155 0 0 192 - RL ?? 11904:17.58 [idle] > 0 11 0 0 155 0 0 192 - RL ?? 11896:58.10 [idle] > 0 11 0 0 155 0 0 192 - RL ?? 11940:00.33 [idle] > 0 11 0 0 155 0 0 192 - RL ?? 11976:07.78 [idle] > 0 11 0 0 155 0 0 192 - RL ?? 12018:44.19 [idle] > 0 11 0 0 155 0 0 192 - RL ?? 12058:52.25 [idle] > 0 11 0 0 155 0 0 192 - RL ?? 11736:53.91 [idle] > 0 11 0 0 155 0 0 192 - RL ?? 11826:27.92 [idle] > 0 11 0 0 155 0 0 192 - RL ?? 11896:29.94 [idle] > 0 11 0 0 155 0 0 192 - RL ?? 11944:07.66 [idle] > 0 11 0 0 155 0 0 192 - RL ?? 11991:25.71 [idle] > 0 11 0 0 155 0 0 192 - RL ?? 12012:04.00 [idle] > 0 12 0 0 -60 0 0 656 - WL ?? 2:03.90 [intr] > 0 12 0 0 -60 0 0 656 - WL ?? 0:05.97 [intr] > 0 12 0 0 -60 0 0 656 - WL ?? 0:03.14 [intr] > 0 12 0 0 -60 0 0 656 - WL ?? 0:02.84 [intr] > 0 12 0 0 -60 0 0 656 - WL ?? 0:00.32 [intr] > 0 12 0 0 -60 0 0 656 - WL ?? 0:00.23 [intr] > 0 12 0 0 -60 0 0 656 - WL ?? 0:18.60 [intr] > 0 12 0 0 -60 0 0 656 - WL ?? 0:01.36 [intr] > 0 12 0 0 -60 0 0 656 - WL ?? 0:00.59 [intr] > 0 12 0 0 -60 0 0 656 - WL ?? 0:00.37 [intr] > 0 12 0 0 -60 0 0 656 - WL ?? 0:00.28 [intr] > 0 12 0 0 -60 0 0 656 - WL ?? 0:00.21 [intr] > 0 12 0 0 -72 0 0 656 - WL ?? 0:00.00 [intr] > 0 12 0 0 -64 0 0 656 - WL ?? 0:00.00 [intr] > 0 12 0 0 -56 0 0 656 - WL ?? 0:00.00 [intr] > 0 12 0 0 -52 0 0 656 - WL ?? 0:00.18 [intr] > 0 12 0 0 -52 0 0 656 - WL ?? 0:00.03 [intr] > 0 12 0 0 -68 0 0 656 - WL ?? 7:32.04 [intr] > 0 12 0 0 -92 0 0 656 - WL ?? 1:14.25 [intr] > 0 12 0 0 -92 0 0 656 - WL ?? 0:00.00 [intr] > 0 12 0 0 -88 0 0 656 - WL ?? 0:03.19 [intr] > 0 12 0 0 -88 0 0 656 - WL ?? 1:08.71 [intr] > 0 12 0 0 -88 0 0 656 - WL ?? 0:06.46 [intr] > 0 12 0 0 -88 0 0 656 - WL ?? 0:00.02 [intr] > 0 12 0 0 -88 0 0 656 - WL ?? 0:00.00 [intr] > 0 12 0 0 -88 0 0 656 - WL ?? 0:00.00 [intr] > 0 12 0 0 -88 0 0 656 - WL ?? 18:16.96 [intr] > 0 12 0 0 -88 0 0 656 - WL ?? 2:11.85 [intr] > 0 12 0 0 -84 0 0 656 - WL ?? 0:00.15 [intr] > 0 12 0 0 -76 0 0 656 - WL ?? 0:00.00 [intr] > 0 12 0 0 -92 0 0 656 - WL ?? 0:00.00 [intr] > 0 12 0 0 -92 0 0 656 - WL ?? 1:36.16 [intr] > 0 12 0 0 -92 0 0 656 - WL ?? 0:00.00 [intr] > 0 12 0 0 -92 0 0 656 - WL ?? 
82:56.38 [intr] > 0 12 0 0 -92 0 0 656 - WL ?? 82:38.72 [intr] > 0 12 0 0 -92 0 0 656 - WL ?? 0:00.00 [intr] > 0 12 0 0 -92 0 0 656 - WL ?? 0:00.00 [intr] > 0 12 0 0 -92 0 0 656 - WL ?? 0:00.00 [intr] > 0 12 0 0 -92 0 0 656 - WL ?? 0:00.00 [intr] > 0 12 0 0 -88 0 0 656 - WL ?? 9:01.80 [intr] > 0 12 0 0 -88 0 0 656 - WL ?? 0:36.09 [intr] > 0 13 0 0 -8 0 0 48 - DL ?? 0:12.24 [geom] > 0 13 0 0 -8 0 0 48 - DL ?? 15:41.87 [geom] > 0 13 0 0 -8 0 0 48 - DL ?? 19:52.32 [geom] > 0 14 0 0 -16 0 0 16 - DL ?? 2:38.12 [yarrow] > 0 15 0 0 -68 0 0 384 - DL ?? 0:00.00 [usb] > 0 15 0 0 -72 0 0 384 - DL ?? 0:00.00 [usb] > 0 15 0 0 -68 0 0 384 - DL ?? 0:01.98 [usb] > 0 15 0 0 -68 0 0 384 - DL ?? 0:00.00 [usb] > 0 15 0 0 -68 0 0 384 - DL ?? 0:00.00 [usb] > 0 15 0 0 -72 0 0 384 - DL ?? 0:00.00 [usb] > 0 15 0 0 -68 0 0 384 - DL ?? 0:01.81 [usb] > 0 15 0 0 -68 0 0 384 - DL ?? 0:00.00 [usb] > 0 15 0 0 -68 0 0 384 - DL ?? 0:00.00 [usb] > 0 15 0 0 -72 0 0 384 - DL ?? 0:00.00 [usb] > 0 15 0 0 -68 0 0 384 - DL ?? 0:02.00 [usb] > 0 15 0 0 -68 0 0 384 - DL ?? 0:00.00 [usb] > 0 15 0 0 -68 0 0 384 - DL ?? 0:00.00 [usb] > 0 15 0 0 -72 0 0 384 - DL ?? 0:00.00 [usb] > 0 15 0 0 -68 0 0 384 - DL ?? 0:01.84 [usb] > 0 15 0 0 -68 0 0 384 - DL ?? 0:00.00 [usb] > 0 15 0 0 -68 0 0 384 - DL ?? 0:00.10 [usb] > 0 15 0 0 -72 0 0 384 - DL ?? 0:00.01 [usb] > 0 15 0 0 -68 0 0 384 - DL ?? 0:03.43 [usb] > 0 15 0 0 -68 0 0 384 - DL ?? 0:00.00 [usb] > 0 15 0 0 -68 0 0 384 - DL ?? 0:00.18 [usb] > 0 15 0 0 -72 0 0 384 - DL ?? 0:00.00 [usb] > 0 15 0 0 -68 0 0 384 - DL ?? 0:02.10 [usb] > 0 15 0 0 -68 0 0 384 - DL ?? 0:00.00 [usb] > 0 16 0 0 -16 0 0 16 tzpoll DL ?? 0:02.64 [acpi_thermal] > 0 17 0 0 -16 0 0 16 coolin DL ?? 0:00.27 [acpi_cooling0] > 0 18 0 0 155 0 0 16 pgzero DL ?? 0:00.01 [pagezero] > 0 19 0 0 -16 0 0 16 psleep DL ?? 0:01.61 [bufdaemon] > 0 20 0 0 16 0 0 16 syncer DL ?? 56:18.96 [syncer] > 0 21 0 0 -16 0 0 16 vlruwt DL ?? 0:02.32 [vnlru] > 0 22 0 0 -16 0 0 16 sdflus DL ?? 0:35.36 [softdepflush] > 0 1624 1 0 52 0 14364 296 select Is ?? 0:00.00 /usr/sbin/moused -p > /dev/ums0 -t auto -I /var/run/moused.ums0.pid > 0 1648 1 0 20 0 14364 468 select Is ?? 0:00.00 /usr/sbin/moused -p > /dev/ums1 -t auto -I /var/run/moused.ums1.pid > 0 1680 1 0 20 0 6276 504 select Is ?? 0:00.90 /sbin/devd > 0 1919 1 0 20 0 12312 632 select Ss ?? 0:01.62 /usr/sbin/syslogd -s > 0 1943 1 0 20 0 14392 776 select Ss ?? 0:00.35 /usr/sbin/rpcbind > 0 2039 1 0 20 0 12308 756 select Is ?? 0:00.58 /usr/sbin/mountd > /etc/exports /etc/zfs/exports > 0 2048 1 0 20 0 10052 324 select Is ?? 0:00.02 nfsd: master (nfsd) > 0 2049 2048 0 20 0 10052 3444 nfsrc D ?? 60:36.36 nfsd: server (nfsd) > 0 2049 2048 0 32 0 10052 3444 nfsrc D ?? 0:02.53 nfsd: server (nfsd) > 0 2049 2048 0 20 0 10052 3444 nfsrc D ?? 0:01.14 nfsd: server (nfsd) > 0 2049 2048 0 20 0 10052 3444 nfsrc D ?? 0:24.71 nfsd: server (nfsd) > 0 2049 2048 0 20 0 10052 3444 nfsrc D ?? 0:05.52 nfsd: server (nfsd) > 0 2049 2048 0 20 0 10052 3444 nfsrc D ?? 0:13.25 nfsd: server (nfsd) > 0 2049 2048 0 20 0 10052 3444 nfsrc D ?? 0:04.50 nfsd: server (nfsd) > 0 2049 2048 0 20 0 10052 3444 nfsrc D ?? 0:09.54 nfsd: server (nfsd) > 0 2049 2048 0 20 0 10052 3444 nfsrc D ?? 0:05.16 nfsd: server (nfsd) > 0 2049 2048 0 24 0 10052 3444 nfsrc D ?? 0:02.84 nfsd: server (nfsd) > 0 2049 2048 0 20 0 10052 3444 nfsrc D ?? 0:00.85 nfsd: server (nfsd) > 0 2049 2048 0 20 0 10052 3444 nfsrc D ?? 0:04.96 nfsd: server (nfsd) > 0 2049 2048 0 20 0 10052 3444 nfsrc D ?? 0:00.32 nfsd: server (nfsd) > 0 2049 2048 0 27 0 10052 3444 nfsrc D ?? 
0:05.27 nfsd: server (nfsd)
> [... the remainder of the (200) "nfsd: server (nfsd)" threads elided: every one of
> them is in state "D" blocked on wchan "nfsrc", mostly at priority 20 with CPU times
> between 0:00.05 and 0:32.48, plus a handful of longer-running threads at priority 52
> with times from 0:47.87 up to 3:26.99 ...]
> 0 2211 1 0 20 0 47000 1572 select Is ?? 0:00.00 /usr/sbin/sshd
> 0 2219 1 0 20 0 20508 1732 select Ss ?? 0:05.53 sendmail: accepting connections (sendmail)
> 25 2223 1 0 20 0 20508 1260 pause Is ?? 0:00.13 sendmail: Queue runner@00:30:00 for /var/spool/clientmqueue (sendmail)
> 0 2230 1 0 20 0 14260 640 nanslp Is ?? 0:02.51 /usr/sbin/cron -s
> 0 2283 1 0 20 0 16344 708 select Is ?? 0:00.03 /usr/sbin/inetd -wW -C 60
> 0 2383 0 0 -8 0 0 128 arc_re DL ?? 0:21.04 [zfskern]
> 0 2383 0 0 -8 0 0 128 l2arc_ DL ?? 49:07.10 [zfskern]
> 0 2383 0 0 -8 0 0 128 tx->tx DL ?? 13:05.43 [zfskern]
> [... three further [zfskern] tx->tx threads, several sshd session pairs, the
> rlogind/login/getty/sh/csh/su sessions, the fct/fio worker, data-groom and
> bio-submit threads ([fct0-worker] at 43:49.52), [md0] through [md3], bwm-ng,
> a script session and the "ps axHlww" itself elided ...]
> servera# procstat -kka
> PID TID COMM TDNAME KSTACK
> 0 100000 kernel swapper mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 scheduler+0x34a mi_startup+0x77 btext+0x2c
> [... several hundred kernel taskqueue threads elided: the firmware, kqueue,
> acpi_task, ffs_trim, thread, cxgbc0 and mca taskq threads, system_taskq_0 through
> system_taskq_11, and two full sets of ZFS worker threads (the zio_null, zio_read,
> zio_write, zio_free, zio_claim and zio_ioctl issue and intr queues, plus
> zfs_vn_rele_task and zil_clean); every one of them shows the same idle stack:
> mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc
> fork_exit+0x11f fork_trampoline+0xe ...]
> 0 102875 kernel zio_write_issue_ mi_switch+0x174 sleepq_wait+0x42 >
_sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f > fork_trampoline+0xe > 0 102881 kernel zio_write_issue_ mi_switch+0x174 sleepq_wait+0x42 > _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f > fork_trampoline+0xe > 0 102885 kernel zio_write_issue_ mi_switch+0x174 sleepq_wait+0x42 > _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f > fork_trampoline+0xe > 0 102902 kernel zio_write_issue_ mi_switch+0x174 sleepq_wait+0x42 > _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f > fork_trampoline+0xe > 0 102906 kernel zio_write_intr_0 mi_switch+0x174 sleepq_wait+0x42 > _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f > fork_trampoline+0xe > 0 102908 kernel zil_clean mi_switch+0x174 sleepq_wait+0x42 > _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f > fork_trampoline+0xe > 0 103330 kernel zio_write_intr_1 mi_switch+0x174 sleepq_wait+0x42 > _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f > fork_trampoline+0xe > 0 103333 kernel zio_write_intr_2 mi_switch+0x174 sleepq_wait+0x42 > _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f > fork_trampoline+0xe > 0 103334 kernel zio_write_intr_3 mi_switch+0x174 sleepq_wait+0x42 > _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f > fork_trampoline+0xe > 0 103338 kernel > > [Message truncated] From owner-freebsd-fs@FreeBSD.ORG Thu Jun 9 14:44:55 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 6685D106564A for ; Thu, 9 Jun 2011 14:44:55 +0000 (UTC) (envelope-from gtodd@bellanet.org) Received: from mail-ew0-f54.google.com (mail-ew0-f54.google.com [209.85.215.54]) by mx1.freebsd.org (Postfix) with ESMTP id 029CF8FC18 for ; Thu, 9 Jun 2011 14:44:54 +0000 (UTC) Received: by ewy1 with SMTP id 1so817060ewy.13 for ; Thu, 09 Jun 2011 07:44:54 -0700 (PDT) Received: by 10.213.106.3 with SMTP id v3mr2756500ebo.40.1307628940095; Thu, 09 Jun 2011 07:15:40 -0700 (PDT) Received: from wawanesa.iciti.ca (CPE0080c8f208a5-CM001371173cf8.cpe.net.cable.rogers.com [99.246.61.82]) by mx.google.com with ESMTPS id g48sm1478545eea.12.2011.06.09.07.15.38 (version=TLSv1/SSLv3 cipher=OTHER); Thu, 09 Jun 2011 07:15:38 -0700 (PDT) Message-ID: <4DF0D4F6.6020601@bellanet.org> Date: Thu, 09 Jun 2011 10:13:10 -0400 From: Graham Todd User-Agent: Mozilla/5.0 (X11; U; FreeBSD amd64; en-US; rv:1.9.2.13) Gecko/20110118 Thunderbird/3.1.7 MIME-Version: 1.0 To: freebsd-fs@freebsd.org References: In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Subject: Re: zfs snapshot management X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 09 Jun 2011 14:44:55 -0000 On 06/05/2011 22:32, Charles Sprickman wrote: > Hello all, > > I've been using a few different tools to manage zfs snapshots in different > scenarios. For local use, I've found that Ralf Engelschall's set of > scripts[1] that tie into the periodic(8) system work fairly well. I do > not use the amd portion since I am only working with zfs snapshots and I > don't see a need to actually re-mount the snapshots elsewhere for > recovery. The only limitation I find with this system is that for use on > a backups host the lack of a monthly or yearly retention period pretty > much rules it out. For local "oops" stuff though, it's great. 
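For readers who have not used them, Ralf's scripts hook into the nightly periodic(8) run: anything executable dropped into /usr/local/etc/periodic/daily/ is picked up automatically by the daily sweep. A toy stand-in to show the shape of such a hook (the script name, the daily_zfs_snapshot_enable knob and the "tank" pool are hypothetical, and this is not RSE's actual code):

  #!/bin/sh
  # /usr/local/etc/periodic/daily/400.zfs-snapshot  (hypothetical path/name)
  # Pull in /etc/periodic.conf settings the standard way.
  if [ -r /etc/defaults/periodic.conf ]; then
      . /etc/defaults/periodic.conf
      source_periodic_confs
  fi
  case "${daily_zfs_snapshot_enable:-NO}" in
  [Yy][Ee][Ss])
      echo ''
      echo 'Taking recursive daily ZFS snapshot:'
      # "tank" is illustrative; snapshot every dataset in the pool.
      zfs snapshot -r "tank@daily-$(date -u +%Y.%m.%d)"
      ;;
  esac

Enable it with daily_zfs_snapshot_enable="YES" in /etc/periodic.conf and the snapshot is taken during the normal daily run, alongside the other periodic output.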
FYI there's even a "port" of Ralf's tools (which I maintain). The scripts are pretty straightforward, so the port is rather superfluous, but some people like to use ports for policy reasons and the like.

To my mind the RSE snapshot tool excels at presenting a unified view of snapshots on a system that has a mixed UFS/ZFS filesystem layout. The script could fairly easily be modified to account for the different properties of UFS and ZFS, e.g. ZFS obviates the need for certain subcommands (like "snapshot visit", which does not need to mount anything when the snapshot being "visited" lives in a .zfs directory), and the retention period could be increased (to work around the limit on the number of snapshots a UFS filesystem can retain), but I'm not sure this would be all that useful. A better way to go might be to wrap a generic "snapshot" command around subcommands and related periodic scripts in a plugin-ish/modular way, to handle various mixtures of ZFS and UFS. Of course one day maybe "all our ufs are belong to zfs" (as ZVOLs), which might change things a bit. I'm not sure whether UFS/SU+J will affect anything regarding snapshots on UFS ... maybe they'll be faster to create?

Anyway, this was a very useful thread. Snapshots are great, and luckily there are lots of tools to choose from.

PS: Note to self: snapshots != backups :)

> For hosts acting as backups servers, I've been using Snapfilter[2] and
> some cobbled together stuff that rsyncs a bunch of hosts and tries to
> detect and notify on errors. Snapfilter simply is the zfs snapshot
> "sweeper" that periodically deletes snapshots that are outside the defined
> retention period(s).
>
> Since there seems to be a fair number of serious zfs users here, I was
> hoping for some further suggestions for use in either case. Any input is
> welcome...
>
> Thanks,
>
> Charles
>
> [1] - http://people.freebsd.org/~rse/snapshot/
> [2] - http://www.scottlu.com/Content/Snapfilter.html

From owner-freebsd-fs@FreeBSD.ORG Thu Jun 9 15:16:17 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 6FC60106567A for ; Thu, 9 Jun 2011 15:16:17 +0000 (UTC) (envelope-from tzim@tzim.net) Received: from orlith.tzim.net (unknown [IPv6:2001:41d0:2:1d32:21c:c0ff:fe82:92c6]) by mx1.freebsd.org (Postfix) with ESMTP id 0D6C88FC17 for ; Thu, 9 Jun 2011 15:16:17 +0000 (UTC) Received: from localhost ([127.0.0.1] helo=secure.tzim.net) by orlith.tzim.net with esmtp (Exim 4.76 (FreeBSD)) (envelope-from ) id 1QUgxz-0005BE-S3 for freebsd-fs@freebsd.org; Thu, 09 Jun 2011 17:16:15 +0200 MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 7bit Date: Thu, 09 Jun 2011 17:16:15 +0200 From: Arnaud Houdelette To: In-Reply-To: References: Message-ID: X-Sender: tzim@tzim.net User-Agent: RoundCube Webmail/0.5.3 Subject: Re: zfs snapshot management X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 09 Jun 2011 15:16:17 -0000

Hi! I use home-made scripts, as I found none that met my expectations. I used the following approach:

- Do 1 (or more) snapshots each day, named @`date -u +AUTO-%Y.%m.%d-%H.%M.%S-UTC` (no @daily, @weekly, @monthly ... snapshots).
- Run a cleanup script to destroy unneeded snapshots, using "zfs get -Hpo value creation $snap" to get snapshot creation times as raw seconds since the epoch, thus avoiding daylight-saving issues.

The cleanup script calculates the first snapshot to keep and deletes all older snapshots, then calculates the next snapshot to keep and deletes the 'in-betweens', and so on.
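A minimal sketch of this kind of cleanup pass, for readers who want to try the approach. The dataset name "tank/data" and the single KEEP_DAYS cutoff are illustrative assumptions (the real script thins snapshots in tiers rather than using one cutoff), but the AUTO- naming and the epoch-based "zfs get -Hpo value creation" comparison are as described above:

  #!/bin/sh
  # Sketch only: take one AUTO- snapshot, then destroy AUTO- snapshots
  # older than a cutoff.  FS and KEEP_DAYS are illustrative values.
  FS="tank/data"
  KEEP_DAYS=30

  # Daily snapshot, named in UTC exactly as described above.
  zfs snapshot "${FS}@AUTO-$(date -u +%Y.%m.%d-%H.%M.%S-UTC)"

  # Cutoff in seconds since the epoch; no local-time math, so DST never matters.
  cutoff=$(( $(date -u +%s) - KEEP_DAYS * 86400 ))

  # "zfs get -Hp" prints the creation property as a raw epoch number,
  # so a plain integer comparison decides what to destroy.
  zfs list -H -o name -t snapshot -r "$FS" | grep '@AUTO-' | while read -r snap; do
      created=$(zfs get -Hpo value creation "$snap")
      [ "$created" -lt "$cutoff" ] && zfs destroy "$snap"
  done

The epoch comparison is the part that sidesteps daylight-saving ambiguity: two snapshots taken an hour apart across a DST switch still compare correctly.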
I used these snapshots with a patched samba vfs_shadowcopy2 until samba35 broke the patch. I can provide the scripts if anyone wants them.

Arnaud Houdelette

On Sun, 5 Jun 2011 22:32:55 -0400 (EDT), Charles Sprickman wrote:
> Hello all,
>
> I've been using a few different tools to manage zfs snapshots in different
> scenarios. For local use, I've found that Ralf Engelschall's set of
> scripts[1] that tie into the periodic(8) system work fairly well. I do
> not use the amd portion since I am only working with zfs snapshots and I
> don't see a need to actually re-mount the snapshots elsewhere for
> recovery. The only limitation I find with this system is that for use on
> a backups host the lack of a monthly or yearly retention period pretty
> much rules it out. For local "oops" stuff though, it's great.
>
> For hosts acting as backups servers, I've been using Snapfilter[2] and
> some cobbled together stuff that rsyncs a bunch of hosts and tries to
> detect and notify on errors. Snapfilter simply is the zfs snapshot
> "sweeper" that periodically deletes snapshots that are outside the defined
> retention period(s).
>
> Since there seems to be a fair number of serious zfs users here, I was
> hoping for some further suggestions for use in either case. Any input is
> welcome...
>
> Thanks,
>
> Charles
>
> [1] - http://people.freebsd.org/~rse/snapshot/
> [2] - http://www.scottlu.com/Content/Snapfilter.html
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"

From owner-freebsd-fs@FreeBSD.ORG Thu Jun 9 16:16:52 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id E7EC3106564A for ; Thu, 9 Jun 2011 16:16:52 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) Received: from esa-jnhn.mail.uoguelph.ca (esa-jnhn.mail.uoguelph.ca [131.104.91.44]) by mx1.freebsd.org (Postfix) with ESMTP id 54D638FC15 for ; Thu, 9 Jun 2011 16:16:50 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: AqAEAJjx8E2DaFvO/2dsb2JhbABOAQQbhC6iaYhxriCRBoM0AYFkgQoEkSqPbg X-IronPort-AV: E=Sophos;i="4.65,342,1304308800"; d="scan'208";a="127377547" Received: from erie.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.206]) by esa-jnhn-pri.mail.uoguelph.ca with ESMTP; 09 Jun 2011 12:16:49 -0400 Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id BD8E1B3F07; Thu, 9 Jun 2011 12:16:49 -0400 (EDT) Date: Thu, 9 Jun 2011 12:16:49 -0400 (EDT) From: Rick Macklem To: John Message-ID: <1069270455.338453.1307636209760.JavaMail.root@erie.cs.uoguelph.ca> In-Reply-To: <20110609133805.GA78874@FreeBSD.org> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="----=_Part_338452_1908611442.1307636209756" X-Originating-IP: [172.17.91.201] X-Mailer: Zimbra 6.0.10_GA_2692 (ZimbraWebClient - IE7 (Win)/6.0.10_GA_2692) Cc: freebsd-fs@freebsd.org Subject: Re: New NFS server stress test hang X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 09 Jun 2011 16:16:53 -0000
------=_Part_338452_1908611442.1307636209756 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit

John De wrote:
> ----- Rick Macklem's Original Message -----
> > John De wrote:
> > > Hi,
> > >
> > > We've been running some stress tests of the new nfs server.
> > > The system is at r222531 (head), 9 clients, two mounts each
> > > to the server:
> > >
> > > mount_nfs -o udp,nfsv3,rsize=32768,wsize=32768,noatime,nolockd,acregmin=1,acregmax=2,acdirmin=1,acdirmax=2,negnametimeo=2 ${servera}:/vol/datsrc /c/$servera/vol/datsrc
> > > mount_nfs -o udp,nfsv3,rsize=32768,wsize=32768,noatime,nolockd,acregmin=1,acregmax=2,acdirmin=1,acdirmax=2,negnametimeo=0 ${servera}:/vol/datgen /c/$servera/vol/datgen
> > >
> > > The system is still up & responsive; simply no nfs services are
> > > working. All (200) threads appear to be active, but not doing anything.
> > > The debugger is not compiled into this kernel. We can run any other
> > > tracing commands desired. We can also rebuild the kernel with the
> > > debugger enabled for any kernel debugging needed.
> > >
> > > --- long logs deleted ---
> >
> > How about a:
> > ps axHlww <-- With the "H" we'll see what the nfsd server threads are up to
> > procstat -kka
> >
> > Oh, and a couple of nfsstats a few seconds apart. It's what the counts
> > are changing by that might tell us what is going on. (You can use "-z"
> > to zero them out, if you have an nfsstat built from recent sources.)
> >
> > Also, does a new NFS mount attempt against the server do anything?
> >
> > Thanks in advance for help with this, rick
>
> Hi Rick,
>
> Here's the output. In general, the nfsd processes appear to be in either
> nfsrvd_getcache (35 instances) or nfsrvd_updatecache (164), sleeping on
> "nfsrc". The server numbers don't appear to be moving. A showmount from a
> client system works, but a mount does not (see below).

Please try the attached patch and let me know if it helps. When I looked, I found several places where the rc_flag variable was being fiddled with without the mutex held. I suspect one of these resulted in the RC_LOCKED flag not getting cleared, so all the threads got stuck waiting on it.

The patch is at: http://people.freebsd.org/~rmacklem/cache.patch in case it gets eaten by the list handler.

Thanks for digging into this, rick
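For anyone following along who wants to test the fix, the usual drill is roughly the following; the patch URL is the one Rick gives above, while the source path, strip level and kernel config name are illustrative assumptions:

  # Fetch the patch and rebuild the kernel from a head checkout.
  fetch -o /tmp/cache.patch http://people.freebsd.org/~rmacklem/cache.patch
  cd /usr/src                   # assumes sources at the default location
  patch < /tmp/cache.patch      # strip level may need adjusting (-p0/-p1)
  make buildkernel KERNCONF=GENERIC
  make installkernel KERNCONF=GENERIC
  shutdown -r now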
> The underlying zfs filesystem seems to be working fine:
>
> cd /vol/datsrc
> /usr/bin/time find . -type f | wc -l
> 1.82 real 0.29 user 1.52 sys
> 354429
>
> cd /vol/datgen
> /usr/bin/time find . -type f | wc -l
> 1.73 real 0.09 user 1.64 sys
> 153050
>
> Is there a way to tell what cache block or file the servers are
> trying to process?
>
> Thanks!
> John
>
> servera# nfsstat -s
>
> Server Info:
> Getattr Setattr Lookup Readlink Read Write Create Remove
> 0 0 4859875 16546194 0 0 0 0
> Rename Link Symlink Mkdir Rmdir Readdir RdirPlus Access
> 0 -1523364522 0 990131252 0 0 0 0
> Mknod Fsstat Fsinfo PathConf Commit
> 0 0 0 0 0
> Server Ret-Failed
> 0
> Server Faults
> 0
> Server Cache Stats:
> Inprog Idem Non-idem Misses
> 189710 0 154619 -14704992
> Server Write Gathering:
> WriteOps WriteRPC Opsaved
> 0 0 0
> servera# ps axHlww
> UID PID PPID CPU PRI NI VSZ RSS MWCHAN STAT TT TIME COMMAND
> 0 0 0 0 -16 0 0 5488 sched DLs ?? 3:14.67 [kernel]
> [... a few hundred further PID-0 "[kernel]" threads elided: all in state "DLs" at
> priorities between -100 and 8, most showing a couple of seconds or less of CPU
> time, apart from a few busier ones (0:25.05, 0:28.75, 0:32.59), a group of twelve
> at roughly 1:52 each, another twelve at roughly 18:56 each, and a group at about
> 4:53 each that is cut off where the archived message ends ...]
12:34.34 [kernel] > 0 0 0 0 -8 0 0 5488 - DLs ?? 0:06.63 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 12:34.55 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 12:33.56 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 12:34.88 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 12:33.80 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 12:33.34 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 12:35.15 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 12:33.78 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 8:03.35 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 8:02.70 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 8:02.64 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 8:03.57 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 8:02.73 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.01 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.17 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.96 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.31 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.57 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.23 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.47 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.12 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.59 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.57 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.70 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.85 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.21 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.21 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:08.71 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.55 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.21 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:08.91 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.40 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:08.77 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:10.03 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.34 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:08.92 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.66 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:08.96 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.29 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.61 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:10.10 [kernel] > 0 0 0 0 -8 0 0 5488 - DLs ?? 0:01.29 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.25 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.67 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.39 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.66 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.14 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.19 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:08.89 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.30 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.35 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.37 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.16 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.08 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.45 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.84 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:08.86 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.39 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.71 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.38 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:08.83 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.24 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:08.74 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.43 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.28 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.02 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.32 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.48 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:10.30 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:08.79 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.23 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 
0:08.80 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.10 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.18 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.12 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.14 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:08.89 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:08.71 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.11 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.20 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:10.07 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:08.96 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:08.46 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.33 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:10.11 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:08.99 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:08.79 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.80 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.02 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.25 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:08.75 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.33 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:08.92 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.44 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.78 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:08.79 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.42 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.21 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.23 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.37 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.57 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.05 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.33 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:08.77 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.39 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.37 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.97 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.21 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.45 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.44 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:08.69 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.70 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.24 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:09.18 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 0 0 0 -16 0 0 5488 - DLs ?? 7:21.02 [kernel] > 0 0 0 0 -8 0 0 5488 - DLs ?? 0:02.23 [kernel] > 0 0 0 0 -8 0 0 5488 - DLs ?? 0:00.00 [kernel] > 0 1 0 0 46 0 6276 136 wait ILs ?? 0:00.03 /sbin/init -- > 0 2 0 0 -16 0 0 16 idle DL ?? 0:00.00 [ciss_notify0] > 0 3 0 0 -16 0 0 16 idle DL ?? 0:00.00 [ciss_notify1] > 0 4 0 0 -16 0 0 16 idle DL ?? 0:00.00 [ciss_notify2] > 0 5 0 0 -16 0 0 16 waitin DL ?? 0:00.00 [sctp_iterator] > 0 6 0 0 -16 0 0 16 ccb_sc DL ?? 0:02.28 [xpt_thrd] > 0 7 0 0 -16 0 0 16 gkt:wa DL ?? 0:13.28 [g_mp_kt] > 0 8 0 0 -16 0 0 16 psleep DL ?? 0:23.19 [pagedaemon] > 0 9 0 0 -16 0 0 16 psleep DL ?? 0:00.00 [vmdaemon] > 0 10 0 0 -16 0 0 16 audit_ DL ?? 0:00.00 [audit] > 0 11 0 0 155 0 0 192 - RL ?? 11904:17.58 [idle] > 0 11 0 0 155 0 0 192 - RL ?? 11896:58.10 [idle] > 0 11 0 0 155 0 0 192 - RL ?? 11940:00.33 [idle] > 0 11 0 0 155 0 0 192 - RL ?? 11976:07.78 [idle] > 0 11 0 0 155 0 0 192 - RL ?? 12018:44.19 [idle] > 0 11 0 0 155 0 0 192 - RL ?? 12058:52.25 [idle] > 0 11 0 0 155 0 0 192 - RL ?? 11736:53.91 [idle] > 0 11 0 0 155 0 0 192 - RL ?? 11826:27.92 [idle] > 0 11 0 0 155 0 0 192 - RL ?? 11896:29.94 [idle] > 0 11 0 0 155 0 0 192 - RL ?? 11944:07.66 [idle] > 0 11 0 0 155 0 0 192 - RL ?? 11991:25.71 [idle] > 0 11 0 0 155 0 0 192 - RL ?? 
12012:04.00 [idle] > 0 12 0 0 -60 0 0 656 - WL ?? 2:03.90 [intr] > 0 12 0 0 -60 0 0 656 - WL ?? 0:05.97 [intr] > 0 12 0 0 -60 0 0 656 - WL ?? 0:03.14 [intr] > 0 12 0 0 -60 0 0 656 - WL ?? 0:02.84 [intr] > 0 12 0 0 -60 0 0 656 - WL ?? 0:00.32 [intr] > 0 12 0 0 -60 0 0 656 - WL ?? 0:00.23 [intr] > 0 12 0 0 -60 0 0 656 - WL ?? 0:18.60 [intr] > 0 12 0 0 -60 0 0 656 - WL ?? 0:01.36 [intr] > 0 12 0 0 -60 0 0 656 - WL ?? 0:00.59 [intr] > 0 12 0 0 -60 0 0 656 - WL ?? 0:00.37 [intr] > 0 12 0 0 -60 0 0 656 - WL ?? 0:00.28 [intr] > 0 12 0 0 -60 0 0 656 - WL ?? 0:00.21 [intr] > 0 12 0 0 -72 0 0 656 - WL ?? 0:00.00 [intr] > 0 12 0 0 -64 0 0 656 - WL ?? 0:00.00 [intr] > 0 12 0 0 -56 0 0 656 - WL ?? 0:00.00 [intr] > 0 12 0 0 -52 0 0 656 - WL ?? 0:00.18 [intr] > 0 12 0 0 -52 0 0 656 - WL ?? 0:00.03 [intr] > 0 12 0 0 -68 0 0 656 - WL ?? 7:32.04 [intr] > 0 12 0 0 -92 0 0 656 - WL ?? 1:14.25 [intr] > 0 12 0 0 -92 0 0 656 - WL ?? 0:00.00 [intr] > 0 12 0 0 -88 0 0 656 - WL ?? 0:03.19 [intr] > 0 12 0 0 -88 0 0 656 - WL ?? 1:08.71 [intr] > 0 12 0 0 -88 0 0 656 - WL ?? 0:06.46 [intr] > 0 12 0 0 -88 0 0 656 - WL ?? 0:00.02 [intr] > 0 12 0 0 -88 0 0 656 - WL ?? 0:00.00 [intr] > 0 12 0 0 -88 0 0 656 - WL ?? 0:00.00 [intr] > 0 12 0 0 -88 0 0 656 - WL ?? 18:16.96 [intr] > 0 12 0 0 -88 0 0 656 - WL ?? 2:11.85 [intr] > 0 12 0 0 -84 0 0 656 - WL ?? 0:00.15 [intr] > 0 12 0 0 -76 0 0 656 - WL ?? 0:00.00 [intr] > 0 12 0 0 -92 0 0 656 - WL ?? 0:00.00 [intr] > 0 12 0 0 -92 0 0 656 - WL ?? 1:36.16 [intr] > 0 12 0 0 -92 0 0 656 - WL ?? 0:00.00 [intr] > 0 12 0 0 -92 0 0 656 - WL ?? 82:56.38 [intr] > 0 12 0 0 -92 0 0 656 - WL ?? 82:38.72 [intr] > 0 12 0 0 -92 0 0 656 - WL ?? 0:00.00 [intr] > 0 12 0 0 -92 0 0 656 - WL ?? 0:00.00 [intr] > 0 12 0 0 -92 0 0 656 - WL ?? 0:00.00 [intr] > 0 12 0 0 -92 0 0 656 - WL ?? 0:00.00 [intr] > 0 12 0 0 -88 0 0 656 - WL ?? 9:01.80 [intr] > 0 12 0 0 -88 0 0 656 - WL ?? 0:36.09 [intr] > 0 13 0 0 -8 0 0 48 - DL ?? 0:12.24 [geom] > 0 13 0 0 -8 0 0 48 - DL ?? 15:41.87 [geom] > 0 13 0 0 -8 0 0 48 - DL ?? 19:52.32 [geom] > 0 14 0 0 -16 0 0 16 - DL ?? 2:38.12 [yarrow] > 0 15 0 0 -68 0 0 384 - DL ?? 0:00.00 [usb] > 0 15 0 0 -72 0 0 384 - DL ?? 0:00.00 [usb] > 0 15 0 0 -68 0 0 384 - DL ?? 0:01.98 [usb] > 0 15 0 0 -68 0 0 384 - DL ?? 0:00.00 [usb] > 0 15 0 0 -68 0 0 384 - DL ?? 0:00.00 [usb] > 0 15 0 0 -72 0 0 384 - DL ?? 0:00.00 [usb] > 0 15 0 0 -68 0 0 384 - DL ?? 0:01.81 [usb] > 0 15 0 0 -68 0 0 384 - DL ?? 0:00.00 [usb] > 0 15 0 0 -68 0 0 384 - DL ?? 0:00.00 [usb] > 0 15 0 0 -72 0 0 384 - DL ?? 0:00.00 [usb] > 0 15 0 0 -68 0 0 384 - DL ?? 0:02.00 [usb] > 0 15 0 0 -68 0 0 384 - DL ?? 0:00.00 [usb] > 0 15 0 0 -68 0 0 384 - DL ?? 0:00.00 [usb] > 0 15 0 0 -72 0 0 384 - DL ?? 0:00.00 [usb] > 0 15 0 0 -68 0 0 384 - DL ?? 0:01.84 [usb] > 0 15 0 0 -68 0 0 384 - DL ?? 0:00.00 [usb] > 0 15 0 0 -68 0 0 384 - DL ?? 0:00.10 [usb] > 0 15 0 0 -72 0 0 384 - DL ?? 0:00.01 [usb] > 0 15 0 0 -68 0 0 384 - DL ?? 0:03.43 [usb] > 0 15 0 0 -68 0 0 384 - DL ?? 0:00.00 [usb] > 0 15 0 0 -68 0 0 384 - DL ?? 0:00.18 [usb] > 0 15 0 0 -72 0 0 384 - DL ?? 0:00.00 [usb] > 0 15 0 0 -68 0 0 384 - DL ?? 0:02.10 [usb] > 0 15 0 0 -68 0 0 384 - DL ?? 0:00.00 [usb] > 0 16 0 0 -16 0 0 16 tzpoll DL ?? 0:02.64 [acpi_thermal] > 0 17 0 0 -16 0 0 16 coolin DL ?? 0:00.27 [acpi_cooling0] > 0 18 0 0 155 0 0 16 pgzero DL ?? 0:00.01 [pagezero] > 0 19 0 0 -16 0 0 16 psleep DL ?? 0:01.61 [bufdaemon] > 0 20 0 0 16 0 0 16 syncer DL ?? 56:18.96 [syncer] > 0 21 0 0 -16 0 0 16 vlruwt DL ?? 0:02.32 [vnlru] > 0 22 0 0 -16 0 0 16 sdflus DL ?? 
> --- userland daemon lines (moused, devd, syslogd, rpcbind) deleted ---
>   0 2039    1 0 20 0 12308  756 select Is ??  0:00.58 /usr/sbin/mountd /etc/exports /etc/zfs/exports
>   0 2048    1 0 20 0 10052  324 select Is ??  0:00.02 nfsd: master (nfsd)
>   0 2049 2048 0 20 0 10052 3444 nfsrc  D  ?? 60:36.36 nfsd: server (nfsd)
>   0 2049 2048 0 32 0 10052 3444 nfsrc  D  ??  0:02.53 nfsd: server (nfsd)
>   0 2049 2048 0 20 0 10052 3444 nfsrc  D  ??  0:01.14 nfsd: server (nfsd)
> --- remaining nfsd: server thread lines deleted; every one of them is
> in state D with MWCHAN "nfsrc" ---
> --- sshd, sendmail, cron, and inetd lines deleted ---
>   0 2383    0 0  -8 0     0  128 arc_re DL ??  0:21.04 [zfskern]
>   0 2383    0 0  -8 0     0  128 l2arc_ DL ?? 49:07.10 [zfskern]
>   0 2383    0 0  -8 0     0  128 tx->tx DL ??  0:00.79 [zfskern]
>   0 2383    0 0  -8 0     0  128 tx->tx DL ?? 13:05.43 [zfskern]
> --- rlogind, fusion-io ([fct*-worker], [fio*]), and [md*] lines
> deleted ---
> --- login, getty, su, shell, and tty session lines deleted ---
>   0 55717 55712 0 20 0 14328 2040 -      R+   8  0:00.00 ps axHlww
>
> servera# procstat -kka
>   PID    TID COMM             TDNAME           KSTACK
>     0 100000 kernel           swapper          mi_switch+0x174 sleepq_timedwait+0x42 _sleep+0x301 scheduler+0x34a mi_startup+0x77 btext+0x2c
>     0 100032 kernel           firmware taskq   mi_switch+0x174 sleepq_wait+0x42 _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f fork_trampoline+0xe
> --- several hundred further taskqueue-thread stacks (acpi_task_*,
> system_taskq_*, zio_null_*, zio_read_*, zio_write_*, zio_free_*,
> zio_claim_*, zio_ioctl_*, zfs_vn_rele_task, zil_clean, and others)
> deleted; all of them are asleep in taskqueue_thread_loop ---
>     0 102827
kernel zio_read_intr_10 mi_switch+0x174 sleepq_wait+0x42 > _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f > fork_trampoline+0xe > 0 102829 kernel zio_read_intr_11 mi_switch+0x174 sleepq_wait+0x42 > _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f > fork_trampoline+0xe > 0 102832 kernel zio_write_issue_ mi_switch+0x174 sleepq_wait+0x42 > _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f > fork_trampoline+0xe > 0 102833 kernel zio_write_issue_ mi_switch+0x174 sleepq_wait+0x42 > _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f > fork_trampoline+0xe > 0 102834 kernel zio_write_issue_ mi_switch+0x174 sleepq_wait+0x42 > _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f > fork_trampoline+0xe > 0 102836 kernel zio_write_issue_ mi_switch+0x174 sleepq_wait+0x42 > _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f > fork_trampoline+0xe > 0 102838 kernel zio_write_issue_ mi_switch+0x174 sleepq_wait+0x42 > _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f > fork_trampoline+0xe > 0 102846 kernel zio_write_issue_ mi_switch+0x174 sleepq_wait+0x42 > _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f > fork_trampoline+0xe > 0 102847 kernel zio_write_issue_ mi_switch+0x174 sleepq_wait+0x42 > _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f > fork_trampoline+0xe > 0 102849 kernel zio_write_issue_ mi_switch+0x174 sleepq_wait+0x42 > _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f > fork_trampoline+0xe > 0 102857 kernel zio_write_issue_ mi_switch+0x174 sleepq_wait+0x42 > _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f > fork_trampoline+0xe > 0 102859 kernel zio_write_issue_ mi_switch+0x174 sleepq_wait+0x42 > _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f > fork_trampoline+0xe > 0 102864 kernel zio_write_issue_ mi_switch+0x174 sleepq_wait+0x42 > _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f > fork_trampoline+0xe > 0 102869 kernel zio_write_issue_ mi_switch+0x174 sleepq_wait+0x42 > _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f > fork_trampoline+0xe > 0 102874 kernel zio_write_issue_ mi_switch+0x174 sleepq_wait+0x42 > _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f > fork_trampoline+0xe > 0 102875 kernel zio_write_issue_ mi_switch+0x174 sleepq_wait+0x42 > _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f > fork_trampoline+0xe > 0 102881 kernel zio_write_issue_ mi_switch+0x174 sleepq_wait+0x42 > _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f > fork_trampoline+0xe > 0 102885 kernel zio_write_issue_ mi_switch+0x174 sleepq_wait+0x42 > _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f > fork_trampoline+0xe > 0 102902 kernel zio_write_issue_ mi_switch+0x174 sleepq_wait+0x42 > _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f > fork_trampoline+0xe > 0 102906 kernel zio_write_intr_0 mi_switch+0x174 sleepq_wait+0x42 > _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f > fork_trampoline+0xe > 0 102908 kernel zil_clean mi_switch+0x174 sleepq_wait+0x42 > _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f > fork_trampoline+0xe > 0 103330 kernel zio_write_intr_1 mi_switch+0x174 sleepq_wait+0x42 > _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f > fork_trampoline+0xe > 0 103333 kernel zio_write_intr_2 mi_switch+0x174 sleepq_wait+0x42 > _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f > fork_trampoline+0xe > 0 103334 kernel zio_write_intr_3 mi_switch+0x174 sleepq_wait+0x42 > _sleep+0x317 taskqueue_thread_loop+0xbc fork_exit+0x11f > fork_trampoline+0xe > 0 103338 kernel > 
> [Message truncated] ------=_Part_338452_1908611442.1307636209756 Content-Type: text/x-patch; name=cache.patch Content-Transfer-Encoding: base64 Content-Disposition: attachment; filename=cache.patch LS0tIGZzL25mc3NlcnZlci9uZnNfbmZzZGNhY2hlLmMuc2F2CTIwMTEtMDYtMDkgMTE6Mzk6NTIu MDAwMDAwMDAwIC0wNDAwCisrKyBmcy9uZnNzZXJ2ZXIvbmZzX25mc2RjYWNoZS5jCTIwMTEtMDYt MDkgMTE6NDA6MDUuMDAwMDAwMDAwIC0wNDAwCkBAIC00NTcsOSArNDU3LDkgQEAgbmZzcnZkX3Vw ZGF0ZWNhY2hlKHN0cnVjdCBuZnNydl9kZXNjcmlwdAogCQl9CiAJCWlmICgobmQtPm5kX2ZsYWcg JiBORF9ORlNWMikgJiYKIAkJICAgIG5mc3YyX3JlcHN0YXRbbmV3bmZzdjJfcHJvY2lkW25kLT5u ZF9wcm9jbnVtXV0pIHsKLQkJCU5GU1VOTE9DS0NBQ0hFKCk7CiAJCQlycC0+cmNfc3RhdHVzID0g bmQtPm5kX3JlcHN0YXQ7CiAJCQlycC0+cmNfZmxhZyB8PSBSQ19SRVBTVEFUVVM7CisJCQlORlNV TkxPQ0tDQUNIRSgpOwogCQl9IGVsc2UgewogCQkJaWYgKCEocnAtPnJjX2ZsYWcgJiBSQ19VRFAp KSB7CiAJCQkgICAgbmZzcmNfdGNwc2F2ZWRyZXBsaWVzKys7CkBAIC00NzEsNyArNDcxLDkgQEAg bmZzcnZkX3VwZGF0ZWNhY2hlKHN0cnVjdCBuZnNydl9kZXNjcmlwdAogCQkJTkZTVU5MT0NLQ0FD SEUoKTsKIAkJCXJwLT5yY19yZXBseSA9IG1fY29weW0obmQtPm5kX21yZXEsIDAsIE1fQ09QWUFM TCwKIAkJCSAgICBNX1dBSVQpOworCQkJTkZTTE9DS0NBQ0hFKCk7CiAJCQlycC0+cmNfZmxhZyB8 PSBSQ19SRVBNQlVGOworCQkJTkZTVU5MT0NLQ0FDSEUoKTsKIAkJfQogCQlpZiAocnAtPnJjX2Zs YWcgJiBSQ19VRFApIHsKIAkJCXJwLT5yY190aW1lc3RhbXAgPSBORlNEX01PTk9TRUMgKwpAQCAt NTI2LDggKzUyOCwxMSBAQCBuZnNydmRfc2VudGNhY2hlKHN0cnVjdCBuZnNydmNhY2hlICpycCwg CiAJCSAgICAgc28tPnNvX3Byb3RvLT5wcl9kb21haW4tPmRvbV9mYW1pbHkgIT0gQUZfSU5FVDYp IHx8CiAJCSAgICAgc28tPnNvX3Byb3RvLT5wcl9wcm90b2NvbCAhPSBJUFBST1RPX1RDUCkKIAkJ CXBhbmljKCJuZnMgc2VudCBjYWNoZSIpOwotCQlpZiAobmZzcnZfZ2V0c29ja3NlcW51bShzbywg JnJwLT5yY190Y3BzZXEpKQorCQlpZiAobmZzcnZfZ2V0c29ja3NlcW51bShzbywgJnJwLT5yY190 Y3BzZXEpKSB7CisJCQlORlNMT0NLQ0FDSEUoKTsKIAkJCXJwLT5yY19mbGFnIHw9IFJDX1RDUFNF UTsKKwkJCU5GU1VOTE9DS0NBQ0hFKCk7CisJCX0KIAl9CiAJbmZzcmNfdW5sb2NrKHJwKTsKIH0K QEAgLTY4Nyw4ICs2OTIsMTEgQEAgbmZzcmNfbG9jayhzdHJ1Y3QgbmZzcnZjYWNoZSAqcnApCiBz dGF0aWMgdm9pZAogbmZzcmNfdW5sb2NrKHN0cnVjdCBuZnNydmNhY2hlICpycCkKIHsKKworCU5G U0xPQ0tDQUNIRSgpOwogCXJwLT5yY19mbGFnICY9IH5SQ19MT0NLRUQ7CiAJbmZzcmNfd2FudGVk KHJwKTsKKwlORlNVTkxPQ0tDQUNIRSgpOwogfQogCiAvKgo= ------=_Part_338452_1908611442.1307636209756-- From owner-freebsd-fs@FreeBSD.ORG Thu Jun 9 19:04:39 2011 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 9AFF71065674; Thu, 9 Jun 2011 19:04:39 +0000 (UTC) (envelope-from linimon@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 7375F8FC15; Thu, 9 Jun 2011 19:04:39 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id p59J4dHE056876; Thu, 9 Jun 2011 19:04:39 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id p59J4dpZ056872; Thu, 9 Jun 2011 19:04:39 GMT (envelope-from linimon) Date: Thu, 9 Jun 2011 19:04:39 GMT Message-Id: <201106091904.p59J4dpZ056872@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Cc: Subject: Re: kern/157728: [zfs] zfs (v28) incremental receive may leave behind temporary clones X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 09 Jun 2011 19:04:39 -0000 Old Synopsis: zfs (v28) incremental 
receive may leave behind temporary clones New Synopsis: [zfs] zfs (v28) incremental receive may leave behind temporary clones Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Thu Jun 9 19:04:18 UTC 2011 Responsible-Changed-Why: Over to maintainer(s). http://www.freebsd.org/cgi/query-pr.cgi?pr=157728
From owner-freebsd-fs@FreeBSD.ORG Thu Jun 9 19:38:17 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 61B51106566B for ; Thu, 9 Jun 2011 19:38:17 +0000 (UTC) (envelope-from danny@dannysplace.net) Received: from mailgw.dannysplace.net (mailgw.dannysplace.net [204.109.56.184]) by mx1.freebsd.org (Postfix) with ESMTP id 29CCD8FC13 for ; Thu, 9 Jun 2011 19:38:16 +0000 (UTC) Received: from localhost ([127.0.0.1]) by mailgw.dannysplace.net with esmtpsa (TLSv1:CAMELLIA256-SHA:256) (Exim 4.76 (FreeBSD)) (envelope-from ) id 1QUklM-0007Rv-EY for freebsd-fs@freebsd.org; Fri, 10 Jun 2011 05:19:29 +1000 Message-ID: <4DF11BB8.5030805@dannysplace.net> Date: Thu, 09 Jun 2011 21:15:04 +0200 From: Dan Carroll User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.17) Gecko/20110414 Thunderbird/3.1.10 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Sender: danny@dannysplace.net X-Authenticated-User: danny X-Authenticator: plain X-Exim-Version: 4.76 (build at 08-Jun-2011 18:40:49) X-Date: 2011-06-10 05:19:28 X-Connected-IP: 127.0.0.1:10349 X-Message-Linecount: 21 X-Body-Linecount: 10 X-Message-Size: 896 X-Body-Size: 456 X-Received-Count: 1 X-Recipient-Count: 1 X-Local-Recipient-Count: 1 X-Local-Recipient-Defer-Count: 0 X-Local-Recipient-Fail-Count: 0 X-SA-Exim-Connect-IP: 127.0.0.1 X-SA-Exim-Mail-From: danny@dannysplace.net X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on damka.dannysplace.net X-Spam-Level: X-Spam-Status: No, score=-2.9 required=5.0 tests=ALL_TRUSTED,BAYES_00 autolearn=ham version=3.3.1 X-SA-Exim-Version: 4.2 X-SA-Exim-Scanned: Yes (on mailgw.dannysplace.net) Subject: Getting access to checksums. X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: fbsd@dannysplace.net List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 09 Jun 2011 19:38:17 -0000
I'm currently working on a system that monitors file changes. I'd like to calculate the checksums on each file both to see if a change has occurred as well as looking for duplicate files.
I'm not sure what algorithm I'll end up using but I was wondering if it was possible to get access to ZFS' checksumming? Does it happen on a file level or is it block level only? And if it does, is there an easy way to obtain this information from the system?
-D From owner-freebsd-fs@FreeBSD.ORG Thu Jun 9 20:23:44 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 53E361065670 for ; Thu, 9 Jun 2011 20:23:44 +0000 (UTC) (envelope-from feld@feld.me) Received: from mwi1.coffeenet.org (mwi1.coffeenet.org [66.170.3.2]) by mx1.freebsd.org (Postfix) with ESMTP id 2EA3B8FC14 for ; Thu, 9 Jun 2011 20:23:43 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=feld.me; s=blargle; h=In-Reply-To:Message-Id:From:Mime-Version:Date:References:Subject:To:Content-Type; bh=zH3J0D7jBEd4TJrJvaMMyRBpASjIMnGAt6mDlFjkLPI=; b=aAdc7kmZHdM0FBxUwvyhj9iP4r0k4UUcBwJl1DJqPRwO9ZAYyhKLUXzSLBmlZXzuZFb5DBh01ARgDa8H/HEJUSn7aeV3gssBj4SW5Ij0J/gvKEgppazE10KFnP+/D6MT; Received: from localhost ([127.0.0.1] helo=mwi1.coffeenet.org) by mwi1.coffeenet.org with esmtp (Exim 4.76 (FreeBSD)) (envelope-from ) id 1QUllo-000IxS-Ay for freebsd-fs@freebsd.org; Thu, 09 Jun 2011 15:24:00 -0500 Received: from feld@feld.me by mwi1.coffeenet.org (Archiveopteryx 3.1.3) with esmtpsa id 1307651034-47978-47977/7/6; Thu, 9 Jun 2011 20:23:54 +0000 Content-Type: text/plain; charset=utf-8; format=flowed; delsp=yes To: freebsd-fs@freebsd.org References: <4DF11BB8.5030805@dannysplace.net> Date: Thu, 9 Jun 2011 15:22:22 -0500 Mime-Version: 1.0 From: Mark Felder Message-Id: In-Reply-To: <4DF11BB8.5030805@dannysplace.net> User-Agent: Opera Mail/11.11 (FreeBSD) Subject: Re: Getting access to checksums. X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 09 Jun 2011 20:23:44 -0000 On Thu, 09 Jun 2011 14:15:04 -0500, Dan Carroll wrote: > I'm not sure what algorithm I'll end up using but I was wondering if it > was possible to get access to ZFS' checksumming? > Does it happen on a file level or is it block level only? And if it > does, is there an easy way to obtain this information from the system? It's block level and I'm not sure how accessible it is to end users. 
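Since the checksums live at the block level, a per-file scheme generally has to be built in userland. A minimal sketch of such a sweep using only sha256(1) and find(1) from the base system -- the /data path and the state files are purely illustrative:

#!/bin/sh
# Hash every regular file under /data and compare against the previous
# sweep to spot changes; repeated digests are candidate duplicates.
find /data -type f -print0 | xargs -0 sha256 -r | sort -k 2 > /var/db/sums.new
# Files whose digest changed (or that are new) since the last sweep:
diff /var/db/sums.old /var/db/sums.new 2>/dev/null | grep '^>'
# Digests that occur more than once, i.e. candidate duplicate files:
awk '{print $1}' /var/db/sums.new | sort | uniq -d
mv /var/db/sums.new /var/db/sums.old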
Regards, Mark From owner-freebsd-fs@FreeBSD.ORG Thu Jun 9 21:24:14 2011 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 9186D1065673; Thu, 9 Jun 2011 21:24:14 +0000 (UTC) (envelope-from linimon@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 6A82C8FC08; Thu, 9 Jun 2011 21:24:14 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id p59LOEtF085030; Thu, 9 Jun 2011 21:24:14 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id p59LOEp6085026; Thu, 9 Jun 2011 21:24:14 GMT (envelope-from linimon) Date: Thu, 9 Jun 2011 21:24:14 GMT Message-Id: <201106092124.p59LOEp6085026@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Cc: Subject: Re: kern/157722: [geli] unable to newfs a geli encrypted partition X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 09 Jun 2011 21:24:14 -0000 Old Synopsis: unable to newfs a geli encrypted partition New Synopsis: [geli] unable to newfs a geli encrypted partition Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Thu Jun 9 21:23:47 UTC 2011 Responsible-Changed-Why: Reclassify and assign. http://www.freebsd.org/cgi/query-pr.cgi?pr=157722 From owner-freebsd-fs@FreeBSD.ORG Thu Jun 9 21:29:12 2011 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 2F78E106566B; Thu, 9 Jun 2011 21:29:12 +0000 (UTC) (envelope-from linimon@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 08BDD8FC14; Thu, 9 Jun 2011 21:29:12 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id p59LTBpA085377; Thu, 9 Jun 2011 21:29:11 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id p59LTBXK085373; Thu, 9 Jun 2011 21:29:11 GMT (envelope-from linimon) Date: Thu, 9 Jun 2011 21:29:11 GMT Message-Id: <201106092129.p59LTBXK085373@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Cc: Subject: Re: bin/157691: [zfs] [patch] zpool import -d broken X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 09 Jun 2011 21:29:12 -0000 Synopsis: [zfs] [patch] zpool import -d broken Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Thu Jun 9 21:28:54 UTC 2011 Responsible-Changed-Why: Over to maintainer(s). 
http://www.freebsd.org/cgi/query-pr.cgi?pr=157691
From owner-freebsd-fs@FreeBSD.ORG Thu Jun 9 21:31:47 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 450B5106564A for ; Thu, 9 Jun 2011 21:31:47 +0000 (UTC) (envelope-from kmacybsd@gmail.com) Received: from mail-vx0-f182.google.com (mail-vx0-f182.google.com [209.85.220.182]) by mx1.freebsd.org (Postfix) with ESMTP id EEF718FC19 for ; Thu, 9 Jun 2011 21:31:46 +0000 (UTC) Received: by vxc34 with SMTP id 34so2246100vxc.13 for ; Thu, 09 Jun 2011 14:31:46 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:sender:in-reply-to:references:date :x-google-sender-auth:message-id:subject:from:to:cc:content-type :content-transfer-encoding; bh=jtNnizk+8mCTSOYRGUpbqjyIbkXPUBnmUT+xf3f482A=; b=huEbBghxHrWzrCJU3ckO102mNJgySMArcbDeIG9Xz2dXbeKdpqqYsBd4uF/cFSGBuY zRB0Q5VoDSgOKwHn+CnZJUCiWZ0QlYG5WFbfW1hYc4kMjaTKCsTJ9zVG3t81oJulh6I9 bEUoMvQrvKSLVYSFtjTOalupKPHI1WYSAC4fM= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:sender:in-reply-to:references:date :x-google-sender-auth:message-id:subject:from:to:cc:content-type :content-transfer-encoding; b=wJwyp9swoVM7HViZZ+J2Ayr346aeaQPUDUqoVLI+low8+5wxfp6cuZs5mDwpo//8Tq rpAbN9XGgvYq8SKJ4mHvNpn0lDFgsKs73B1UuK5Hq6hSOmfXVMnMehn7z2Dh3akNd3mX srSOKdY5EN0MZnjLaltbTMqeDYklnymZSXw10= MIME-Version: 1.0 Received: by 10.52.177.234 with SMTP id ct10mr1575827vdc.2.1307653425420; Thu, 09 Jun 2011 14:03:45 -0700 (PDT) Sender: kmacybsd@gmail.com Received: by 10.52.187.74 with HTTP; Thu, 9 Jun 2011 14:03:45 -0700 (PDT) In-Reply-To: <4DF11BB8.5030805@dannysplace.net> References: <4DF11BB8.5030805@dannysplace.net> Date: Thu, 9 Jun 2011 23:03:45 +0200 X-Google-Sender-Auth: md35AC42K0QZvTRinyn3_qzrHhk Message-ID: From: "K. Macy" To: fbsd@dannysplace.net Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable Cc: freebsd-fs@freebsd.org Subject: Re: Getting access to checksums. X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 09 Jun 2011 21:31:47 -0000
On Thu, Jun 9, 2011 at 9:15 PM, Dan Carroll wrote:
> I'm currently working on a system that monitors file changes.
> I'd like to calculate the checksums on each file both to see if a change has occurred as well as looking for duplicate files.
>
> I'm not sure what algorithm I'll end up using but I was wondering if it was possible to get access to ZFS' checksumming?
> Does it happen on a file level or is it block level only? And if it does, is there an easy way to obtain this information from the system?
>
The ZFS user tools effectively work by running ZFS in userland. One could use the ZFS library to do what you're asking for. I doubt it would be easy enough to be worth the effort, but if you're motivated it might be worth looking into.
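For a one-off look (as opposed to a monitoring system), the stored checksums can also be dumped from the on-disk block pointers with zdb(8). A sketch -- the pool, dataset, and file path are illustrative, and zdb output is a debugging aid rather than a stable interface:

# Find the object number of the file of interest (the inode number
# reported by ls -i is the ZFS object number):
ls -i /tank/data/somefile
# Dump its block pointers; at this verbosity each blkptr line shows
# the checksum algorithm and the stored checksum words (cksum=...):
zdb -ddddd tank/data <object-number>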
-Kip
From owner-freebsd-fs@FreeBSD.ORG Thu Jun 9 21:39:11 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 31EFC106564A for ; Thu, 9 Jun 2011 21:39:11 +0000 (UTC) (envelope-from ef@math.uni-bonn.de) Received: from ems.math.uni-bonn.de (ems.math.uni-bonn.de [131.220.132.179]) by mx1.freebsd.org (Postfix) with ESMTP id EEC4B8FC0C for ; Thu, 9 Jun 2011 21:39:10 +0000 (UTC) Received: from mz4.intra.net (pD9E806EB.dip0.t-ipconnect.de [217.232.6.235]) by ems.math.uni-bonn.de (Postfix) with ESMTPSA id 73501BC9D3 for ; Thu, 9 Jun 2011 23:39:09 +0200 (CEST) Content-Type: text/plain; charset=us-ascii Mime-Version: 1.0 (Apple Message framework v1084) From: Edgar Fuß In-Reply-To: <20110531155046.GB9327@gumme.math.uni-bonn.de> Date: Thu, 9 Jun 2011 23:39:07 +0200 Content-Transfer-Encoding: quoted-printable Message-Id: <9BDA8959-C25B-4075-ACAF-3FC8F761A69F@math.uni-bonn.de> References: <20110531155046.GB9327@gumme.math.uni-bonn.de> To: freebsd-fs@freebsd.org X-Mailer: Apple Mail (2.1084) Subject: Re(try): softdep-related panic (allocdirect_merge) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 09 Jun 2011 21:39:11 -0000
More than a week ago, I asked about softdep-related panics I'm experiencing on NetBSD to find out whether there may be a fix in FreeBSD not having been ported over. Being unfamiliar with the FreeBSD groups, to me, not having received an answer could mean any of the following:
-- this is the wrong place to ask
-- my question just slipped through
-- the answer is so obvious that nobody cares to mail it
-- nobody knows
-- only Kirk McKusick knows and he's busy/on vacation
-- nobody cares (I can't imagine that)
-- I just need to be more patient
Could someone please enlighten me?
From owner-freebsd-fs@FreeBSD.ORG Thu Jun 9 21:57:17 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id E35A51065674 for ; Thu, 9 Jun 2011 21:57:17 +0000 (UTC) (envelope-from rsimmons0@gmail.com) Received: from mail-gx0-f182.google.com (mail-gx0-f182.google.com [209.85.161.182]) by mx1.freebsd.org (Postfix) with ESMTP id A28818FC2C for ; Thu, 9 Jun 2011 21:57:17 +0000 (UTC) Received: by gxk28 with SMTP id 28so1516416gxk.13 for ; Thu, 09 Jun 2011 14:57:17 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:in-reply-to:references:date :message-id:subject:from:to:content-type:content-transfer-encoding; bh=lkwpAIbfWgGPMPKfZVgtkXKrATukmD2OUJytWW0gqGs=; b=O5LR5DxsRLzWbwVuhPx8lKVHFL5lekQoTeRaDPEIbFg1f7FQYIebovpfJ2sjfPcXQ3 R492eQ5R1DxEKxUBQp1eAQNKwEEDMXIoRzTq8Z8LMoHnibHYG9QGf8W9kTOdGS8a1Yaw Y19PYwBiDINaAJ8dSwvOvnhSgYtXyGriNYKFo= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :content-type:content-transfer-encoding; b=EbWZu+IakadmZtF6JdWQsohysxSTsZHb66jBW9UGxxp0I+g+9yCPO3WAaNjp86BkR2 4v8ywO1lUGINO0WHJooiTDGHlczFKz0SP2pRY3LihIDT3sLJ2VlPsAJnrfgyvXhElOLz 365/+UEZ3cDLyPJoznEaikz5zUy0JeRtE6VSo= MIME-Version: 1.0 Received: by 10.100.17.35 with SMTP id 35mr1259923anq.1.1307656636767; Thu, 09 Jun 2011 14:57:16 -0700 (PDT) Received: by 10.100.243.35 with HTTP; Thu,
9 Jun 2011 14:57:16 -0700 (PDT) In-Reply-To: <4DF11BB8.5030805@dannysplace.net> References: <4DF11BB8.5030805@dannysplace.net> Date: Thu, 9 Jun 2011 17:57:16 -0400 Message-ID: From: Robert Simmons To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable Subject: Re: Getting access to checksums. X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 09 Jun 2011 21:57:18 -0000
On Thu, Jun 9, 2011 at 3:15 PM, Dan Carroll wrote:
> I'm currently working on a system that monitors file changes.
> I'd like to calculate the checksums on each file both to see if a change has occurred as well as looking for duplicate files.
>
> I'm not sure what algorithm I'll end up using but I was wondering if it was possible to get access to ZFS' checksumming?
> Does it happen on a file level or is it block level only? And if it does, is there an easy way to obtain this information from the system?
You may not want to reinvent the wheel. There are quite a few ports that do what you want, more or less. You may want to start there, and if they don't serve your purpose, then maybe do it the hard way. ;)
The main one is the famous Tripwire:
http://www.freebsd.org/cgi/url.cgi?ports/security/tripwire/pkg-descr
http://www.freebsd.org/cgi/url.cgi?ports/security/tripwire12/pkg-descr
http://www.freebsd.org/cgi/url.cgi?ports/security/tripwire-131/pkg-descr
Then there are some replacements for Tripwire that you may want to look at as well:
http://www.freebsd.org/cgi/url.cgi?ports/security/aide/pkg-descr
http://www.freebsd.org/cgi/url.cgi?ports/security/yafic/pkg-descr
Also, if you really want to roll your own, just write a script and use the built-in checksum utilities:
http://www.freebsd.org/cgi/man.cgi?query=sha256&apropos=0&sektion=0&manpath=FreeBSD+8.2-RELEASE&format=html
And find(1):
http://www.freebsd.org/cgi/man.cgi?query=find&apropos=0&sektion=0&manpath=FreeBSD+8.2-RELEASE&format=html
From owner-freebsd-fs@FreeBSD.ORG Thu Jun 9 22:58:49 2011 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id B23E31065672; Thu, 9 Jun 2011 22:58:49 +0000 (UTC) (envelope-from delphij@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 8A85D8FC17; Thu, 9 Jun 2011 22:58:49 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id p59Mwn2k067097; Thu, 9 Jun 2011 22:58:49 GMT (envelope-from delphij@freefall.freebsd.org) Received: (from delphij@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id p59MwnAX067091; Thu, 9 Jun 2011 22:58:49 GMT (envelope-from delphij) Date: Thu, 9 Jun 2011 22:58:49 GMT Message-Id: <201106092258.p59MwnAX067091@freefall.freebsd.org> To: cjk32@cam.ac.uk, delphij@FreeBSD.org, freebsd-fs@FreeBSD.org, delphij@FreeBSD.org From: delphij@FreeBSD.org Cc: Subject: Re: bin/157691: [zfs] [patch] zpool import -d broken X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 09 Jun 2011 22:58:49 -0000 Synopsis: [zfs] [patch] zpool import -d broken State-Changed-From-To:
open->closed State-Changed-By: delphij State-Changed-When: Thu Jun 9 22:58:07 UTC 2011 State-Changed-Why: This should have been fixed in a newer 8-STABLE snapshot, per submitter. Responsible-Changed-From-To: freebsd-fs->delphij Responsible-Changed-By: delphij Responsible-Changed-When: Thu Jun 9 22:58:07 UTC 2011 Responsible-Changed-Why: Take just in case. http://www.freebsd.org/cgi/query-pr.cgi?pr=157691
From owner-freebsd-fs@FreeBSD.ORG Fri Jun 10 08:55:43 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id BDFC71065672 for ; Fri, 10 Jun 2011 08:55:43 +0000 (UTC) (envelope-from kpielorz_lst@tdx.co.uk) Received: from mail.tdx.com (mail.tdx.com [62.13.128.18]) by mx1.freebsd.org (Postfix) with ESMTP id 594BC8FC12 for ; Fri, 10 Jun 2011 08:55:43 +0000 (UTC) Received: from HexaDeca64.dmpriest.net.uk (HPQuadro64.dmpriest.net.uk [62.13.130.30]) (authenticated bits=0) by mail.tdx.com (8.14.3/8.14.3/Kp) with ESMTP id p5A8hFw6020460 (version=TLSv1/SSLv3 cipher=DHE-DSS-AES256-SHA bits=256 verify=NO) for ; Fri, 10 Jun 2011 09:43:16 +0100 (BST) Date: Fri, 10 Jun 2011 09:43:14 +0100 From: Karl Pielorz To: freebsd-fs@freebsd.org Message-ID: <729A0755FAEF480774EEF4AB@HexaDeca64.dmpriest.net.uk> X-Mailer: Mulberry/4.0.8 (Win32) MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit Content-Disposition: inline Subject: ZFS scrub 'repaired' pool with no chksum or read errors? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 10 Jun 2011 08:55:43 -0000
Hi,
I'm running FreeBSD-8.2R amd64 w/4Gb of ECC RAM on a machine used for 'offsite' backups (that are copied to it using zfs send/receive).
I scrub this machine every now and again (about once a month) - recently this resulted in the following output after the scrub completed:
"
# zpool status
  pool: vol
 state: ONLINE
 scrub: scrub completed after 2h49m with 0 errors on Thu Jun 9 17:09:31 2011
config:

        NAME        STATE     READ WRITE CKSUM
        vol         ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            ada0    ONLINE       0     0     0  256K repaired
            ada1    ONLINE       0     0     0
            ada2    ONLINE       0     0     0

errors: No known data errors
"
Should I be worried there was 256k of 'repairs' done, even though there were no checksum errors, or read errors detected?
The console logged no errors - and nothing shows in syslog.
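(For anyone following along, a sketch of the obvious follow-up checks, using the pool name from the output above; none of these is guaranteed to explain the repaired bytes:)

zpool status -v vol    # -v lists any files touched by errors (none here)
zpool clear vol        # zero the per-device counters, then...
zpool scrub vol        # ...re-scrub: repairs that recur suggest live corruption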
The machine is always cleanly shut down - and the drives all appear fine from a SMART point of view - I'm just a bit concerned as to where the repairs came from - as ZFS doesn't seem to know (or be able to tell me) either :) -Kp From owner-freebsd-fs@FreeBSD.ORG Fri Jun 10 09:33:20 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id B1E83106566B for ; Fri, 10 Jun 2011 09:33:20 +0000 (UTC) (envelope-from jdc@koitsu.dyndns.org) Received: from qmta13.emeryville.ca.mail.comcast.net (qmta13.emeryville.ca.mail.comcast.net [76.96.27.243]) by mx1.freebsd.org (Postfix) with ESMTP id 9AC518FC08 for ; Fri, 10 Jun 2011 09:33:20 +0000 (UTC) Received: from omta21.emeryville.ca.mail.comcast.net ([76.96.30.88]) by qmta13.emeryville.ca.mail.comcast.net with comcast id u9Wd1g0031u4NiLAD9ZJhh; Fri, 10 Jun 2011 09:33:18 +0000 Received: from koitsu.dyndns.org ([67.180.84.87]) by omta21.emeryville.ca.mail.comcast.net with comcast id u9Z11g00F1t3BNj8h9Z1jp; Fri, 10 Jun 2011 09:33:02 +0000 Received: by icarus.home.lan (Postfix, from userid 1000) id E77DC102C19; Fri, 10 Jun 2011 02:33:18 -0700 (PDT) Date: Fri, 10 Jun 2011 02:33:18 -0700 From: Jeremy Chadwick To: Karl Pielorz Message-ID: <20110610093318.GA39276@icarus.home.lan> References: <729A0755FAEF480774EEF4AB@HexaDeca64.dmpriest.net.uk> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <729A0755FAEF480774EEF4AB@HexaDeca64.dmpriest.net.uk> User-Agent: Mutt/1.5.21 (2010-09-15) Cc: freebsd-fs@freebsd.org Subject: Re: ZFS scrub 'repaired' pool with no chksum or read errors? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 10 Jun 2011 09:33:20 -0000 On Fri, Jun 10, 2011 at 09:43:14AM +0100, Karl Pielorz wrote: > I'm running FreeBSD-8.2R amd64 w/4Gb of ECC RAM on a machine used > for 'offsite' backups (that are copied to it using zfs > send/receive). > > I scrub this machine every now and again (about once a month) - > recently this resulted in the following output after the scrub > completed: > > " > # zpool status > pool: vol > state: ONLINE > scrub: scrub completed after 2h49m with 0 errors on Thu Jun 9 > 17:09:31 2011 > config: > > NAME STATE READ WRITE CKSUM > vol ONLINE 0 0 0 > raidz1 ONLINE 0 0 0 > ada0 ONLINE 0 0 0 256K repaired > ada1 ONLINE 0 0 0 > ada2 ONLINE 0 0 0 > > errors: No known data errors > " > > Should I be worried there was 256k of 'repairs' done, even though > there were no checksum errors, or read errors detected? > > The console logged no errors - and nothing shows in syslog. > > The machine is always cleanly shut down - and the drives all appear > fine from a SMART point of view - I'm just a bit concerned as to > where the repairs came from - as ZFS doesn't seem to know (or be > able to tell me) either :) ZFS experts please correct me, but my experience with this has shown me that the scrub itself found actual issues while analysing all data on the entire pool -- more specifically, I believe READ/WRITE/CKSUM are counters used for when errors are encountered during normal (read: non-scrub) operations. It's been a while since I've seen this happen, but have seen it on our Solaris 10 machines at my workplace. I've never been sure what it means; possibly signs of "bit rot"? 
If you're worried about your disk (ada0), please provide output from "smartctl -a /dev/ada0" and I'll be more than happy to review the output and provide you with any insights. I do believe you when you say it looks fine, but every model of disk is different in some regard.
-- | Jeremy Chadwick jdc@parodius.com | | Parodius Networking http://www.parodius.com/ | | UNIX Systems Administrator Mountain View, CA, US | | Making life hard for others since 1977. PGP 4BD6C0CB |
From owner-freebsd-fs@FreeBSD.ORG Fri Jun 10 11:40:12 2011 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id B75DB1065674 for ; Fri, 10 Jun 2011 11:40:12 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 8ECA38FC0C for ; Fri, 10 Jun 2011 11:40:12 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id p5ABeCmt098164 for ; Fri, 10 Jun 2011 11:40:12 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id p5ABeCga098163; Fri, 10 Jun 2011 11:40:12 GMT (envelope-from gnats) Date: Fri, 10 Jun 2011 11:40:12 GMT Message-Id: <201106101140.p5ABeCga098163@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org From: "Andrey V. Elsukov" Cc: Subject: Re: kern/157722: [geli] unable to newfs a geli encrypted partition X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: "Andrey V. Elsukov" List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 10 Jun 2011 11:40:12 -0000 The following reply was made to PR kern/157722; it has been noted by GNATS. From: "Andrey V. Elsukov" To: bug-followup@FreeBSD.org, rsimmons0@gmail.com Cc: Subject: Re: kern/157722: [geli] unable to newfs a geli encrypted partition Date: Fri, 10 Jun 2011 15:33:30 +0400
Hi,
newfs(8) is trying to read the superblock area, but it gets an EINVAL error code because you have not initialized your geli provider.
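The usual sequence, sketched with an illustrative provider name (the .eli device only exists after attach, and that is what you newfs):

geli init -s 4096 /dev/ada0p2    # one-time initialization; writes geli metadata
geli attach /dev/ada0p2          # creates /dev/ada0p2.eli
newfs -U /dev/ada0p2.eli         # newfs the .eli device, not the raw partition

-- WBR, Andrey V.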
Elsukov
From owner-freebsd-fs@FreeBSD.ORG Fri Jun 10 12:09:18 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id D7F921065674 for ; Fri, 10 Jun 2011 12:09:18 +0000 (UTC) (envelope-from jhb@freebsd.org) Received: from cyrus.watson.org (cyrus.watson.org [65.122.17.42]) by mx1.freebsd.org (Postfix) with ESMTP id AF6788FC0C for ; Fri, 10 Jun 2011 12:09:18 +0000 (UTC) Received: from bigwig.baldwin.cx (66.111.2.69.static.nyinternet.net [66.111.2.69]) by cyrus.watson.org (Postfix) with ESMTPSA id 6543946B23; Fri, 10 Jun 2011 08:09:18 -0400 (EDT) Received: from jhbbsd.localnet (unknown [209.249.190.124]) by bigwig.baldwin.cx (Postfix) with ESMTPSA id 052378A027; Fri, 10 Jun 2011 08:09:18 -0400 (EDT) From: John Baldwin To: freebsd-fs@freebsd.org Date: Fri, 10 Jun 2011 08:03:14 -0400 User-Agent: KMail/1.13.5 (FreeBSD/8.2-CBSD-20110325; KDE/4.5.5; amd64; ; ) References: <20110531155046.GB9327@gumme.math.uni-bonn.de> <9BDA8959-C25B-4075-ACAF-3FC8F761A69F@math.uni-bonn.de> In-Reply-To: <9BDA8959-C25B-4075-ACAF-3FC8F761A69F@math.uni-bonn.de> MIME-Version: 1.0 Content-Type: Text/Plain; charset="iso-8859-1" Content-Transfer-Encoding: quoted-printable Message-Id: <201106100803.14226.jhb@freebsd.org> X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.2.6 (bigwig.baldwin.cx); Fri, 10 Jun 2011 08:09:18 -0400 (EDT) Cc: Subject: Re: Re(try): softdep-related panic (allocdirect_merge) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 10 Jun 2011 12:09:18 -0000
On Thursday, June 09, 2011 5:39:07 pm Edgar Fuß wrote:
> More than a week ago, I asked about softdep-related panics I'm experiencing on NetBSD to find out whether there may be a fix in FreeBSD not having been ported over.
> Being unfamiliar with the FreeBSD groups, to me, not having received an answer could mean any of the following:
> -- nobody knows
> -- only Kirk McKusick knows and he's busy/on vacation
I'd vote for one of these. :)
--
John Baldwin
From owner-freebsd-fs@FreeBSD.ORG Fri Jun 10 12:36:45 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id A997B106564A for ; Fri, 10 Jun 2011 12:36:45 +0000 (UTC) (envelope-from kpielorz_lst@tdx.co.uk) Received: from mail.tdx.com (mail.tdx.com [62.13.128.18]) by mx1.freebsd.org (Postfix) with ESMTP id 4598A8FC0A for ; Fri, 10 Jun 2011 12:36:44 +0000 (UTC) Received: from HexaDeca64.dmpriest.net.uk (HPQuadro64.dmpriest.net.uk [62.13.130.30]) (authenticated bits=0) by mail.tdx.com (8.14.3/8.14.3/Kp) with ESMTP id p5ACahVs040941 (version=TLSv1/SSLv3 cipher=DHE-DSS-AES256-SHA bits=256 verify=NO); Fri, 10 Jun 2011 13:36:43 +0100 (BST) Date: Fri, 10 Jun 2011 13:36:42 +0100 From: Karl Pielorz To: Jeremy Chadwick Message-ID: In-Reply-To: <20110610093318.GA39276@icarus.home.lan> References: <729A0755FAEF480774EEF4AB@HexaDeca64.dmpriest.net.uk> <20110610093318.GA39276@icarus.home.lan> X-Mailer: Mulberry/4.0.8 (Win32) MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit Content-Disposition: inline Cc: freebsd-fs@freebsd.org Subject: Re: ZFS scrub 'repaired' pool with no chksum or read errors?
X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 10 Jun 2011 12:36:45 -0000
--On 10 June 2011 02:33 -0700 Jeremy Chadwick wrote:
> ZFS experts please correct me, but my experience with this has shown me > that the scrub itself found actual issues while analysing all data on > the entire pool -- more specifically, I believe READ/WRITE/CKSUM are > counters used for when errors are encountered during normal (read: > non-scrub) operations. It's been a while since I've seen this happen, > but have seen it on our Solaris 10 machines at my workplace. I've never > been sure what it means; possibly signs of "bit rot"?
I'm reasonably sure - and all the documentation I've seen seems to indicate - that the checksum/read-error columns reflect errors found during either normal operations or scrubs... I've run ZFS on some pretty ropey systems during testing, and it certainly seemed to 'tick up' the errors during scrubs.
> If you're worried about your disk (ada0), please provide output from > "smartctl -a /dev/ada0" and I'll be more than happy to review the output > and provide you with any insights. I do believe you when you say it > looks fine, but every model of disk is different in some regard.
I'm not overly worried about the disk or the errors - more curious as to why they showed without ticking up anything in the error columns - unless it's not meant to.
I can email you the smart output, but there are no pending reallocations, all the SMART parameters are well above their thresholds - additionally smartd hasn't noticed anything 'changing' on the drive to alert about - the drive itself is also reasonably 'new' (and there's no evidence of anything being thrown in syslog/dmesg).
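(A sketch of kicking off the drive's own extended self-test, per the suggestion above -- the drive can stay in service while it runs, though heavy I/O stretches the run time:)

smartctl -t long /dev/ada0      # start the extended (long) self-test
smartctl -l selftest /dev/ada0  # poll later for the self-test result log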
If I get time later I might offline the drive and run a long test on it - if that does anything weird & wonderful, I'll take you up on your offer, and email you :-) But like I said, I'm not overly concerned, more curious ;) -Kp From owner-freebsd-fs@FreeBSD.ORG Fri Jun 10 12:59:40 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id EE8131065673 for ; Fri, 10 Jun 2011 12:59:39 +0000 (UTC) (envelope-from jwd@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id D40588FC0C; Fri, 10 Jun 2011 12:59:39 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id p5ACxdPT069765; Fri, 10 Jun 2011 12:59:39 GMT (envelope-from jwd@freefall.freebsd.org) Received: (from jwd@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id p5ACxdKQ069764; Fri, 10 Jun 2011 12:59:39 GMT (envelope-from jwd) Date: Fri, 10 Jun 2011 12:59:39 +0000 From: John To: Rick Macklem Message-ID: <20110610125939.GA69616@FreeBSD.org> References: <20110609133805.GA78874@FreeBSD.org> <1069270455.338453.1307636209760.JavaMail.root@erie.cs.uoguelph.ca> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <1069270455.338453.1307636209760.JavaMail.root@erie.cs.uoguelph.ca> User-Agent: Mutt/1.4.2.3i Cc: freebsd-fs@freebsd.org Subject: Re: New NFS server stress test hang X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 10 Jun 2011 12:59:40 -0000 ----- Rick Macklem's Original Message ----- > John De wrote: > > ----- Rick Macklem's Original Message ----- > > > John De wrote: > > > > Hi, > > > > > > > > We've been running some stress tests of the new nfs server. > > > > The system is at r222531 (head), 9 clients, two mounts each > > > > to the server: > > > > > > > > mount_nfs -o > > > > udp,nfsv3,rsize=32768,wsize=32768,noatime,nolockd,acregmin=1,acregmax=2,acdirmin=1,acdirmax=2,negnametimeo=2 > > > > ${servera}:/vol/datsrc /c/$servera/vol/datsrc > > > > mount_nfs -o > > > > udp,nfsv3,rsize=32768,wsize=32768,noatime,nolockd,acregmin=1,acregmax=2,acdirmin=1,acdirmax=2,negnametimeo=0 > > > > ${servera}:/vol/datgen /c/$servera/vol/datgen > > > > > > > > > > > > The system is still up & responsive, simply no nfs services > > > > are working. All (200) threads appear to be active, but not > > > > doing anything. The debugger is not compiled into this kernel. > > > > We can run any other tracing commands desired. We can also > > > > rebuild the kernel with the debugger enabled for any kernel > > > > debugging needed. > > > > > > > > --- long logs deleted --- > > > > > > How about a: > > > ps axHlww <-- With the "H" we'll see what the nfsd server threads > > > are up to > > > procstat -kka > > > > > > Oh, and a couple of nfsstats a few seconds apart. It's what the > > > counts > > > are changing by that might tell us what is going on. (You can use > > > "-z" > > > to zero them out, if you have an nfsstat built from recent sources.) > > > > > > Also, does a new NFS mount attempt against the server do anything? > > > > > > Thanks in advance for help with this, rick > > > > Hi Rick, > > > > Here's the output. 
In general, the nfsd processes appear to be in > either nfsrvd_getcache(35 instances) or nfsrvd_updatecache(164) > sleeping on > "nfssrc". The server numbers don't appear to be moving. A showmount > from a > client system works, but a mount does not (see below). >
> Please try the attached patch and let me know if it helps. When I looked > I found several places where the rc_flag variable was being fiddled without the > mutex held. I suspect one of these resulted in the RC_LOCKED flag not > getting cleared, so all the threads got stuck waiting on it. >
> The patch is at: > http://people.freebsd.org/~rmacklem/cache.patch > in case it gets eaten by the list handler. > Thanks for digging into this, rick
Hi Rick,
Patch applied. The system has been up and running for about 16 hours now and so far it's still handling the load quite nicely.

last pid: 15853;  load averages:  5.36,  4.64,  4.48   up 0+16:08:16  08:48:07
72 processes:  7 running, 65 sleeping
CPU:  % user,  % nice,  % system,  % interrupt,  % idle
Mem: 22M Active, 3345M Inact, 79G Wired, 9837M Buf, 11G Free
Swap:

  PID USERNAME  THR PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
 2049 root       26  52    0 10052K  1712K CPU3    3  97:21 942.24% nfsd

I'll follow up again in 24 hours with another status. Any performance-related numbers/knobs we can provide that might be of interest?
Thanks Rick.
-John
From owner-freebsd-fs@FreeBSD.ORG Fri Jun 10 13:44:27 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 9AD98106566B for ; Fri, 10 Jun 2011 13:44:27 +0000 (UTC) (envelope-from ml@my.gd) Received: from mail-wy0-f182.google.com (mail-wy0-f182.google.com [74.125.82.182]) by mx1.freebsd.org (Postfix) with ESMTP id 39B718FC20 for ; Fri, 10 Jun 2011 13:44:26 +0000 (UTC) Received: by wyf23 with SMTP id 23so2625322wyf.13 for ; Fri, 10 Jun 2011 06:44:26 -0700 (PDT) Received: by 10.216.68.2 with SMTP id k2mr7359871wed.90.1307711796526; Fri, 10 Jun 2011 06:16:36 -0700 (PDT) Received: from [10.132.35.76] ([92.90.16.49]) by mx.google.com with ESMTPS id w58sm1394062weq.25.2011.06.10.06.16.34 (version=TLSv1/SSLv3 cipher=OTHER); Fri, 10 Jun 2011 06:16:35 -0700 (PDT) References: <4DF11BB8.5030805@dannysplace.net> In-Reply-To: <4DF11BB8.5030805@dannysplace.net> Mime-Version: 1.0 (iPhone Mail 8J2) Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset=us-ascii Message-Id: <4D212767-C64B-430A-8EED-17F0BE64E80C@my.gd> X-Mailer: iPhone Mail (8J2) From: Damien Fleuriot Date: Fri, 10 Jun 2011 15:16:29 +0200 To: "fbsd@dannysplace.net" Cc: "freebsd-fs@freebsd.org" Subject: Re: Getting access to checksums. X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 10 Jun 2011 13:44:27 -0000
On 9 Jun 2011, at 21:15, Dan Carroll wrote:
> I'm currently working on a system that monitors file changes.
> I'd like to calculate the checksums on each file both to see if a change has occurred as well as looking for duplicate files.
>
> I'm not sure what algorithm I'll end up using but I was wondering if it was possible to get access to ZFS' checksumming?
> Does it happen on a file level or is it block level only? And if it does, is there an easy way to obtain this information from the system?
>
> -D
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
You will want to look into mtree's man page. Does exactly what you wanna do :)
From owner-freebsd-fs@FreeBSD.ORG Fri Jun 10 18:07:48 2011 Return-Path: Delivered-To: fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id ADE36106566B for ; Fri, 10 Jun 2011 18:07:48 +0000 (UTC) (envelope-from gibbs@FreeBSD.org) Received: from aslan.scsiguy.com (www.scsiguy.com [70.89.174.89]) by mx1.freebsd.org (Postfix) with ESMTP id 828148FC14 for ; Fri, 10 Jun 2011 18:07:48 +0000 (UTC) Received: from Justins-MacBook-Pro.local (207-225-98-3.dia.static.qwest.net [207.225.98.3]) (authenticated bits=0) by aslan.scsiguy.com (8.14.4/8.14.4) with ESMTP id p5AHWwKh061453 (version=TLSv1/SSLv3 cipher=DHE-RSA-CAMELLIA256-SHA bits=256 verify=NO) for ; Fri, 10 Jun 2011 11:32:59 -0600 (MDT) (envelope-from gibbs@FreeBSD.org) Message-ID: <4DF25544.3020301@FreeBSD.org> Date: Fri, 10 Jun 2011 11:32:52 -0600 From: "Justin T. Gibbs" Organization: The FreeBSD Project User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.17) Gecko/20110414 Thunderbird/3.1.10 MIME-Version: 1.0 To: fs@FreeBSD.org Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.2.6 (aslan.scsiguy.com [70.89.174.89]); Fri, 10 Jun 2011 11:32:59 -0600 (MDT) Cc: Subject: Drop of spa_namespace lock in vdev_geom.c X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: gibbs@FreeBSD.org List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 10 Jun 2011 18:07:48 -0000
Dropping and reacquiring the spa_namespace lock in vdev_geom_open() creates a lock order reversal with the spa_config locks. As the spa_config locks are not standard mutexes, witness will not warn about this issue. I only noticed this problem when debugging a ZFS deadlock. The deadlock can be triggered anytime that there are multiple insert/remove processes going on (e.g. vdev orphan processing while a fault management daemon is onlining a replacement device for some other vdev).
I haven't noticed any issues with just holding the namespace lock for the duration of the open. Does anyone know why this lock drop was added in v28?
Thanks, Justin
From owner-freebsd-fs@FreeBSD.ORG Fri Jun 10 20:05:44 2011 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id C0671106564A for ; Fri, 10 Jun 2011 20:05:44 +0000 (UTC) (envelope-from kmacybsd@gmail.com) Received: from mail-vw0-f54.google.com (mail-vw0-f54.google.com [209.85.212.54]) by mx1.freebsd.org (Postfix) with ESMTP id 7B4A38FC08 for ; Fri, 10 Jun 2011 20:05:44 +0000 (UTC) Received: by vws18 with SMTP id 18so3550594vws.13 for ; Fri, 10 Jun 2011 13:05:43 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:sender:in-reply-to:references:date :x-google-sender-auth:message-id:subject:from:to:cc:content-type :content-transfer-encoding; bh=W0a4WdmOGdbHnT8LQqq7T+2Kszgsh9+pGCex93K6P3g=; b=C9x1N4mwEM5F5A40Jn1pqbV2wZxO/2eKaCfG5zwaR3xwXNJYaZOyE7uiaxE/EserjZ sngYQYHgL68COgFvhutngty/m9qJuVkTYxq/y9NdM0+EkC9iRx5+W6xJcu9siToov57v sH6R+WJvjg5e9jdBwdjfaLsB0gz8cD4xK0eY8= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:sender:in-reply-to:references:date :x-google-sender-auth:message-id:subject:from:to:cc:content-type :content-transfer-encoding; b=xOorUAayfmRg3K2vSrV84sKcSxTbQehfzm9TUraJNY2uQ4UCfIP5/Q6TuG+RMSP2Eo FqnAvm8y++0WMi92M2OAXck87NcpLjVhKNRe+zCsaJcMPZIklQDqpzeX7jHlvlyL7GJJ x3uOxMIiwDlCzkeewdQhcNv5W3WV2K3yhPt0k= MIME-Version: 1.0 Received: by 10.52.173.111 with SMTP id bj15mr784228vdc.122.1307734508789; Fri, 10 Jun 2011 12:35:08 -0700 (PDT) Sender: kmacybsd@gmail.com Received: by 10.52.187.74 with HTTP; Fri, 10 Jun 2011 12:35:08 -0700 (PDT) In-Reply-To: <4DF25544.3020301@FreeBSD.org> References: <4DF25544.3020301@FreeBSD.org> Date: Fri, 10 Jun 2011 21:35:08 +0200 X-Google-Sender-Auth: wqx2VMmFTZBtIeZPivkhMQgTtbM Message-ID: From: "K. Macy" To: gibbs@freebsd.org Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable Cc: fs@freebsd.org Subject: Re: Drop of spa_namespace lock in vdev_geom.c X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 10 Jun 2011 20:05:44 -0000
On Fri, Jun 10, 2011 at 7:32 PM, Justin T. Gibbs wrote:
> Dropping and reacquiring the spa_namespace lock in vdev_geom_open() > creates a lock order reversal with the spa_config locks. As the > spa_config locks are not standard mutexes, witness will not warn > about this issue.
The real problem is that WITNESS is disabled on the sx locks used for mutex compatibility in ZFS. This questionable decision has made debugging deadlocks quite painful on a number of occasions. I think this choice should be revisited and perhaps special workaround shims added for cases where cv_wait is called.
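(For reference, WITNESS itself is a build-time kernel option plus a runtime sysctl -- a sketch; as noted above, it still will not see the ZFS sx locks unless that exemption is removed from the ZFS code:)

options WITNESS             # in the kernel config; rebuild/reinstall required
options WITNESS_SKIPSPIN    # optional: skip spin mutexes to reduce overhead
sysctl debug.witness.watch  # runtime: 1 = enabled, 0 = disabled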
-Kip From owner-freebsd-fs@FreeBSD.ORG Fri Jun 10 21:12:05 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 59A8A1065677 for ; Fri, 10 Jun 2011 21:12:05 +0000 (UTC) (envelope-from jdc@koitsu.dyndns.org) Received: from qmta14.westchester.pa.mail.comcast.net (qmta14.westchester.pa.mail.comcast.net [76.96.59.212]) by mx1.freebsd.org (Postfix) with ESMTP id 078BC8FC13 for ; Fri, 10 Jun 2011 21:12:04 +0000 (UTC) Received: from omta01.westchester.pa.mail.comcast.net ([76.96.62.11]) by qmta14.westchester.pa.mail.comcast.net with comcast id uM6a1g0060EZKEL5EMC5Tg; Fri, 10 Jun 2011 21:12:05 +0000 Received: from koitsu.dyndns.org ([67.180.84.87]) by omta01.westchester.pa.mail.comcast.net with comcast id uMC31g0291t3BNj3MMC4gn; Fri, 10 Jun 2011 21:12:05 +0000 Received: by icarus.home.lan (Postfix, from userid 1000) id 8E119102C19; Fri, 10 Jun 2011 14:12:02 -0700 (PDT) Date: Fri, 10 Jun 2011 14:12:02 -0700 From: Jeremy Chadwick To: Martin Matuska Message-ID: <20110610211202.GA52253@icarus.home.lan> References: <4DECB197.8020102@FreeBSD.org> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <4DECB197.8020102@FreeBSD.org> User-Agent: Mutt/1.5.21 (2010-09-15) Cc: freebsd-fs@FreeBSD.org, freebsd-stable@FreeBSD.org Subject: Re: HEADS UP: ZFS v28 merged to 8-STABLE X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 10 Jun 2011 21:12:05 -0000 On Mon, Jun 06, 2011 at 12:53:11PM +0200, Martin Matuska wrote: > I have merged ZFS version 28 to 8-STABLE (revision 222741) Follow-up, since we're gradually upgrading our ZFS-based RELENG_8 servers to ZFSv28. Committers/those involved should see my very last paragraph. First, server upgrades: We've upgraded 2 of the 4 (including updating zfs and zpools), and so far things are working wonderfully. One of those 2 boxes is our NFS filer (which also does backups via rsync/rsnapshot), so that one's been a big worry-point of mine. The next rsync/rsnapshot runs tonight, so I'll be awake watching intently. All these systems are graphed via bsnmpd (memory, CPU, disk I/O, etc.). Second, performance tweaks: We're testing changes to our tweaks; the following directives have been commented out in our /boot/loader.conf files (e.g. we're now using prefetching): # vfs.zfs.prefetch_disable="1" And the following tunable has been removed completely, because it's now the default in ZFSv28 (see cvsweb, look at line 40 of the relevant commit for src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/txg.c): vfs.zfs.txg.timeout="5" Finally, talking a bit about dedup: In the ZFSv28 commit, did the zfs.8 man page get updated? I find no mention of the dedup property in the zfs(8) man page, and yes I did remove the old /usr/share/man/cat8/zfs.8.gz file. We tried using dedup on one of our systems, but within 10-15 minutes turned it off. I believe the added CPU overhead of dedup was causing the system to act "bursty" in other non-ZFS-related tasks; e.g. turn on dedup, then in a SSH window hold down the letter "q" indefinitely, then in another window do some ZFS I/O. The "q" would stall for 1-2 seconds at times (SSH connectivity was via direct private LAN, so network latency was not what we were seeing). Without dedup this behaviour wasn't seen at all.
I'm happy to try any advice/patches on a locally-accessible box (e.g. private LAN, VGA console is right behind me, etc.). I have not tried tinkering with the following settings to find out what may relieve this issue. I'm referring to the sections titled "Trust or verify" and "Selecting a checksum" on Jeff's blog here: http://blogs.oracle.com/bonwick/entry/zfs_dedup All in all, thank you everyone for the work that's gone in to MFC'ing this to RELENG_8. I really do mean that. I'm a harsh bastard on the mailing lists, no question about it. But I always appreciate people doing the grunt work that I myself cannot do (over my head). I state this seriously: if any of you folks who participated in this MFC have donation links (PayPal, etc.), please give them to me. You absolutely will see some worthwhile kick-backs for your efforts, with no strings attached. Just my way of saying "thank you". -- | Jeremy Chadwick jdc@parodius.com | | Parodius Networking http://www.parodius.com/ | | UNIX Systems Administrator Mountain View, CA, US | | Making life hard for others since 1977. PGP 4BD6C0CB | From owner-freebsd-fs@FreeBSD.ORG Fri Jun 10 21:13:11 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id A6B451065672 for ; Fri, 10 Jun 2011 21:13:11 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) Received: from esa-annu.mail.uoguelph.ca (esa-annu.mail.uoguelph.ca [131.104.91.36]) by mx1.freebsd.org (Postfix) with ESMTP id 670048FC1C for ; Fri, 10 Jun 2011 21:13:10 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: Ap8EAP6H8k2DaFvO/2dsb2JhbABGDBuELqJ7tAyQYYErgXGBfYEKBJErj3M X-IronPort-AV: E=Sophos;i="4.65,349,1304308800"; d="scan'208";a="123629278" Received: from erie.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.206]) by esa-annu-pri.mail.uoguelph.ca with ESMTP; 10 Jun 2011 17:13:10 -0400 Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id 4360CB3F10; Fri, 10 Jun 2011 17:13:10 -0400 (EDT) Date: Fri, 10 Jun 2011 17:13:10 -0400 (EDT) From: Rick Macklem To: John Message-ID: <656124669.413146.1307740390221.JavaMail.root@erie.cs.uoguelph.ca> In-Reply-To: <20110610125939.GA69616@FreeBSD.org> MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [172.17.91.202] X-Mailer: Zimbra 6.0.10_GA_2692 (ZimbraWebClient - IE7 (Win)/6.0.10_GA_2692) Cc: freebsd-fs@freebsd.org Subject: Re: New NFS server stress test hang X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 10 Jun 2011 21:13:11 -0000 John De wrote: > ----- Rick Macklem's Original Message ----- > > John De wrote: > > > ----- Rick Macklem's Original Message ----- > > > > John De wrote: > > > > > Hi, > > > > > > > > > > We've been running some stress tests of the new nfs server. 
> > > > > The system is at r222531 (head), 9 clients, two mounts each > > > > > to the server: > > > > > > > > > > mount_nfs -o > > > > > udp,nfsv3,rsize=32768,wsize=32768,noatime,nolockd,acregmin=1,acregmax=2,acdirmin=1,acdirmax=2,negnametimeo=2 > > > > > ${servera}:/vol/datsrc /c/$servera/vol/datsrc > > > > > mount_nfs -o > > > > > udp,nfsv3,rsize=32768,wsize=32768,noatime,nolockd,acregmin=1,acregmax=2,acdirmin=1,acdirmax=2,negnametimeo=0 > > > > > ${servera}:/vol/datgen /c/$servera/vol/datgen > > > > > > > > > > > > > > > The system is still up & responsive, simply no nfs services > > > > > are working. All (200) threads appear to be active, but not > > > > > doing anything. The debugger is not compiled into this kernel. > > > > > We can run any other tracing commands desired. We can also > > > > > rebuild the kernel with the debugger enabled for any kernel > > > > > debugging needed. > > > > > > > > > > --- long logs deleted --- > > > > > > > > How about a: > > > > ps axHlww <-- With the "H" we'll see what the nfsd server > > > > threads > > > > are up to > > > > procstat -kka > > > > > > > > Oh, and a couple of nfsstats a few seconds apart. It's what the > > > > counts > > > > are changing by that might tell us what is going on. (You can > > > > use > > > > "-z" > > > > to zero them out, if you have an nfsstat built from recent > > > > sources.) > > > > > > > > Also, does a new NFS mount attempt against the server do > > > > anything? > > > > > > > > Thanks in advance for help with this, rick > > > > > > Hi Rick, > > > > > > Here's the output. In general, the nfsd processes appear to be in > > > either nfsrvd_getcache (35 instances) or nfsrvd_updatecache (164), > > > sleeping on > > > "nfssrc". The server numbers don't appear to be moving. A > > > showmount > > > from a > > > client system works, but a mount does not (see below). > > > > Please try the attached patch and let me know if it helps. When I > > looked > > I found several places where the rc_flag variable was being fiddled > > without the > > mutex held. I suspect one of these resulted in the RC_LOCKED flag > > not > > getting cleared, so all the threads got stuck waiting on it. > > > > The patch is at: > > http://people.freebsd.org/~rmacklem/cache.patch > > in case it gets eaten by the list handler. > > Thanks for digging into this, rick > > Hi Rick, > > Patch applied. The system has been up and running for about > 16 hours now and so far it's still handling the load quite nicely. > > last pid: 15853; load averages: 5.36, 4.64, 4.48 up 0+16:08:16 > 08:48:07 > 72 processes: 7 running, 65 sleeping > CPU: % user, % nice, % system, % interrupt, % idle > Mem: 22M Active, 3345M Inact, 79G Wired, 9837M Buf, 11G Free > Swap: > > PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND > 2049 root 26 52 0 10052K 1712K CPU3 3 97:21 942.24% nfsd > > I'll follow up again in 24 hours with another status. > > Any performance-related numbers/knobs we can provide that might > be of interest? > Not really anything I can think of. You obviously have hardware that runs well or NFS over UDP with 32K rsize/wsize wouldn't work. (I am not so lucky. My environment drops enough packets that NFS over UDP is completely unusable.) It would be interesting to see, at some point, how your above UDP mounts compare with using TCP and the default (should be 64K) rsize/wsize. And if you really want to try something on the bleeding edge, you could apply this patch to the server, which enables use of LK_SHARED locked vnodes for read operations.
It has only been lightly tested and I really doubt it will go in 9.0, but if you could test it, that would be nice. :-) http://people.freebsd.org/~rmacklem/lkshared.patch Thanks for testing this, rick ps: Hopefully you'll have some insight into how long you need to run with the patch before it seems that it fixed your problem? (I know, since it is probably an SMP race, you can never be sure. ;-) From owner-freebsd-fs@FreeBSD.ORG Fri Jun 10 21:24:06 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id DD186106566B for ; Fri, 10 Jun 2011 21:24:06 +0000 (UTC) (envelope-from bfriesen@simple.dallas.tx.us) Received: from blade.simplesystems.org (blade.simplesystems.org [65.66.246.74]) by mx1.freebsd.org (Postfix) with ESMTP id A3C908FC12 for ; Fri, 10 Jun 2011 21:24:06 +0000 (UTC) Received: from freddy.simplesystems.org (freddy.simplesystems.org [65.66.246.65]) by blade.simplesystems.org (8.14.4+Sun/8.14.4) with ESMTP id p5ALO5af017885; Fri, 10 Jun 2011 16:24:05 -0500 (CDT) Date: Fri, 10 Jun 2011 16:24:05 -0500 (CDT) From: Bob Friesenhahn X-X-Sender: bfriesen@freddy.simplesystems.org To: Jeremy Chadwick In-Reply-To: <20110610211202.GA52253@icarus.home.lan> Message-ID: References: <4DECB197.8020102@FreeBSD.org> <20110610211202.GA52253@icarus.home.lan> User-Agent: Alpine 2.01 (GSO 1266 2009-07-14) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.2.2 (blade.simplesystems.org [65.66.246.90]); Fri, 10 Jun 2011 16:24:05 -0500 (CDT) Cc: freebsd-fs@freebsd.org, freebsd-stable@freebsd.org Subject: Re: HEADS UP: ZFS v28 merged to 8-STABLE X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 10 Jun 2011 21:24:06 -0000 On Fri, 10 Jun 2011, Jeremy Chadwick wrote: > > We tried using dedup on one of our systems, but within 10-15 minutes > turned it off. I believe the added CPU overhead of dedup was causing > the system to act "bursty" in other non-ZFS-related tasks; e.g. turn on > dedup, then in a SSH window hold down the letter "q" indefinitely, then > in another window do some ZFS I/O. The "q" would stall for 1-2 seconds > at times (SSH connectivity was via direct private LAN, so network > latency was not what we were seeing). Without dedup this behaviour wasn't seen > at all. I'm happy to try any advice/patches on a locally-accessible > box (e.g. private LAN, VGA console is right behind me, etc.). Dedup can require a huge amount of RAM, or a dedicated L2ARC SSD, depending on the size of your storage. You should not enable it unless you are prepared for the consequences. Solaris 11 Express does not admit to supporting dedup even though it can be enabled in previous OpenSolaris and is supported in Oracle's NAS products (which run a variant of Solaris 11).
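(For scale, a rule of thumb often quoted in the ZFS community — an estimate, not a spec: each unique block in a deduped pool costs on the order of 320 bytes of DDT that wants to live in ARC or L2ARC. A 1 TB pool of 128 KB blocks is about 8 million unique blocks, so roughly 8M x 320 B = 2.5 GB; at an 8 KB average block size the same pool needs about 134 million entries, i.e. over 40 GB.)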
Bob -- Bob Friesenhahn bfriesen@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer, http://www.GraphicsMagick.org/ From owner-freebsd-fs@FreeBSD.ORG Fri Jun 10 21:42:08 2011 Return-Path: Delivered-To: fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id D71E21065680 for ; Fri, 10 Jun 2011 21:42:08 +0000 (UTC) (envelope-from gibbs@scsiguy.com) Received: from aslan.scsiguy.com (mail.scsiguy.com [70.89.174.89]) by mx1.freebsd.org (Postfix) with ESMTP id 8E4E38FC15 for ; Fri, 10 Jun 2011 21:42:08 +0000 (UTC) Received: from Justins-MacBook-Pro.local (207-225-98-3.dia.static.qwest.net [207.225.98.3]) (authenticated bits=0) by aslan.scsiguy.com (8.14.4/8.14.4) with ESMTP id p5AL9B1O062611 (version=TLSv1/SSLv3 cipher=DHE-RSA-CAMELLIA256-SHA bits=256 verify=NO) for ; Fri, 10 Jun 2011 15:09:11 -0600 (MDT) (envelope-from gibbs@scsiguy.com) Message-ID: <4DF287F0.8080301@scsiguy.com> Date: Fri, 10 Jun 2011 15:09:04 -0600 From: "Justin T. Gibbs" User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.17) Gecko/20110414 Thunderbird/3.1.10 MIME-Version: 1.0 To: fs@FreeBSD.org Content-Type: multipart/mixed; boundary="------------050508040101090501090209" X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.2.6 (aslan.scsiguy.com [70.89.174.89]); Fri, 10 Jun 2011 15:09:11 -0600 (MDT) Cc: Subject: [CFT] Fix DEVFS aliases in subdirectories. X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 10 Jun 2011 21:42:09 -0000 This is a multi-part message in MIME format. --------------050508040101090501090209 Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit DEVFS aliases don't work correctly when generated symlinks are not in the DEVFS root. For example, here's a system that provides aliases to devices based on their physical path in a SAS enclosure: ls -l /dev/enc\@n50015b2080006ef9/type\@0/slot\@2/elmdesc\@Disk_02/* lrwxr-xr-x 1 root wheel 15 Jun 9 22:44 /dev/enc@n50015b2080006ef9/type@0/slot@2/elmdesc@Disk_02/da6 -> ../../../../da6 lrwxr-xr-x 1 root wheel 17 Jun 9 22:44 /dev/enc@n50015b2080006ef9/type@0/slot@2/elmdesc@Disk_02/pass7 -> ../../../../pass7 The aliased devs are far from the root and so must have "../" entries added in order to function correctly. I considered making the symlink paths absolute, but that complicates jail handling. Are there any objections to the attached change? Thanks, Justin --------------050508040101090501090209 Content-Type: text/plain; x-mac-type="0"; x-mac-creator="0"; name="devfs_symlinks.diff" Content-Transfer-Encoding: 7bit Content-Disposition: attachment; filename="devfs_symlinks.diff" Change 480877 by justing@justing-ns1 on 2011/03/03 15:48:17 sys/fs/devfs/devfs_devs.c: Correct the devfs alias code to insert the correct number of "../"s in the target of a symlink when creating them in subdirectories of a devfs. Affected files ... ... //depot/SpectraBSD/head/sys/fs/devfs/devfs_devs.c#6 edit Differences ... 
==== //depot/SpectraBSD/head/sys/fs/devfs/devfs_devs.c#6 (text) ==== @@ -488,7 +488,7 @@ struct devfs_dirent *de; struct devfs_dirent *dd; struct cdev *pdev; - int de_flags, j; + int de_flags; char *q, *s; sx_assert(&dm->dm_lock, SX_XLOCKED); @@ -584,14 +584,43 @@ de = devfs_newdirent(s, q - s); if (cdp->cdp_c.si_flags & SI_ALIAS) { + char *slash; + int depth; + int namelen; + int buflen; + int i; + + /* + * Determine depth of the link. + */ + slash = cdp->cdp_c.si_name; + depth = 0; + while ((slash = strchr(slash, '/')) != NULL) { + slash++; + depth++; + } + de->de_uid = 0; de->de_gid = 0; de->de_mode = 0755; de->de_dirent->d_type = DT_LNK; pdev = cdp->cdp_c.si_parent; - j = strlen(pdev->si_name) + 1; - de->de_symlink = malloc(j, M_DEVFS, M_WAITOK); - bcopy(pdev->si_name, de->de_symlink, j); + namelen = strlen(pdev->si_name) + 1; + buflen = (depth * 3/* "../" */) + namelen; + de->de_symlink = malloc(buflen, M_DEVFS, M_WAITOK); + + /* + * Our parent's path is relative to the root, + * so our symlinked path must be relative to + * the root. + */ + slash = de->de_symlink; + for (i = 0; i < depth; i++) { + bcopy("../", slash, 3); + slash += 3; + } + + bcopy(pdev->si_name, slash, namelen); } else { de->de_uid = cdp->cdp_c.si_uid; de->de_gid = cdp->cdp_c.si_gid; --------------050508040101090501090209-- From owner-freebsd-fs@FreeBSD.ORG Fri Jun 10 21:49:12 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id D65DE106564A; Fri, 10 Jun 2011 21:49:12 +0000 (UTC) (envelope-from c.kworr@gmail.com) Received: from mail-bw0-f54.google.com (mail-bw0-f54.google.com [209.85.214.54]) by mx1.freebsd.org (Postfix) with ESMTP id F20568FC14; Fri, 10 Jun 2011 21:49:11 +0000 (UTC) Received: by bwz12 with SMTP id 12so3622477bwz.13 for ; Fri, 10 Jun 2011 14:49:10 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:message-id:date:from:user-agent:mime-version:to :cc:subject:references:in-reply-to:content-type :content-transfer-encoding; bh=3aAe6MM1dl4CXTn2MIAEerYJVKmlI2BNVOono2h/Amw=; b=miV1926b6wDpqXh1toN9GxTS0XIPCzIF1J4dnMMZ6cZdb067MFuvEEtPOdu4Z6uVst yDxfeP+/6PxGJpY738Ugm8ncNzeSYcYCAF9SucWKM0bfzm/svQ6dx7bkcaMWigH7heC4 7m2tbFGo3xy0JmVBHN0J2YuRkf/5wox5f6IsQ= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=message-id:date:from:user-agent:mime-version:to:cc:subject :references:in-reply-to:content-type:content-transfer-encoding; b=FB1HQLUVOJGaItJYPYhzcOa76O+hP76qoGONcFvYeulscsDNosSIYEVu9DkF9s1KZ7 KooBshtiileUmptmNk1lB3yL3zNHF2YlU9Vj6BLjNJN+pRkrFrX1gjeZF+XNz4vSQ9VH LcASyftvnTcBYe3tAwOGvUDHkyj80q60cSgk4= Received: by 10.204.232.73 with SMTP id jt9mr2211337bkb.214.1307741139831; Fri, 10 Jun 2011 14:25:39 -0700 (PDT) Received: from limbo.lan ([195.225.157.86]) by mx.google.com with ESMTPS id j7sm2931635bka.8.2011.06.10.14.25.37 (version=SSLv3 cipher=OTHER); Fri, 10 Jun 2011 14:25:38 -0700 (PDT) Message-ID: <4DF28BCF.3060008@gmail.com> Date: Sat, 11 Jun 2011 00:25:35 +0300 From: Volodymyr Kostyrko User-Agent: Mozilla/5.0 (X11; U; FreeBSD i386; uk-UA; rv:1.9.2.17) Gecko/20110509 Thunderbird/3.1.10 MIME-Version: 1.0 To: Martin Matuska References: <4DECB197.8020102@FreeBSD.org> In-Reply-To: <4DECB197.8020102@FreeBSD.org> Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 8bit Cc: freebsd-fs@FreeBSD.org, freebsd-stable@FreeBSD.org Subject: Re: HEADS UP: ZFS v28 merged to 8-STABLE X-BeenThere: 
freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 10 Jun 2011 21:49:12 -0000 On 06.06.2011 13:53, Martin Matuska wrote: > Hi, > > I have merged ZFS version 28 to 8-STABLE (revision 222741) > > New major features: > > - data deduplication Am I missing something? How about using fletcher[24] for dedup? -- Sphinx of black quartz judge my vow. From owner-freebsd-fs@FreeBSD.ORG Sat Jun 11 07:25:03 2011 Return-Path: Delivered-To: fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 665FE1065673 for ; Sat, 11 Jun 2011 07:25:03 +0000 (UTC) (envelope-from jh@FreeBSD.org) Received: from gw01.mail.saunalahti.fi (gw01.mail.saunalahti.fi [195.197.172.115]) by mx1.freebsd.org (Postfix) with ESMTP id 1E0368FC17 for ; Sat, 11 Jun 2011 07:25:02 +0000 (UTC) Received: from jh (a91-153-115-208.elisa-laajakaista.fi [91.153.115.208]) by gw01.mail.saunalahti.fi (Postfix) with SMTP id EDD5F15157F; Sat, 11 Jun 2011 10:09:40 +0300 (EEST) Date: Sat, 11 Jun 2011 10:09:40 +0300 From: Jaakko Heinonen To: "Justin T. Gibbs" Message-ID: <20110611070939.GC10793@jh> References: <4DF287F0.8080301@scsiguy.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <4DF287F0.8080301@scsiguy.com> User-Agent: Mutt/1.5.21 (2010-09-15) Cc: fs@FreeBSD.org Subject: Re: [CFT] Fix DEVFS aliases in subdirectories. X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 11 Jun 2011 07:25:03 -0000 Hi, On 2011-06-10, Justin T. Gibbs wrote: > The aliased devs are far from the root and so must have "../" entries > added in order to function correctly. I considered making the symlink > paths absolute, but that complicates jail handling. > > Are there any objections to the attached change? > @@ -584,14 +584,43 @@ > > de = devfs_newdirent(s, q - s); > if (cdp->cdp_c.si_flags & SI_ALIAS) { > + char *slash; > + int depth; > + int namelen; > + int buflen; > + int i; style(9) discourages putting declarations inside blocks. Please consider putting symlink name generation into its own helper function. devfs_populate_loop() has already become too large.
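Something along these lines would do it, reusing the logic from the posted patch (an illustrative sketch only; the helper name and its placement are not from any actual commit):

/*
 * Build the "../"-prefixed symlink target for a devfs alias.
 * Logic lifted from the patch above: one "../" for every '/' in
 * the alias's own path, followed by the parent device's name.
 */
static void
devfs_alias_symlink(struct devfs_dirent *de, struct cdev *pdev,
    const char *alias_path)
{
	const char *slash;
	char *p;
	int depth, namelen, buflen, i;

	depth = 0;
	slash = alias_path;
	while ((slash = strchr(slash, '/')) != NULL) {
		slash++;
		depth++;
	}

	namelen = strlen(pdev->si_name) + 1;	/* includes the NUL */
	buflen = depth * 3 + namelen;		/* 3 == strlen("../") */
	de->de_symlink = malloc(buflen, M_DEVFS, M_WAITOK);

	p = de->de_symlink;
	for (i = 0; i < depth; i++) {
		bcopy("../", p, 3);
		p += 3;
	}
	bcopy(pdev->si_name, p, namelen);
}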
-- Jaakko From owner-freebsd-fs@FreeBSD.ORG Sat Jun 11 11:14:19 2011 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 02E28106564A for ; Sat, 11 Jun 2011 11:14:19 +0000 (UTC) (envelope-from kostikbel@gmail.com) Received: from mail.zoral.com.ua (mx0.zoral.com.ua [91.193.166.200]) by mx1.freebsd.org (Postfix) with ESMTP id D8A3E8FC08 for ; Sat, 11 Jun 2011 11:14:17 +0000 (UTC) Received: from deviant.kiev.zoral.com.ua (root@deviant.kiev.zoral.com.ua [10.1.1.148]) by mail.zoral.com.ua (8.14.2/8.14.2) with ESMTP id p5BAgI20061345 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Sat, 11 Jun 2011 13:42:18 +0300 (EEST) (envelope-from kostikbel@gmail.com) Received: from deviant.kiev.zoral.com.ua (kostik@localhost [127.0.0.1]) by deviant.kiev.zoral.com.ua (8.14.4/8.14.4) with ESMTP id p5BAgIWO016972; Sat, 11 Jun 2011 13:42:18 +0300 (EEST) (envelope-from kostikbel@gmail.com) Received: (from kostik@localhost) by deviant.kiev.zoral.com.ua (8.14.4/8.14.4/Submit) id p5BAgFMQ016971; Sat, 11 Jun 2011 13:42:16 +0300 (EEST) (envelope-from kostikbel@gmail.com) X-Authentication-Warning: deviant.kiev.zoral.com.ua: kostik set sender to kostikbel@gmail.com using -f Date: Sat, 11 Jun 2011 13:42:15 +0300 From: Kostik Belousov To: Jaakko Heinonen Message-ID: <20110611104215.GG48734@deviant.kiev.zoral.com.ua> References: <4DF287F0.8080301@scsiguy.com> <20110611070939.GC10793@jh> Mime-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="13nVYX24B/ovCM08" Content-Disposition: inline In-Reply-To: <20110611070939.GC10793@jh> User-Agent: Mutt/1.4.2.3i X-Virus-Scanned: clamav-milter 0.95.2 at skuns.kiev.zoral.com.ua X-Virus-Status: Clean X-Spam-Status: No, score=-3.3 required=5.0 tests=ALL_TRUSTED,AWL,BAYES_00, DNS_FROM_OPENWHOIS autolearn=no version=3.2.5 X-Spam-Checker-Version: SpamAssassin 3.2.5 (2008-06-10) on skuns.kiev.zoral.com.ua Cc: "Justin T. Gibbs" , fs@freebsd.org Subject: Re: [CFT] Fix DEVFS aliases in subdirectories. X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 11 Jun 2011 11:14:19 -0000 On Sat, Jun 11, 2011 at 10:09:40AM +0300, Jaakko Heinonen wrote: > > Hi, > > On 2011-06-10, Justin T. Gibbs wrote: > > The aliased devs are far from the root and so must have "../" entries > > added in order to function correctly. I considered making the symlink > > paths absolute, but that complicates jail handling. Alternatively, you might change devfs_readlink, prepending the absolute symlinks with the statfs.f_mntonname. This indeed would have to consider the case of a jailed process. No, I am not requesting this. > > > > Are there any objections to the attached change? > > > @@ -584,14 +584,43 @@ > > > > de = devfs_newdirent(s, q - s); > > if (cdp->cdp_c.si_flags & SI_ALIAS) { > > + char *slash; > > + int depth; > > + int namelen; > > + int buflen; > > + int i; > > style(9) discourages putting declarations inside blocks. Please consider > putting symlink name generation into its own helper function. > devfs_populate_loop() has already become too large.
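For concreteness, the readlink-time alternative mentioned above might look something like this (an untested sketch; it deliberately ignores the jail case, which is exactly the complication being flagged):

/*
 * Untested sketch: keep de_symlink relative to the devfs root and
 * prepend the mount point when the link is read.  A real version
 * would have to map the path into the jail's root, as noted above.
 */
static int
devfs_readlink_sketch(struct vop_readlink_args *ap)
{
	struct devfs_dirent *de = ap->a_vp->v_data;
	struct mount *mp = ap->a_vp->v_mount;
	char *buf;
	int error;

	buf = malloc(MAXPATHLEN, M_TEMP, M_WAITOK);
	snprintf(buf, MAXPATHLEN, "%s/%s",
	    mp->mnt_stat.f_mntonname, de->de_symlink);
	error = uiomove(buf, strlen(buf), ap->a_uio);
	free(buf, M_TEMP);
	return (error);
}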
From owner-freebsd-fs@FreeBSD.ORG Sat Jun 11 17:50:11 2011 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 025AC1065673 for ; Sat, 11 Jun 2011 17:50:11 +0000 (UTC) (envelope-from marck@rinet.ru) Received: from woozle.rinet.ru (woozle.rinet.ru [195.54.192.68]) by mx1.freebsd.org (Postfix) with ESMTP id 86D3A8FC0C for ; Sat, 11 Jun 2011 17:50:06 +0000 (UTC) Received: from localhost (localhost [127.0.0.1]) by woozle.rinet.ru (8.14.4/8.14.4) with ESMTP id p5BHTDbY065986 for ; Sat, 11 Jun 2011 21:29:13 +0400 (MSD) (envelope-from marck@rinet.ru) Date: Sat, 11 Jun 2011 21:29:13 +0400 (MSD) From: Dmitry Morozovsky To: freebsd-fs@FreeBSD.org Message-ID: User-Agent: Alpine 2.00 (BSF 1167 2008-08-23) X-NCC-RegID: ru.rinet X-OpenPGP-Key-ID: 6B691B03 MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.2.7 (woozle.rinet.ru [0.0.0.0]); Sat, 11 Jun 2011 21:29:13 +0400 (MSD) Cc: Subject: stable/8-amd64 on ZFS as a vSphere backend (fwd) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 11 Jun 2011 17:50:11 -0000 [After a bit of thought, I decided that -fs@ would be a more appropriate place for this, sorry for the extra noise] Dear colleagues, are there any particular hints for tuning a FreeBSD NFS server to work efficiently as a VMware vSphere backend? For now, I have set up a 16G amd64 box with ZFSv28, 4k recordsize and no other particular tuning, a roundrobin lagg on 2 em interfaces, mtu 9000. There are 4 WD RE3 1T disks on AHCI, in a raid10 ZFS config. However, performance seems to be far from optimal (deeper testing is still to come, but at the very least average latency is too high, at more than 100 ms). Local tests from zvols (yes, that's not directly comparable, but still) provide more than 500 MB/s with single or dual threads. Thanks in advance!
-- Sincerely, D.Marck [DM5020, MCK-RIPE, DM3-RIPN] [ FreeBSD committer: marck@FreeBSD.org ] ------------------------------------------------------------------------ *** Dmitry Morozovsky --- D.Marck --- Wild Woozle --- marck@rinet.ru *** ------------------------------------------------------------------------ From owner-freebsd-fs@FreeBSD.ORG Sat Jun 11 19:27:52 2011 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id EA153106566B; Sat, 11 Jun 2011 19:27:52 +0000 (UTC) (envelope-from kostikbel@gmail.com) Received: from mail.zoral.com.ua (mx0.zoral.com.ua [91.193.166.200]) by mx1.freebsd.org (Postfix) with ESMTP id 669468FC08; Sat, 11 Jun 2011 19:27:51 +0000 (UTC) Received: from deviant.kiev.zoral.com.ua (root@deviant.kiev.zoral.com.ua [10.1.1.148]) by mail.zoral.com.ua (8.14.2/8.14.2) with ESMTP id p5BJRnWY006027 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Sat, 11 Jun 2011 22:27:49 +0300 (EEST) (envelope-from kostikbel@gmail.com) Received: from deviant.kiev.zoral.com.ua (kostik@localhost [127.0.0.1]) by deviant.kiev.zoral.com.ua (8.14.4/8.14.4) with ESMTP id p5BJRnhw019014; Sat, 11 Jun 2011 22:27:49 +0300 (EEST) (envelope-from kostikbel@gmail.com) Received: (from kostik@localhost) by deviant.kiev.zoral.com.ua (8.14.4/8.14.4/Submit) id p5BJRmxe019013; Sat, 11 Jun 2011 22:27:48 +0300 (EEST) (envelope-from kostikbel@gmail.com) X-Authentication-Warning: deviant.kiev.zoral.com.ua: kostik set sender to kostikbel@gmail.com using -f Date: Sat, 11 Jun 2011 22:27:48 +0300 From: Kostik Belousov To: Jaakko Heinonen , "Justin T. Gibbs" , fs@freebsd.org Message-ID: <20110611192748.GM48734@deviant.kiev.zoral.com.ua> References: <4DF287F0.8080301@scsiguy.com> <20110611070939.GC10793@jh> <20110611104215.GG48734@deviant.kiev.zoral.com.ua> Mime-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="1o9f3WYkqpI1o9iD" Content-Disposition: inline In-Reply-To: <20110611104215.GG48734@deviant.kiev.zoral.com.ua> User-Agent: Mutt/1.4.2.3i X-Virus-Scanned: clamav-milter 0.95.2 at skuns.kiev.zoral.com.ua X-Virus-Status: Clean X-Spam-Status: No, score=-3.3 required=5.0 tests=ALL_TRUSTED,AWL,BAYES_00, DNS_FROM_OPENWHOIS autolearn=no version=3.2.5 X-Spam-Checker-Version: SpamAssassin 3.2.5 (2008-06-10) on skuns.kiev.zoral.com.ua Cc: Subject: Re: [CFT] Fix DEVFS aliases in subdirectories. X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 11 Jun 2011 19:27:53 -0000 On Sat, Jun 11, 2011 at 01:42:15PM +0300, Kostik Belousov wrote: > On Sat, Jun 11, 2011 at 10:09:40AM +0300, Jaakko Heinonen wrote: > > > > Hi, > > > > On 2011-06-10, Justin T. Gibbs wrote: > > > The aliased devs are far from the root and so must have "../" entries > > > added in order to function correctly. I considered making the symlink > > > paths absolute, but that complicates jail handling. > Alternatively, you might change devfs_readlink, prepending the absolute > symlinks with the statfs.f_mntonname. This indeed would have to consider > the case of a jailed process. No, I am not requesting this. Just remembered, sometimes links on the devfs point out of devfs.
Look, for instance, at the /dev/log symlink on the running system. pooma% ls -l /dev/log lrwxr-xr-x 1 root wheel 12 Jun 11 21:08 /dev/log -> /var/run/log Wouldn't the patch break it? > > > > > > > Are there any objections to the attached change? > > > > > @@ -584,14 +584,43 @@ > > > > > > de = devfs_newdirent(s, q - s); > > > if (cdp->cdp_c.si_flags & SI_ALIAS) { > > > + char *slash; > > > + int depth; > > > + int namelen; > > > + int buflen; > > > + int i; > > > > style(9) discourages putting declarations inside blocks. Please consider > > putting symlink name generation into its own helper function. > > devfs_populate_loop() has already become too large.