From owner-freebsd-fs@FreeBSD.ORG Mon Dec  9 11:06:46 2013
From: FreeBSD bugmaster <owner-bugmaster@FreeBSD.org>
To: freebsd-fs@FreeBSD.org
Date: Mon, 9 Dec 2013 11:06:45 GMT
Subject: Current problem reports assigned to freebsd-fs@FreeBSD.org

Note: to view an individual PR, use:
  http://www.freebsd.org/cgi/query-pr.cgi?pr=(number).

The following is a listing of current problems submitted by FreeBSD users.
These represent problem reports covering all versions including
experimental development code and obsolete releases.

S Tracker Resp. Description
--------------------------------------------------------------------------------
o kern/184478 fs [smbfs] mount_smbfs cannot read/write files
o kern/182570 fs [zfs] [patch] ZFS panic in receive
o kern/182536 fs [zfs] zfs deadlock
o kern/181966 fs [zfs] Kernel panic in ZFS I/O: solaris assert: BP_EQUA
o kern/181834 fs [nfs] amd mounting NFS directories can drive a dead-lo
o kern/181565 fs [swap] Problem with vnode-backed swap space.
o kern/181377 fs [zfs] zfs recv causes an inconsistant pool
o kern/181281 fs [msdosfs] stack trace after successfull 'umount /mnt'
o kern/181082 fs [fuse] [ntfs] Write to mounted NTFS filesystem using F
o kern/180979 fs [netsmb][patch]: Fix large files handling
o kern/180876 fs [zfs] [hast] ZFS with trim,bio_flush or bio_delete loc
o kern/180678 fs [NFS] succesfully exported filesystems being reported
o kern/180438 fs [smbfs] [patch] mount_smbfs fails on arm because of wr
p kern/180236 fs [zfs] [nullfs] Leakage free space using ZFS with nullf
o kern/178854 fs [ufs] FreeBSD kernel crash in UFS
s kern/178467 fs [zfs] [request] Optimized Checksum Code for ZFS
o kern/178412 fs [smbfs] Coredump when smbfs mounted
o kern/178388 fs [zfs] [patch] allow up to 8MB recordsize
o kern/178387 fs [zfs] [patch] sparse files performance improvements
o kern/178349 fs [zfs] zfs scrub on deduped data could be much less see
o kern/178329 fs [zfs] extended attributes leak
o kern/178238 fs [nullfs] nullfs don't release i-nodes on unlink.
f kern/178231 fs [nfs] 8.3 nfsv4 client reports "nfsv4 client/server pr
o kern/177985 fs [zfs] disk usage problem when copying from one zfs dat
o kern/177971 fs [nfs] FreeBSD 9.1 nfs client dirlist problem w/ nfsv3,
o kern/177966 fs [zfs] resilver completes but subsequent scrub reports
o kern/177658 fs [ufs] FreeBSD panics after get full filesystem with uf
o kern/177536 fs [zfs] zfs livelock (deadlock) with high write-to-disk
o kern/177445 fs [hast] HAST panic
o kern/177240 fs [zfs] zpool import failed with state UNAVAIL but all d
o kern/176978 fs [zfs] [panic] zfs send -D causes "panic: System call i
o kern/176857 fs [softupdates] [panic] 9.1-RELEASE/amd64/GENERIC panic
o bin/176253 fs zpool(8): zfs pool indentation is misleading/wrong
o kern/176141 fs [zfs] sharesmb=on makes errors for sharenfs, and still
o kern/175950 fs [zfs] Possible deadlock in zfs after long uptime
o kern/175897 fs [zfs] operations on readonly zpool hang
o kern/175449 fs [unionfs] unionfs and devfs misbehaviour
o kern/175179 fs [zfs] ZFS may attach wrong device on move
o kern/175071 fs [ufs] [panic] softdep_deallocate_dependencies: unrecov
o kern/174372 fs [zfs] Pagefault appears to be related to ZFS
o kern/174315 fs [zfs] chflags uchg not supported
o kern/174310 fs [zfs] root point mounting broken on CURRENT with multi
o kern/174279 fs [ufs] UFS2-SU+J journal and filesystem corruption
o kern/173830 fs [zfs] Brain-dead simple change to ZFS error descriptio
o kern/173718 fs [zfs] phantom directory in zraid2 pool
f kern/173657 fs [nfs] strange UID map with nfsuserd
o kern/173363 fs [zfs] [panic] Panic on 'zpool replace' on readonly poo
o kern/173136 fs [unionfs] mounting above the NFS read-only share panic
o kern/172942 fs [smbfs] Unmounting a smb mount when the server became
o kern/172348 fs [unionfs] umount -f of filesystem in use with readonly
o kern/172334 fs [unionfs] unionfs permits recursive union mounts; caus
o kern/171626 fs [tmpfs] tmpfs should be noisier when the requested siz
o kern/171415 fs [zfs] zfs recv fails with "cannot receive incremental
o kern/170945 fs [gpt] disk layout not portable between direct connect
o bin/170778 fs [zfs] [panic] FreeBSD panics randomly
o kern/170680 fs [nfs] Multiple NFS Client bug in the FreeBSD 7.4-RELEA
o kern/170497 fs [xfs][panic] kernel will panic whenever I ls a mounted
o kern/169945 fs [zfs] [panic] Kernel panic while importing zpool (afte
o kern/169480 fs [zfs] ZFS stalls on heavy I/O
o kern/169398 fs [zfs] Can't remove file with permanent error
o kern/169339 fs panic while " : > /etc/123"
o kern/169319 fs [zfs] zfs resilver can't complete
o kern/168947 fs [nfs] [zfs] .zfs/snapshot directory is messed up when
o kern/168942 fs [nfs] [hang] nfsd hangs after being restarted (not -HU
o kern/168158 fs [zfs] incorrect parsing of sharenfs options in zfs (fs
o kern/167979 fs [ufs] DIOCGDINFO ioctl does not work on 8.2 file syste
o kern/167977 fs [smbfs] mount_smbfs results are differ when utf-8 or U
o kern/167688 fs [fusefs] Incorrect signal handling with direct_io
o kern/167685 fs [zfs] ZFS on USB drive prevents shutdown / reboot
o kern/167612 fs [portalfs] The portal file system gets stuck inside po
o kern/167272 fs [zfs] ZFS Disks reordering causes ZFS to pick the wron
o kern/167260 fs [msdosfs] msdosfs disk was mounted the second time whe
o kern/167109 fs [zfs] [panic] zfs diff kernel panic Fatal trap 9: gene
o kern/167105 fs [nfs] mount_nfs can not handle source exports wiht mor
o kern/167067 fs [zfs] [panic] ZFS panics the server
o kern/167065 fs [zfs] boot fails when a spare is the boot disk
o kern/167048 fs [nfs] [patch] RELEASE-9 crash when using ZFS+NULLFS+NF
o kern/166912 fs [ufs] [panic] Panic after converting Softupdates to jo
o kern/166851 fs [zfs] [hang] Copying directory from the mounted UFS di
o kern/166477 fs [nfs] NFS data corruption.
o kern/165950 fs [ffs] SU+J and fsck problem
o kern/165521 fs [zfs] [hang] livelock on 1 Gig of RAM with zfs when 31
o kern/165392 fs Multiple mkdir/rmdir fails with errno 31
o kern/165087 fs [unionfs] lock violation in unionfs
o kern/164472 fs [ufs] fsck -B panics on particular data inconsistency
o kern/164370 fs [zfs] zfs destroy for snapshot fails on i386 and sparc
o kern/164261 fs [nullfs] [patch] fix panic with NFS served from NULLFS
o kern/164256 fs [zfs] device entry for volume is not created after zfs
o kern/164184 fs [ufs] [panic] Kernel panic with ufs_makeinode
o kern/163801 fs [md] [request] allow mfsBSD legacy installed in 'swap'
o kern/163770 fs [zfs] [hang] LOR between zfs&syncer + vnlru leading to
o kern/163501 fs [nfs] NFS exporting a dir and a subdir in that dir to
o kern/162944 fs [coda] Coda file system module looks broken in 9.0
o kern/162860 fs [zfs] Cannot share ZFS filesystem to hosts with a hyph
o kern/162751 fs [zfs] [panic] kernel panics during file operations
o kern/162591 fs [nullfs] cross-filesystem nullfs does not work as expe
o kern/162519 fs [zfs] "zpool import" relies on buggy realpath() behavi
o kern/161968 fs [zfs] [hang] renaming snapshot with -r including a zvo
o kern/161864 fs [ufs] removing journaling from UFS partition fails on
o kern/161579 fs [smbfs] FreeBSD sometimes panics when an smb share is
o kern/161533 fs [zfs] [panic] zfs receive panic: system ioctl returnin
o kern/161438 fs [zfs] [panic] recursed on non-recursive spa_namespace_
o kern/161424 fs [nullfs] __getcwd() calls fail when used on nullfs mou
o kern/161280 fs [zfs] Stack overflow in gptzfsboot
o kern/161205 fs [nfs] [pfsync] [regression] [build] Bug report freebsd
o kern/161169 fs [zfs] [panic] ZFS causes kernel panic in dbuf_dirty
o kern/161112 fs [ufs] [lor] filesystem LOR in FreeBSD 9.0-BETA3
o kern/160893 fs [zfs] [panic] 9.0-BETA2 kernel panic
f kern/160860 fs [ufs] Random UFS root filesystem corruption with SU+J
o kern/160801 fs [zfs] zfsboot on 8.2-RELEASE fails to boot from root-o
o kern/160790 fs [fusefs] [panic] VPUTX: negative ref count with FUSE
o kern/160777 fs [zfs] [hang] RAID-Z3 causes fatal hang upon scrub/impo
o kern/160706 fs [zfs] zfs bootloader fails when a non-root vdev exists
o kern/160591 fs [zfs] Fail to boot on zfs root with degraded raidz2 [r
o kern/160410 fs [smbfs] [hang] smbfs hangs when transferring large fil
o kern/160283 fs [zfs] [patch] 'zfs list' does abort in make_dataset_ha
o kern/159930 fs [ufs] [panic] kernel core
o kern/159402 fs [zfs][loader] symlinks cause I/O errors
o kern/159357 fs [zfs] ZFS MAXNAMELEN macro has confusing name (off-by-
o kern/159356 fs [zfs] [patch] ZFS NAME_ERR_DISKLIKE check is Solaris-s
o kern/159351 fs [nfs] [patch] - divide by zero in mountnfs()
o kern/159251 fs [zfs] [request]: add FLETCHER4 as DEDUP hash option
o kern/159077 fs [zfs] Can't cd .. with latest zfs version
o kern/159048 fs [smbfs] smb mount corrupts large files
o kern/159045 fs [zfs] [hang] ZFS scrub freezes system
o kern/158839 fs [zfs] ZFS Bootloader Fails if there is a Dead Disk
o kern/158802 fs amd(8) ICMP storm and unkillable process.
o kern/158231 fs [nullfs] panic on unmounting nullfs mounted over ufs o
f kern/157929 fs [nfs] NFS slow read
o kern/157399 fs [zfs] trouble with: mdconfig force delete && zfs strip
o kern/157179 fs [zfs] zfs/dbuf.c: panic: solaris assert: arc_buf_remov
o kern/156797 fs [zfs] [panic] Double panic with FreeBSD 9-CURRENT and
o kern/156781 fs [zfs] zfs is losing the snapshot directory,
p kern/156545 fs [ufs] mv could break UFS on SMP systems
o kern/156193 fs [ufs] [hang] UFS snapshot hangs && deadlocks processes
o kern/156039 fs [nullfs] [unionfs] nullfs + unionfs do not compose, re
o kern/155615 fs [zfs] zfs v28 broken on sparc64 -current
o kern/155587 fs [zfs] [panic] kernel panic with zfs
p kern/155411 fs [regression] [8.2-release] [tmpfs]: mount: tmpfs : No
o kern/155199 fs [ext2fs] ext3fs mounted as ext2fs gives I/O errors
o bin/155104 fs [zfs][patch] use /dev prefix by default when importing
o kern/154930 fs [zfs] cannot delete/unlink file from full volume -> EN
o kern/154828 fs [msdosfs] Unable to create directories on external USB
o kern/154491 fs [smbfs] smb_co_lock: recursive lock for object 1
p kern/154228 fs [md] md getting stuck in wdrain state
o kern/153996 fs [zfs] zfs root mount error while kernel is not located
o kern/153753 fs [zfs] ZFS v15 - grammatical error when attempting to u
o kern/153716 fs [zfs] zpool scrub time remaining is incorrect
o kern/153695 fs [patch] [zfs] Booting from zpool created on 4k-sector
o kern/153680 fs [xfs] 8.1 failing to mount XFS partitions
o kern/153418 fs [zfs] [panic] Kernel Panic occurred writing to zfs vol
o kern/153351 fs [zfs] locking directories/files in ZFS
o bin/153258 fs [patch][zfs] creating ZVOLs requires `refreservation'
s kern/153173 fs [zfs] booting from a gzip-compressed dataset doesn't w
o bin/153142 fs [zfs] ls -l outputs `ls: ./.zfs: Operation not support
o kern/153126 fs [zfs] vdev failure, zpool=peegel type=vdev.too_small
o kern/152022 fs [nfs] nfs service hangs with linux client [regression]
o kern/151942 fs [zfs] panic during ls(1) zfs snapshot directory
o kern/151905 fs [zfs] page fault under load in /sbin/zfs
o bin/151713 fs [patch] Bug in growfs(8) with respect to 32-bit overfl
o kern/151648 fs [zfs] disk wait bug
o kern/151629 fs [fs] [patch] Skip empty directory entries during name
o kern/151330 fs [zfs] will unshare all zfs filesystem after execute a
o kern/151326 fs [nfs] nfs exports fail if netgroups contain duplicate
o kern/151251 fs [ufs] Can not create files on filesystem with heavy us
o kern/151226 fs [zfs] can't delete zfs snapshot
o kern/150503 fs [zfs] ZFS disks are UNAVAIL and corrupted after reboot
o kern/150501 fs [zfs] ZFS vdev failure vdev.bad_label on amd64
o kern/150390 fs [zfs] zfs deadlock when arcmsr reports drive faulted
o kern/150336 fs [nfs] mountd/nfsd became confused; refused to reload n
o kern/149208 fs mksnap_ffs(8) hang/deadlock
o kern/149173 fs [patch] [zfs] make OpenSolaris installa
o kern/149015 fs [zfs] [patch] misc fixes for ZFS code to build on Glib
o kern/149014 fs [zfs] [patch] declarations in ZFS libraries/utilities
o kern/149013 fs [zfs] [patch] make ZFS makefiles use the libraries fro
o kern/148504 fs [zfs] ZFS' zpool does not allow replacing drives to be
o kern/148490 fs [zfs]: zpool attach - resilver bidirectionally, and re
o kern/148368 fs [zfs] ZFS hanging forever on 8.1-PRERELEASE
o kern/148138 fs [zfs] zfs raidz pool commands freeze
o kern/147903 fs [zfs] [panic] Kernel panics on faulty zfs device
o kern/147881 fs [zfs] [patch] ZFS "sharenfs" doesn't allow different "
o kern/147420 fs [ufs] [panic] ufs_dirbad, nullfs, jail panic (corrupt
o kern/146941 fs [zfs] [panic] Kernel Double Fault - Happens constantly
o kern/146786 fs [zfs] zpool import hangs with checksum errors
o kern/146708 fs [ufs] [panic] Kernel panic in softdep_disk_write_compl
o kern/146528 fs [zfs] Severe memory leak in ZFS on i386
o kern/146502 fs [nfs] FreeBSD 8 NFS Client Connection to Server
o kern/145750 fs [unionfs] [hang] unionfs locks the machine
s kern/145712 fs [zfs] cannot offline two drives in a raidz2 configurat
o kern/145411 fs [xfs] [panic] Kernel panics shortly after mounting an
f bin/145309 fs bsdlabel: Editing disk label invalidates the whole dev
o kern/145272 fs [zfs] [panic] Panic during boot when accessing zfs on
o kern/145246 fs [ufs] dirhash in 7.3 gratuitously frees hashes when it
o kern/145238 fs [zfs] [panic] kernel panic on zpool clear tank
o kern/145229 fs [zfs] Vast differences in ZFS ARC behavior between 8.0
o kern/145189 fs [nfs] nfsd performs abysmally under load
o kern/144929 fs [ufs] [lor] vfs_bio.c + ufs_dirhash.c
p kern/144447 fs [zfs] sharenfs fsunshare() & fsshare_main() non functi
o kern/144416 fs [panic] Kernel panic on online filesystem optimization
s kern/144415 fs [zfs] [panic] kernel panics on boot after zfs crash
o kern/144234 fs [zfs] Cannot boot machine with recent gptzfsboot code
o kern/143825 fs [nfs] [panic] Kernel panic on NFS client
o bin/143572 fs [zfs] zpool(1): [patch] The verbose output from iostat
o kern/143212 fs [nfs] NFSv4 client strange work ...
o kern/143184 fs [zfs] [lor] zfs/bufwait LOR
o kern/142878 fs [zfs] [vfs] lock order reversal
o kern/142597 fs [ext2fs] ext2fs does not work on filesystems with real
o kern/142489 fs [zfs] [lor] allproc/zfs LOR
o kern/142466 fs Update 7.2 -> 8.0 on Raid 1 ends with screwed raid [re
o kern/142306 fs [zfs] [panic] ZFS drive (from OSX Leopard) causes two
o kern/142068 fs [ufs] BSD labels are got deleted spontaneously
o kern/141950 fs [unionfs] [lor] ufs/unionfs/ufs Lock order reversal
o kern/141897 fs [msdosfs] [panic] Kernel panic. msdofs: file name leng
o kern/141463 fs [nfs] [panic] Frequent kernel panics after upgrade fro
o kern/141091 fs [patch] [nullfs] fix panics with DIAGNOSTIC enabled
o kern/141086 fs [nfs] [panic] panic("nfs: bioread, not dir") on FreeBS
o kern/141010 fs [zfs] "zfs scrub" fails when backed by files in UFS2
o kern/140888 fs [zfs] boot fail from zfs root while the pool resilveri
o kern/140661 fs [zfs] [patch] /boot/loader fails to work on a GPT/ZFS-
o kern/140640 fs [zfs] snapshot crash
o kern/140068 fs [smbfs] [patch] smbfs does not allow semicolon in file
o kern/139725 fs [zfs] zdb(1) dumps core on i386 when examining zpool c
o kern/139715 fs [zfs] vfs.numvnodes leak on busy zfs
p bin/139651 fs [nfs] mount(8): read-only remount of NFS volume does n
o kern/139407 fs [smbfs] [panic] smb mount causes system crash if remot
o kern/138662 fs [panic] ffs_blkfree: freeing free block
o kern/138421 fs [ufs] [patch] remove UFS label limitations
o kern/138202 fs mount_msdosfs(1) see only 2Gb
o kern/137588 fs [unionfs] [lor] LOR nfs/ufs/nfs
o kern/136968 fs [ufs] [lor] ufs/bufwait/ufs (open)
o kern/136945 fs [ufs] [lor] filedesc structure/ufs (poll)
o kern/136944 fs [ffs] [lor] bufwait/snaplk (fsync)
o kern/136873 fs [ntfs] Missing directories/files on NTFS volume
p kern/136470 fs [nfs] Cannot mount / in read-only, over NFS
o kern/135546 fs [zfs] zfs.ko module doesn't ignore zpool.cache filenam
o kern/135469 fs [ufs] [panic] kernel crash on md operation in ufs_dirb
o kern/135050 fs [zfs] ZFS clears/hides disk errors on reboot
o kern/134491 fs [zfs] Hot spares are rather cold...
o kern/133676 fs [smbfs] [panic] umount -f'ing a vnode-based memory dis
p kern/133174 fs [msdosfs] [patch] msdosfs must support multibyte inter
o kern/132960 fs [ufs] [panic] panic:ffs_blkfree: freeing free frag
o kern/132397 fs reboot causes filesystem corruption (failure to sync b
o kern/132331 fs [ufs] [lor] LOR ufs and syncer
o kern/132237 fs [msdosfs] msdosfs has problems to read MSDOS Floppy
o kern/132145 fs [panic] File System Hard Crashes
o kern/131441 fs [unionfs] [nullfs] unionfs and/or nullfs not combineab
o kern/131360 fs [nfs] poor scaling behavior of the NFS server under lo
o kern/131342 fs [nfs] mounting/unmounting of disks causes NFS to fail
o bin/131341 fs makefs: error "Bad file descriptor" on the mount poin
o kern/130920 fs [msdosfs] cp(1) takes 100% CPU time while copying file
o kern/130210 fs [nullfs] Error by check nullfs
o kern/129760 fs [nfs] after 'umount -f' of a stale NFS share FreeBSD l
o kern/129488 fs [smbfs] Kernel "bug" when using smbfs in smbfs_smb.c:
o kern/129231 fs [ufs] [patch] New UFS mount (norandom) option - mostly
o kern/129152 fs [panic] non-userfriendly panic when trying to mount(8)
o kern/127787 fs [lor] [ufs] Three LORs: vfslock/devfs/vfslock, ufs/vfs
o bin/127270 fs fsck_msdosfs(8) may crash if BytesPerSec is zero
o kern/127029 fs [panic] mount(8): trying to mount a write protected zi
o kern/126973 fs [unionfs] [hang] System hang with unionfs and init chr
o kern/126553 fs [unionfs] unionfs move directory problem 2 (files appe
o kern/126287 fs [ufs] [panic] Kernel panics while mounting an UFS file
o kern/125895 fs [ffs] [panic] kernel: panic: ffs_blkfree: freeing free
s kern/125738 fs [zfs] [request] SHA256 acceleration in ZFS
o kern/123939 fs [msdosfs] corrupts new files
o bin/123574 fs [unionfs] df(1) -t option destroys info for unionfs (a
o kern/122380 fs [ffs] ffs_valloc:dup alloc (Soekris 4801/7.0/USB Flash
o bin/122172 fs [fs]: amd(8) automount daemon dies on 6.3-STABLE i386,
o bin/121898 fs [nullfs] pwd(1)/getcwd(2) fails with Permission denied
o kern/121385 fs [unionfs] unionfs cross mount -> kernel panic
o bin/121072 fs [smbfs] mount_smbfs(8) cannot normally convert the cha
o kern/120483 fs [ntfs] [patch] NTFS filesystem locking changes
o kern/120482 fs [ntfs] [patch] Sync style changes between NetBSD and F
o kern/118912 fs [2tb] disk sizing/geometry problem with large array
o kern/118713 fs [minidump] [patch] Display media size required for a k
o kern/118318 fs [nfs] NFS server hangs under special circumstances
o bin/118249 fs [ufs] mv(1): moving a directory changes its mtime
o kern/118126 fs [nfs] [patch] Poor NFS server write performance
o kern/118107 fs [ntfs] [panic] Kernel panic when accessing a file at N
o kern/117954 fs [ufs] dirhash on very large directories blocks the mac
o bin/117315 fs [smbfs] mount_smbfs(8) and related options can't mount
o kern/117158 fs [zfs] zpool scrub causes panic if geli vdevs detach on
o bin/116980 fs [msdosfs] [patch] mount_msdosfs(8) resets some flags f
o conf/116931 fs lack of fsck_cd9660 prevents mounting iso images with
o kern/116583 fs [ffs] [hang] System freezes for short time when using
o bin/115361 fs [zfs] mount(8) gets into a state where it won't set/un
o kern/114955 fs [cd9660] [patch] [request] support for mask,dirmask,ui
o kern/114847 fs [ntfs] [patch] [request] dirmask support for NTFS ala
o kern/114676 fs [ufs] snapshot creation panics: snapacct_ufs2: bad blo
o bin/114468 fs [patch] [request] add -d option to umount(8) to detach
o kern/113852 fs [smbfs] smbfs does not properly implement DFS referral
o bin/113838 fs [patch] [request] mount(8): add support for relative p
o bin/113049 fs [patch] [request] make quot(8) use getopt(3) and show
o kern/112658 fs [smbfs] [patch] smbfs and caching problems (resolves b
o kern/111843 fs [msdosfs] Long Names of files are incorrectly created
o kern/111782 fs [ufs] dump(8) fails horribly for large filesystems
s bin/111146 fs [2tb] fsck(8) fails on 6T filesystem
o bin/107829 fs [2TB] fdisk(8): invalid boundary checking in fdisk / w
o kern/106107 fs [ufs] left-over fsck_snapshot after unfinished backgro
o kern/104406 fs [ufs] Processes get stuck in "ufs" state under persist
o kern/104133 fs [ext2fs] EXT2FS module corrupts EXT2/3 filesystems
o kern/103035 fs [ntfs] Directories in NTFS mounted disc images appear
o kern/101324 fs [smbfs] smbfs sometimes not case sensitive when it's s
o kern/99290 fs [ntfs] mount_ntfs ignorant of cluster sizes
s bin/97498 fs [request] newfs(8) has no option to clear the first 12
o kern/97377 fs [ntfs] [patch] syntax cleanup for ntfs_ihash.c
o kern/95222 fs [cd9660] File sections on ISO9660 level 3 CDs ignored
o kern/94849 fs [ufs] rename on UFS filesystem is not atomic
o bin/94810 fs fsck(8) incorrectly reports 'file system marked clean'
o kern/94769 fs [ufs] Multiple file deletions on multi-snapshotted fil
o kern/94733 fs [smbfs] smbfs may cause double unlock
o kern/93942 fs [vfs] [patch] panic: ufs_dirbad: bad dir (patch from D
o kern/92272 fs [ffs] [hang] Filling a filesystem while creating a sna
o kern/91134 fs [smbfs] [patch] Preserve access and modification time
a kern/90815 fs [smbfs] [patch] SMBFS with character conversions somet
o kern/88657 fs [smbfs] windows client hang when browsing a samba shar
o kern/88555 fs [panic] ffs_blkfree: freeing free frag on AMD 64
o bin/87966 fs [patch] newfs(8): introduce -A flag for newfs to enabl
o kern/87859 fs [smbfs] System reboot while umount smbfs.
o kern/86587 fs [msdosfs] rm -r /PATH fails with lots of small files
o bin/85494 fs fsck_ffs: unchecked use of cg_inosused macro etc.
o kern/80088 fs [smbfs] Incorrect file time setting on NTFS mounted vi
o bin/74779 fs Background-fsck checks one filesystem twice and omits
o kern/73484 fs [ntfs] Kernel panic when doing `ls` from the client si
o bin/73019 fs [ufs] fsck_ufs(8) cannot alloc 607016868 bytes for ino
o kern/71774 fs [ntfs] NTFS cannot "see" files on a WinXP filesystem
o bin/70600 fs fsck(8) throws files away when it can't grow lost+foun
o kern/68978 fs [panic] [ufs] crashes with failing hard disk, loose po
o kern/67326 fs [msdosfs] crash after attempt to mount write protected
o kern/65920 fs [nwfs] Mounted Netware filesystem behaves strange
o kern/65901 fs [smbfs] [patch] smbfs fails fsx write/truncate-down/tr
o kern/61503 fs [smbfs] mount_smbfs does not work as non-root
o kern/55617 fs [smbfs] Accessing an nsmb-mounted drive via a smb expo
o kern/51685 fs [hang] Unbounded inode allocation causes kernel to loc
o kern/36566 fs [smbfs] System reboot with dead smb mount and umount
o bin/27687 fs fsck(8) wrapper is not properly passing options to fsc
o kern/18874 fs [2TB] 32bit NFS servers export wrong negative values t
o kern/9619 fs [nfs] Restarting mountd kills existing mounts

337 problems total.

From owner-freebsd-fs@FreeBSD.ORG Mon Dec  9 18:24:03 2013
From: Matthew Ahrens
To: Jason Keltz
Cc: freebsd-fs, Eric.Shrock@delphix.com
Date: Mon, 9 Dec 2013 10:24:03 -0800
Subject: Re: question about zfs written property
Jason, I'm cc'ing the freebsd mailing list as well.  In general that is a
better forum for questions about how to use ZFS.

On Mon, Dec 9, 2013 at 9:04 AM, Jason Keltz wrote:
> Hi..
>
> I saw your names on the feature addition in illumos for the "written"
> property for ZFS:
>
> https://www.illumos.org/issues/1645
>
> I had a question and was hoping you might have a moment to answer.
>
> I'm rsyncing data from a Linux-based system to a ZFS backup and archive
> server.  It's running FreeBSD, but it's the same ZFS code base as
> illumos.  I'm seeing (what I think are) some weird numbers looking at
> the ZFS "written" property ...
>
> For example:
>
> Sat Dec  7 01:05:00 EST 2013 sync start rsync://backup@forest-mrpriv/home9
>   /local/backup/home9 (189G/264G/1.41x)
> Sat Dec  7 01:33:20 EST 2013 sync finish rsync://backup@forest-mrpriv/home9
>   /local/backup/home9 (190G/265G/1.41x)
> Sat Dec  7 01:33:20 EST 2013 sync elapsed 00h:28m:20s
>   rsync://backup@forest-mrpriv/home9 /local/backup/home9, 514M unarchived
> Sat Dec  7 06:32:58 EST 2013 archive create home9 daily 20131207 as
>   pool1/backup/home9@20131207, 518M
>
> In the third line, where you see "514M unarchived", I write out the
> "written" property after the rsync completes.  However, when the archive
> (just a snapshot) runs (hours later), there's 4 MB more data!?  Nothing
> touches the data after the rsync completes.  Both lines are probing the
> same property on the same dataset.  How can they get a different result?

If you are getting the "written" property just after the rsync completes,
it's possible that there is still some data "in flight" inside ZFS.  If
you run "sync", that should flush out all the dirty data and update the
space accounting.  Unfortunately this is only documented in the description
of the "used" property; we should add similar qualifiers to "available",
"referenced", "written", "logicalreferenced", etc.:

    The amount of space used, available, or referenced does not take
    into account pending changes.  Pending changes are generally
    accounted for within a few seconds.  Committing a change to a disk
    using fsync(3c) or O_SYNC does not necessarily guarantee that the
    space usage information is updated immediately.

> On another note, it seems there's also a minor dependency in the first
> and second lines as well.  (189G/264G/1.41x) refers to the
> (used/lused/compressratio) properties.  I don't know how they get
> rounded, but if there's 500 MB added, I would have thought that 189
> should have been something like 189.5 beforehand?  But that's a
> different issue.

The rounding is generally to 3 significant digits, and it always rounds
down.  See zfs_nicenum().  Use "zfs get -p" if you want exact numbers.

> Sometimes, the numbers are the same, like here...
> Sat Dec  7 01:33:21 EST 2013 sync start rsync://backup@mint-mrpriv/home10
>   /local/backup/home10 (143G/221G/1.60x)
> Sat Dec  7 04:49:17 EST 2013 sync finish rsync://backup@mint-mrpriv/home10
>   /local/backup/home10 (144G/222G/1.60x)
> Sat Dec  7 04:49:17 EST 2013 sync elapsed 03h:15m:56s
>   rsync://backup@mint-mrpriv/home10 /local/backup/home10, 485M unarchived
> Sat Dec  7 06:33:01 EST 2013 archive create home10 daily 20131207 as
>   pool1/backup/home10@20131207, 485M
>
> Other times they are 1 off again ...
>
> Sat Dec  7 04:49:23 EST 2013 sync start rsync://backup@forest-mrpriv/dept
>   /local/backup/dept (89.6G/144G/1.68x)
> Sat Dec  7 05:19:20 EST 2013 sync finish rsync://backup@forest-mrpriv/dept
>   /local/backup/dept (89.7G/144G/1.68x)
> Sat Dec  7 05:19:20 EST 2013 sync elapsed 00h:29m:57s
>   rsync://backup@forest-mrpriv/dept /local/backup/dept, 127M unarchived
> Sat Dec  7 06:32:59 EST 2013 archive create dept daily 20131207 as
>   pool1/backup/dept@20131207, 128M
>
> Here's a discrepancy again ...
>
> Sat Dec  7 05:45:46 EST 2013 sync start
>   rsync://backup@bronze-mrpriv/mysqlbackup.bronze
>   /local/backup/mysqlbackup.bronze (20.4M/20.6M/1.01x)
> Sat Dec  7 05:45:47 EST 2013 sync finish
>   rsync://backup@bronze-mrpriv/mysqlbackup.bronze
>   /local/backup/mysqlbackup.bronze (20.4M/20.6M/1.01x)
> Sat Dec  7 05:45:47 EST 2013 sync elapsed 00h:00m:01s
>   rsync://backup@bronze-mrpriv/mysqlbackup.bronze
>   /local/backup/mysqlbackup.bronze, no new data
> Sat Dec  7 06:33:01 EST 2013 archive create mysqlbackup.bronze daily
>   20131207 as pool1/backup/mysqlbackup.bronze@20131207, 6.82M
>
> For "no new data" to be printed on the third line, "written" would have
> had to be 0.  And since (used/lused/compressratio) is the same for both
> the first and second line, at the end of the backup there seemed to be
> no different data there than before... yet a while later, the
> archive/snapshot runs, and there's 6.82 MB of new data.
>
> I'm just wondering if this behavior is odd, or if this is some kind of
> cache issue.
>
> I'm using full disks, and an LSI HBA, so there's no oddness related to
> using ZFS with an underlying RAID controller card.
>
> I can send this to the FreeBSD filesystem list, but I figured I would
> try here first.
>
> Thanks in advance for any help you might be able to provide...
>
> Jason.
From owner-freebsd-fs@FreeBSD.ORG Mon Dec  9 18:58:48 2013
From: Jason Keltz
To: Matthew Ahrens
Cc: freebsd-fs
Date: Mon, 09 Dec 2013 13:42:45 -0500
Subject: Re: question about zfs written property
On 12/09/2013 01:24 PM, Matthew Ahrens wrote:
> Jason, I'm cc'ing the freebsd mailing list as well.  In general that
> is a better forum for questions about how to use ZFS.

Thanks..

> [original question quoted in full; elided here, see the previous message]
>
> If you are getting the "written" property just after the rsync
> completes, it's possible that there is still some data "in flight"
> inside ZFS.  If you run "sync", that should flush out all the dirty
> data and update the space accounting.  Unfortunately this is only
> documented in the description of the "used" property; we should add
> similar qualifiers to "available", "referenced", "written",
> "logicalreferenced", etc.:
>
>     The amount of space used, available, or referenced does not take
>     into account pending changes.  Pending changes are generally
>     accounted for within a few seconds.  Committing a change to a disk
>     using fsync(3c) or O_SYNC does not necessarily guarantee that the
>     space usage information is updated immediately.

Yes.  I repeated the test.  It takes an additional 3 seconds after my 6 MB
rsync completes for "written" to produce the correct result.  The result
is repeatable.  It doesn't appear that running "sync" actually reduces the
delay.  A sleep() works though :) (I know I could sure use one!)

Jason.
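[To make the exchange above concrete: a minimal sketch of the check being
discussed, using only documented zfs(8) flags ("-p" for exact byte counts,
"-H" for script-friendly output).  The dataset name pool1/backup/home9 is
taken from the log excerpts above, and the polling loop is illustrative,
not something either poster says they run.]

  # Force a txg commit, then read the exact (unrounded) value rather than
  # the rounded-down, 3-significant-digit display value from zfs_nicenum():
  sync
  zfs get -Hp -o value written pool1/backup/home9

  # "written" can lag the last write by a few seconds while dirty data is
  # still in flight, so poll until the value stops changing:
  prev=-1
  while :; do
      cur=$(zfs get -Hp -o value written pool1/backup/home9)
      [ "$cur" = "$prev" ] && break
      prev=$cur
      sleep 1
  done
  echo "written settled at $cur bytes"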
From owner-freebsd-fs@FreeBSD.ORG Tue Dec 10 10:22:50 2013
From: Florent Peterschmitt <florent@peterschmitt.fr>
To: freebsd-stable, freebsd-fs@freebsd.org
Date: Tue, 10 Dec 2013 11:22:31 +0100
Subject: 10.0-BETA4 (upgraded from 9.2-RELEASE) zpool upgrade -> boot failure

Hi,

Is there anything known about ZFS under 10.0-BETA4 when FreeBSD was
upgraded from 9.2-RELEASE?

I have two servers with very different hardware (one uses soft RAID and
the other does not), and after a zpool upgrade there is no way to get
either server booting.  Did I miss something when upgrading?

I cannot get the error message for the moment.  I reinstalled the RAID
server under Linux, and the other one is waiting for a vKVM…

--
Florent Peterschmitt          | Please:
florent@peterschmitt.fr       |  * Avoid HTML/RTF in E-mail.
+33 (0)6 64 33 97 92          |  * Send PDF for documents.
http://florent.peterschmitt.fr |  * Trim your quotations. Really.
Proudly powered by Open Source | Thank you :)
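[The resolution is never confirmed in this thread, but a common cause of
exactly this symptom is that "zpool upgrade" enables on-disk pool features
the old boot blocks cannot read, so the boot code on every disk the pool
can boot from must be rewritten afterwards.  A hedged sketch for a
GPT-partitioned disk; the disk name ada0 and the partition index 1 are
placeholders for the actual layout, which "gpart show" reveals.]

  # Rewrite the protective MBR and the ZFS-aware gptzfsboot stage onto the
  # freebsd-boot partition; repeat for each bootable disk in the pool:
  gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0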
From owner-freebsd-fs@FreeBSD.ORG Tue Dec 10 21:16:14 2013
From: Jason Keltz
To: FreeBSD Filesystems
Cc: Steve Dickson
Date: Tue, 10 Dec 2013 16:16:06 -0500
Subject: mount ZFS snapshot on Linux system

I'm running FreeBSD 9.2 with various ZFS datasets.
I export a dataset to a Linux system (RHEL64), and mount it.  It works
fine...
When I try to access the ZFS snapshot directory on the Linux NFS client,
things go weird.
With NFSv4:

[jas@archive /]# cd /mnt/.zfs/snapshot
[jas@archive snapshot]# ls
20131203  20131205  20131206  20131207  20131208  20131209  20131210
[jas@archive snapshot]# cd 20131210
20131210: Not a directory.

huh?

[jas@archive snapshot]# ls -al
total 77
dr-xr-xr-x   9 root root   9 Dec 10 11:20 .
dr-xr-xr-x   4 root root   4 Nov 28 15:42 ..
drwxr-xr-x 380 root root 380 Dec  2 15:56 20131203
drwxr-xr-x 381 root root 381 Dec  3 11:24 20131205
drwxr-xr-x 381 root root 381 Dec  3 11:24 20131206
drwxr-xr-x 381 root root 381 Dec  3 11:24 20131207
drwxr-xr-x 381 root root 381 Dec  3 11:24 20131208
drwxr-xr-x 381 root root 381 Dec  3 11:24 20131209
drwxr-xr-x 381 root root 381 Dec  3 11:24 20131210
[jas@archive snapshot]# stat *
[jas@archive snapshot]# ls -al
total 292
dr-xr-xr-x 9 root root       9 Dec 10 11:20 .
dr-xr-xr-x 4 root root       4 Nov 28 15:42 ..
-rw-r--r-- 1 uax  guest 137647 Mar 17  2010 20131203
-rw-r--r-- 1 uax  guest    865 Jul 31  2009 20131205
-rw-r--r-- 1 uax  guest 137647 Mar 17  2010 20131206
-rw-r--r-- 1 uax  guest    771 Jul 31  2009 20131207
-rw-r--r-- 1 uax  guest    778 Jul 31  2009 20131208
-rw-r--r-- 1 uax  guest   5281 Jul 31  2009 20131209
-rw------- 1 btx  faculty  893 Jul 13 20:21 20131210

But it gets even more fun..

# ls -ali
total 205
  2 dr-xr-xr-x   9 root root      9 Dec 10 11:20 .
  1 dr-xr-xr-x   4 root root      4 Nov 28 15:42 ..
863 -rw-r--r--   1 uax  guest 137647 Mar 17  2010 20131203
  4 drwxr-xr-x 381 root root    381 Dec  3 11:24 20131205
  4 drwxr-xr-x 381 root root    381 Dec  3 11:24 20131206
  4 drwxr-xr-x 381 root root    381 Dec  3 11:24 20131207
  4 drwxr-xr-x 381 root root    381 Dec  3 11:24 20131208
  4 drwxr-xr-x 381 root root    381 Dec  3 11:24 20131209
  4 drwxr-xr-x 381 root root    381 Dec  3 11:24 20131210

This is not a user id mapping issue because all the files in /mnt have the
proper owner/groups, and I can access them there fine.

I also tried explicitly exporting .zfs/snapshot.  The result isn't any
different.

If I use NFSv3 it "works", but I'm seeing a whole lot of errors like these
in syslog:

Dec 10 12:32:28 jungle mountd[49579]: can't delete exports for
    /local/backup/home9/.zfs/snapshot/20131203: Invalid argument
Dec 10 12:32:28 jungle mountd[49579]: can't delete exports for
    /local/backup/home9/.zfs/snapshot/20131209: Invalid argument
Dec 10 12:32:28 jungle mountd[49579]: can't delete exports for
    /local/backup/home9/.zfs/snapshot/20131210: Invalid argument
Dec 10 12:32:28 jungle mountd[49579]: can't delete exports for
    /local/backup/home9/.zfs/snapshot/20131207: Invalid argument

It's not clear to me why this doesn't just "work".

Can anyone provide any advice on debugging this?

Thanks,

Jason.

--
Jason Keltz
Manager of Development
Department of Electrical Engineering and Computer Science
York University, Toronto, Canada
Tel: 416-736-2100 x. 33570
Fax: 416-736-5872
From owner-freebsd-fs@FreeBSD.ORG Wed Dec 11 00:21:36 2013
From: Rick Macklem
To: Jason Keltz
Cc: FreeBSD Filesystems, Steve Dickson
Date: Tue, 10 Dec 2013 19:21:34 -0500 (EST)
Subject: Re: mount ZFS snapshot on Linux system

Jason Keltz wrote:
> I'm running FreeBSD 9.2 with various ZFS datasets.
> I export a dataset to a Linux system (RHEL64), and mount it.  It works
> fine...
> When I try to access the ZFS snapshot directory on the Linux NFS
> client, things go weird.
>
> [ls output and mountd errors elided; see the previous message]
>
> It's not clear to me why this doesn't just "work".
>
> Can anyone provide any advice on debugging this?
As I think you already know, I know nothing about ZFS and never use it.

Having said that, I suspect that there are filenos (i-node #s) that are
the same in the snapshot as in the parent file system tree.

The basic assumptions are:
- within a file system, all i-node #s are unique (represent one file
  object only) and all file objects have the same fsid
- when the fsid changes, that indicates a file system boundary, and
  filenos (i-node #s) can be reused in the subtree with a different fsid

For NFSv3, the server should export single volumes only (all objects have
the same fsid and the filenos are unique).  This is indicated to the VFS
by the use of the NOCROSSMOUNT flag on VOP_LOOKUP() and friends.

For NFSv4, the server does export multiple volumes, and the boundary is
indicated by a change in fsid value.

I suspect ZFS snapshots don't obey the above in some way, but that is
just a hunch.

Now, how to narrow this down...
- Do the above tests (both NFSv4 and NFSv3) and capture the packets, then
  look at them in wireshark.  In particular, look at the fileid numbers
  and fsid values for the various directories under .zfs.
- Try mounting the individual snapshot directory, like
  .zfs/snapshot/20131209, and see if that works (for both NFSv3 and NFSv4).
- Try doing the mounts with a FreeBSD client and see if you get the same
  behaviour?

rick
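[A rough client-side proxy for the packet capture Rick suggests: the
device number a Linux NFS client assigns to each object tracks the fsid it
received, so comparing device and inode numbers across the snapshot
directories hints at where the file system boundaries fall.  A sketch
assuming GNU coreutils stat on the RHEL client and the mount point from
the messages above:]

  cd /mnt/.zfs/snapshot
  for d in */; do
      # %d = device number (one per fsid the client sees), %i = inode number
      stat -c 'dev=%d ino=%i  %n' "$d"
  done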
From owner-freebsd-fs@FreeBSD.ORG Wed Dec 11 01:07:33 2013
From: Artem Belevich
To: Florent Peterschmitt
Cc: freebsd-fs, freebsd-stable
Date: Tue, 10 Dec 2013 17:07:32 -0800
Subject: Re: 10.0-BETA4 (upgraded from 9.2-RELEASE) zpool upgrade -> boot failure

On Tue, Dec 10, 2013 at 2:22 AM, Florent Peterschmitt wrote:
> Hi,
>
> Is there anything known about ZFS under 10.0-BETA4 when FreeBSD was
> upgraded from 9.2-RELEASE?
>
> I have two servers with very different hardware (one uses soft RAID and
> the other does not), and after a zpool upgrade there is no way to get
> either server booting.

It may help if you could provide details on how exactly "no way to get the
system booting" manifests itself.  Serial console capture of a failed boot
would be a good start.

Did it get to the loader?  Did the loader manage to read loader.conf and
load kernel and modules?  Did the kernel start?  Did the kernel manage to
mount root FS?
--Artem From owner-freebsd-fs@FreeBSD.ORG Wed Dec 11 04:08:32 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 05072B45 for ; Wed, 11 Dec 2013 04:08:32 +0000 (UTC) Received: from nm17-vm1.bullet.mail.bf1.yahoo.com (nm17-vm1.bullet.mail.bf1.yahoo.com [98.139.213.55]) by mx1.freebsd.org (Postfix) with SMTP id B2D8C102C for ; Wed, 11 Dec 2013 04:08:31 +0000 (UTC) Received: from [98.139.212.153] by nm17.bullet.mail.bf1.yahoo.com with NNFMP; 11 Dec 2013 04:08:25 -0000 Received: from [68.142.230.77] by tm10.bullet.mail.bf1.yahoo.com with NNFMP; 11 Dec 2013 04:08:25 -0000 Received: from [127.0.0.1] by smtp234.mail.bf1.yahoo.com with NNFMP; 11 Dec 2013 04:08:25 -0000 X-Yahoo-Newman-Id: 140552.67288.bm@smtp234.mail.bf1.yahoo.com X-Yahoo-Newman-Property: ymail-3 X-YMail-OSG: xUVb2AYVM1nv3UxtblXQP2FtmBG7qx1IJ1XO7rgOSlD102Z _UaliXcFwhQmqmLxCLuHunEYsZ1L892ee.jGNZZT6faJRrsGu19sVYFLj7Jx Z3qyZDg3.Jk6B7dR5w4l52.zwHpgowl._Ozv_ZU8DsyZGTOjY8jARDKnFJAr XrH59iF6YWthEA1Lv.rhViP3ezWHt0W9MDU9pP20aVdHUkXTty0ZVzO.faPw pGQNQ95aIzq7d2qqj1obUPWdAAYPBj7eObi1Numm37RBWOgAWYVzexACuZOV qpv45D2MXjNedOUjcP4lokZ3KhF.h9D8V2Xwvh2CKhu_gaKlu3QMmeQN2s3h R7r2qz8eqcYOfZNhbW1ga7QoBneSN7cuyeBS6jVQoGzBQca.sa_qSW8pCng4 rjXjzsHfsVKWJ.AO_PHtUhIS85vlDcx5C97k7Lt..rKND2RRKeh_3kDSDvzj W4.SCeIJuFeT7eLxXtsLUtbRNwrXjadeLSF5QVJCf2ZeRemWkQaelTeD4OrQ cepMCZxEyY3ykYTqj4XcRU4qXRLRC4JZwTAeY.ojAubNOFHsz390WJE_WMiM q1fmAhjVhvMd3ehvuT.X1yzTVaMg39_9bYV6vUMgpGo3mtk4z_QgN_47sluq EZ2YmjJbQ61IEk8lpa2zgOQyBe3tdQpLKOI0GnZjFdsILH01hqJSdBDxOZ22 ywfrfpnVe5mCkwXs.urZ272UopiX6EypTnR5uZKAr8da1ni2iCjUdElQ.aau 1nIfl3MAvHZJ8c.M7ktQKUIy7u3ZydeuQUMN3dfwxwWrf2SIbeZY6HixVY.q mWLaUhO4TO3wEqJUmpIan1Ijj36gUTsZfsE9P96hR3Smhx4eC7mYlObALL68 xau4fbUXIyQyEJg-- X-Yahoo-SMTP: hdvk3SuswBDjqWuLIhjJ7cQT_83YtZNiMmKQOSuhvZGxXQ-- X-Rocket-Received: from [192.168.1.105] (jas@99.238.41.227 with plain [63.250.193.228]) by smtp234.mail.bf1.yahoo.com with SMTP; 11 Dec 2013 04:08:25 +0000 UTC Message-ID: <52A7E53D.8000002@cse.yorku.ca> Date: Tue, 10 Dec 2013 23:08:29 -0500 From: Jason Keltz User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:24.0) Gecko/20100101 Thunderbird/24.1.1 MIME-Version: 1.0 To: Rick Macklem Subject: Re: mount ZFS snapshot on Linux system References: <1094466847.28925085.1386721294944.JavaMail.root@uoguelph.ca> In-Reply-To: <1094466847.28925085.1386721294944.JavaMail.root@uoguelph.ca> Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 7bit Cc: FreeBSD Filesystems , Steve Dickson X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 11 Dec 2013 04:08:32 -0000 On 10/12/2013 7:21 PM, Rick Macklem wrote: > Jason Keltz wrote: >> I'm running FreeBSD 9.2 with various ZFS datasets. >> I export a dataset to a Linux system (RHEL64), and mount it. It >> works >> fine... >> When I try to access the ZFS snapshot directory on the Linux NFS >> client, >> things go weird. >> >> With NFSv4: >> >> [jas@archive /]# cd /mnt/.zfs/snapshot >> [jas@archive snapshot]# ls >> 20131203 20131205 20131206 20131207 20131208 20131209 20131210 >> [jas@archive snapshot]# cd 20131210 >> 20131210: Not a directory. >> >> huh? >> >> [jas@archive snapshot]# ls -al >> total 77 >> dr-xr-xr-x 9 root root 9 Dec 10 11:20 . 
>> dr-xr-xr-x 4 root root 4 Nov 28 15:42 ..
>> drwxr-xr-x 380 root root 380 Dec 2 15:56 20131203
>> drwxr-xr-x 381 root root 381 Dec 3 11:24 20131205
>> drwxr-xr-x 381 root root 381 Dec 3 11:24 20131206
>> drwxr-xr-x 381 root root 381 Dec 3 11:24 20131207
>> drwxr-xr-x 381 root root 381 Dec 3 11:24 20131208
>> drwxr-xr-x 381 root root 381 Dec 3 11:24 20131209
>> drwxr-xr-x 381 root root 381 Dec 3 11:24 20131210
>> [jas@archive snapshot]# stat *
>> [jas@archive snapshot]# ls -al
>> total 292
>> dr-xr-xr-x 9 root root 9 Dec 10 11:20 .
>> dr-xr-xr-x 4 root root 4 Nov 28 15:42 ..
>> -rw-r--r-- 1 uax guest 137647 Mar 17 2010 20131203
>> -rw-r--r-- 1 uax guest    865 Jul 31 2009 20131205
>> -rw-r--r-- 1 uax guest 137647 Mar 17 2010 20131206
>> -rw-r--r-- 1 uax guest    771 Jul 31 2009 20131207
>> -rw-r--r-- 1 uax guest    778 Jul 31 2009 20131208
>> -rw-r--r-- 1 uax guest   5281 Jul 31 2009 20131209
>> -rw------- 1 btx faculty  893 Jul 13 20:21 20131210
>>
>> But it gets even more fun...
>>
>> # ls -ali
>> total 205
>>   2 dr-xr-xr-x 9 root root 9 Dec 10 11:20 .
>>   1 dr-xr-xr-x 4 root root 4 Nov 28 15:42 ..
>> 863 -rw-r--r-- 1 uax guest 137647 Mar 17 2010 20131203
>>   4 drwxr-xr-x 381 root root 381 Dec 3 11:24 20131205
>>   4 drwxr-xr-x 381 root root 381 Dec 3 11:24 20131206
>>   4 drwxr-xr-x 381 root root 381 Dec 3 11:24 20131207
>>   4 drwxr-xr-x 381 root root 381 Dec 3 11:24 20131208
>>   4 drwxr-xr-x 381 root root 381 Dec 3 11:24 20131209
>>   4 drwxr-xr-x 381 root root 381 Dec 3 11:24 20131210
>>
>> This is not a user id mapping issue, because all the files in /mnt
>> have the proper owners/groups, and I can access them there fine.
>>
>> I also tried explicitly exporting .zfs/snapshot. The result isn't
>> any different.
>>
>> If I use NFSv3 it "works", but I'm seeing a whole lot of errors like
>> these in syslog:
>>
>> Dec 10 12:32:28 jungle mountd[49579]: can't delete exports for
>> /local/backup/home9/.zfs/snapshot/20131203: Invalid argument
>> Dec 10 12:32:28 jungle mountd[49579]: can't delete exports for
>> /local/backup/home9/.zfs/snapshot/20131209: Invalid argument
>> Dec 10 12:32:28 jungle mountd[49579]: can't delete exports for
>> /local/backup/home9/.zfs/snapshot/20131210: Invalid argument
>> Dec 10 12:32:28 jungle mountd[49579]: can't delete exports for
>> /local/backup/home9/.zfs/snapshot/20131207: Invalid argument
>>
>> It's not clear to me why this doesn't just "work".
>>
>> Can anyone provide any advice on debugging this?
>>
> As I think you already know, I know nothing about ZFS and never
> use it.

Yup! :)

> Having said that, I suspect that there are filenos (i-node #s)
> that are the same in the snapshot as in the parent file system tree.
>
> The basic assumptions are:
> - within a file system, all i-node #s are unique (represent one file
>   object only) and all file objects have the same fsid
> - when the fsid changes, that indicates a file system boundary, and
>   filenos (i-node #s) can be reused in the subtree with a different
>   fsid
>
> For NFSv3, the server should export single volumes only (all objects
> have the same fsid and the filenos are unique). This is indicated to
> the VFS by the use of the NOCROSSMOUNT flag on VOP_LOOKUP() and
> friends.
>
> For NFSv4, the server does export multiple volumes, and the boundary
> is indicated by a change in fsid value.
>
> I suspect ZFS snapshots don't obey the above in some way, but that is
> just a hunch.
>
> Now, how to narrow this down...
> - Do the above tests (both NFSv4 and NFSv3) and capture the packets,
>   then look at them in wireshark.
>   In particular, look at the fileid numbers and fsid values for the
>   various directories under .zfs.

I gave this a shot, but I haven't used wireshark to capture NFS
traffic before, so if I need to provide additional details, let me
know.

NFSv4:

For /mnt/.zfs/snapshot/20131203: fileid=4, fsid4.major=1446349656, fsid4.minor=222
For /mnt/.zfs/snapshot/20131205: fileid=4, fsid4.major=1845998066, fsid4.minor=222
For /mnt/jas:  fileid=144, fsid4.major=597946950, fsid4.minor=222
For /mnt/jas1: fileid=338, fsid4.major=597946950, fsid4.minor=222

So the fsid is the same for all the different "data" directories,
which is what I would expect given what you said. I guess each
snapshot is seen as a unique filesystem... but then a repeated inode
number in different filesystems shouldn't be a problem...

NFSv3:

For /mnt/.zfs/snapshot/20131203: fileid=4, fsid=0x0000000056358b58
For /mnt/.zfs/snapshot/20131205: fileid=4, fsid=0x000000006e07b1f2
For /mnt/jas:  fileid=144, fsid=0x0000000023a3f246
For /mnt/jas1: fileid=338, fsid=0x0000000023a3f246

Here it behaves the same, even though it's NFSv3... hmm.

> - Try mounting the individual snapshot directory, like
>   .zfs/snapshot/20131209, and see if that works (for both NFSv3 and
>   NFSv4).

Hmm... I tried this:

/local/backup/home9/.zfs/snapshot/20131203 -ro archive-mrpriv.cs.yorku.ca
V4: /

... but syslog reports:

Dec 10 22:28:22 jungle mountd[85405]: can't export
/local/backup/home9/.zfs/snapshot/20131203

... and of course I can't mount it from either v3 or v4.

On the other hand, when I kept it as:

/local/backup/home9 -ro archive-mrpriv.cs.yorku.ca
V4: /

... I was able to NFSv4 mount
/local/backup/home9/.zfs/snapshot/20131203, and this does indeed work.

> - Try doing the mounts with a FreeBSD client and see if you get the
>   same behaviour?

I found this:
http://forums.freenas.org/threads/mounting-snapshot-directory-using-nfs-from-linux-broken.6060/
It implies it will work from FreeBSD/Nexenta, just not Linux.

Found this as well:
https://groups.google.com/a/zfsonlinux.org/forum/#!topic/zfs-discuss/lKyfYsjPMNM

Jason.
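P.S. In case anyone else wants to repeat this, the capture itself is
the easy part. A minimal sketch, assuming the server's NIC is em0
(adjust the interface name; NFS traffic is on port 2049):

  tcpdump -i em0 -s 0 -w /tmp/nfs.pcap port 2049

then open /tmp/nfs.pcap in wireshark, which dissects the NFS
attributes directly.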
From owner-freebsd-fs@FreeBSD.ORG Wed Dec 11 07:53:00 2013
Date: Wed, 11 Dec 2013 09:30:20 +0200
From: Alexandr
To: freebsd-fs@freebsd.org
Subject: Re: 10.0-BETA4 (upgraded from 9.2-RELEASE) zpool upgrade -> boot failure
Message-ID: <52A8148C.1050407@shurik.kiev.ua>
In-Reply-To: <52A6EB67.3000103@peterschmitt.fr>

On 10.12.2013 12:22, Florent Peterschmitt wrote:
> Hi,
>
> Is there anything known about ZFS under 10.0-BETA4 when FreeBSD was
> upgraded from 9.2-RELEASE?
>
> I have two servers with very different hardware (one has soft RAID
> and the other does not), and after a zpool upgrade there is no way
> to get either server booting.
>
> Did I miss something when upgrading?
>
> I cannot get the error message for the moment. I reinstalled the RAID
> server under Linux, and the other one is waiting for a vKVM…

Did you rerun gpart bootcode after the zpool upgrade?
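That is, something like this (a sketch -- assuming a GPT disk ada0
with the freebsd-boot partition at index 1, as in typical root-on-ZFS
layouts):

  gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0

The gptzfsboot written there has to be new enough to understand the
feature flags that zpool upgrade enables on the pool.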
From owner-freebsd-fs@FreeBSD.ORG Wed Dec 11 08:27:04 2013
Date: Wed, 11 Dec 2013 09:26:43 +0100
From: Florent Peterschmitt
To: Artem Belevich
Cc: freebsd-fs, freebsd-stable
Subject: Re: 10.0-BETA4 (upgraded from 9.2-RELEASE) zpool upgrade -> boot failure
Message-ID: <52A821C3.2050800@peterschmitt.fr>

On 11/12/2013 02:07, Artem Belevich wrote:
> It may help if you could provide details on how exactly "no way to
> get the system booting" manifests itself.
> A serial console capture of a failed boot would be a good start.
>
> Did it get to the loader?
> Did the loader manage to read loader.conf and load the kernel and modules?
> Did the kernel start?
> Did the kernel manage to mount the root FS?

No, there is no module loading; it doesn't even get as far as loading
the kernel. I got an error like "no fs recognized", and then it sits
at the minimal interactive boot loader (whose name is "loader",
correct?). The only thing I can do is reboot the machine.

From the rescue system (9.2-RELEASE) I can import the zpool without
problems, then use the gpart from inside the 10.0-BETA4 install (the
remaining problematic server runs BETA4) and rewrite the bootcode. No
change.

This week-end I may be able to get a vKVM. It's a low-cost server, and
there is no other way to repair the system than the rescue image.

Also, could it be a problem with the BIOS -- too old to boot from a
GPT-"formatted" disk? (Sorry, I don't know much about disks and
BIOSes.) That would only be possible if some changes were made to the
bootcode. But maybe the problem isn't there; it could be the zpool
itself.

Also, like I said on -stable, I *cannot* reproduce the problem in a VM:

* Installing 9.2-RELEASE from scratch, like I did for the server.
* Binary upgrade to 10.0-BETA4 (install kernel, reboot, install
  userland, rebuild ports, install the last parts of userland)
* zpool upgrade tank
* reboot
* It's all fine.

--
Florent Peterschmitt

From owner-freebsd-fs@FreeBSD.ORG Wed Dec 11 08:36:05 2013
Date: Wed, 11 Dec 2013 09:35:43 +0100
From: Florent Peterschmitt
To: Artem Belevich
Cc: freebsd-fs, freebsd-stable
Subject: Re: 10.0-BETA4 (upgraded from 9.2-RELEASE) zpool upgrade -> boot failure
Message-ID: <52A823DF.6040604@peterschmitt.fr>
In-Reply-To: <52A821C3.2050800@peterschmitt.fr>

On 11/12/2013 09:26, Florent Peterschmitt wrote:
> [...]
> Also, like I said on -stable, I *cannot* reproduce the problem in a VM:
>
> * Installing 9.2-RELEASE from scratch, like I did for the server.
> * Binary upgrade to 10.0-BETA4 (install kernel, reboot, install
>   userland, rebuild ports, install the last parts of userland)
> * zpool upgrade tank

I forgot one step in the list above -- only in the mail, not on the
server: the gpart bootcode step, which I did perform.

> * reboot
> * It's all fine.

--
Florent Peterschmitt
From owner-freebsd-fs@FreeBSD.ORG Wed Dec 11 08:51:41 2013
Date: Wed, 11 Dec 2013 08:51:32 -0000
From: "Steven Hartland"
To: "Florent Peterschmitt", "Artem Belevich"
Cc: freebsd-fs, freebsd-stable
Subject: Re: 10.0-BETA4 (upgraded from 9.2-RELEASE) zpool upgrade -> boot failure
Message-ID: <5F8EE883905C41189A8D1EC7C387086A@multiplay.co.uk>

----- Original Message ----- From: "Florent Peterschmitt"

Before the main kernel load, did you see it load the zfs module?

If not, try a "load zfs" from the loader prompt.

If this fixes it, you could be missing zfs_load="YES" from your
/boot/loader.conf, and/or you installed the wrong boot loader.
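A rough sketch of that sequence at the loader prompt, plus what to
check afterwards (the tank/root dataset name is an assumption, taken
from the install script posted later in this thread):

  OK load zfs
  OK boot

and, if that gets the system up, make sure /boot/loader.conf contains
something like:

  zfs_load="YES"
  vfs.root.mountfrom="zfs:tank/root"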
Regards
Steve

From owner-freebsd-fs@FreeBSD.ORG Wed Dec 11 21:11:15 2013
Date: Wed, 11 Dec 2013 22:11:13 +0000
From: Florent Peterschmitt
To: Steven Hartland, Artem Belevich
Cc: freebsd-fs, freebsd-stable
Subject: Re: 10.0-BETA4 (upgraded from 9.2-RELEASE) zpool upgrade -> boot failure
Message-ID: <52A8E301.7020603@peterschmitt.fr>
In-Reply-To: <5F8EE883905C41189A8D1EC7C387086A@multiplay.co.uk>

On 11/12/2013 08:51, Steven Hartland wrote:
> Before the main kernel load, did you see it load the zfs module?
>
> If not, try a "load zfs" from the loader prompt.
>
> If this fixes it, you could be missing zfs_load="YES" from your
> /boot/loader.conf, and/or you installed the wrong boot loader.

Nope, no module loading at all. But I didn't forget zfs_load="YES" in
loader.conf, I'm sure about that :)

Well, it seems I will not be able to get a vKVM for the server… so
perhaps it is a mistake on my side, or something else. I'm trying many
combinations in VMs, with no (bad) results.

If something changes I'll start another thread; I doubt we can do much
more with this scant information.

--
Florent Peterschmitt
From owner-freebsd-fs@FreeBSD.ORG Wed Dec 11 23:21:57 2013
Date: Wed, 11 Dec 2013 18:21:55 -0500 (EST)
From: Rick Macklem
To: Jason Keltz
Cc: FreeBSD Filesystems, Steve Dickson
Subject: Re: mount ZFS snapshot on Linux system
Message-ID: <116973401.29503791.1386804115064.JavaMail.root@uoguelph.ca>
In-Reply-To: <52A7E53D.8000002@cse.yorku.ca>

Jason Keltz wrote:
> On 10/12/2013 7:21 PM, Rick Macklem wrote:
> [... original problem report and the fsid/fileno discussion trimmed;
> see the messages above ...]
>
> I gave this a shot, but I haven't used wireshark to capture NFS
> traffic before, so if I need to provide additional details, let me
> know.
>
> NFSv4:
>
> For /mnt/.zfs/snapshot/20131203: fileid=4, fsid4.major=1446349656, fsid4.minor=222
> For /mnt/.zfs/snapshot/20131205: fileid=4, fsid4.major=1845998066, fsid4.minor=222
> For /mnt/jas:  fileid=144, fsid4.major=597946950, fsid4.minor=222
> For /mnt/jas1: fileid=338, fsid4.major=597946950, fsid4.minor=222
>
> So the fsid is the same for all the different "data" directories,
> which is what I would expect given what you said. I guess each
> snapshot is seen as a unique filesystem... but then a repeated inode
> number in different filesystems shouldn't be a problem...
>
Yes, it appears that each snapshot is represented as a different file
system. As such, NFSv4 should work for these, but there is an
additional property of the "root" of each of these (20131203, ...).

When the directory .zfs/snapshot is read, the fileno for 20131203
should be different than the fileno returned by VOP_GETATTR()/stat()
for "20131203". (The old "mounted-on" vs "root-of-mounted-fs" vnodes
which you get for a "mount point".) For NFSv4, the server returns the
fileno in the VOP_READDIR() dirent as a separate attribute called
mounted_on_fileid vs the value returned by VOP_GETATTR() as the fileid
attribute. If the value of these 2 attributes is the same, it is not a
"mount point". So, maybe you could take another look at the packet
capture in wireshark and see what the fileid and mounted_on_fileid
attributes are?

> NFSv3:
>
> For /mnt/.zfs/snapshot/20131203: fileid=4, fsid=0x0000000056358b58
> For /mnt/.zfs/snapshot/20131205: fileid=4, fsid=0x000000006e07b1f2
> For /mnt/jas:  fileid=144, fsid=0x0000000023a3f246
> For /mnt/jas1: fileid=338, fsid=0x0000000023a3f246
>
> Here it behaves the same, even though it's NFSv3... hmm.
>
>> - Try mounting the individual snapshot directory, like
>>   .zfs/snapshot/20131209, and see if that works (for both NFSv3 and
>>   NFSv4).
>
> Hmm... I tried this:
>
> /local/backup/home9/.zfs/snapshot/20131203 -ro archive-mrpriv.cs.yorku.ca
> V4: /
>
> ... but syslog reports:
>
> Dec 10 22:28:22 jungle mountd[85405]: can't export
> /local/backup/home9/.zfs/snapshot/20131203
>
mountd will do a VFS_CHECKEXP(), which seems to fail for these (which
also explains the error messages). To be honest, with these failing,
remote access should fail.

Also, since NFSv3 exported volumes should not cross "mount points"
(anywhere the fsid changes), all a mount above .zfs/snapshot/20131203
should get is a bunch of empty directories called 20131203, ...

For example, in the UFS world, with separate file systems /sub1 and
/sub1/sub2, both exported:
- an NFSv3 mount of /sub1 on /mnt would see an empty directory "sub2"
  when looking in /mnt. (Actually it isn't necessarily empty. It might
  have whatever is in the directory when /sub1/sub2 is not mounted.)
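(To make that concrete, the /etc/exports for this hypothetical -- the
client name is invented -- would be simply:

  /sub1      -ro client.example.com
  /sub1/sub2 -ro client.example.com

one line per file system, since an NFSv3 export never crosses the fsid
boundary at /sub1/sub2.)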
This seems pretty obviously broken for ZFS, but I think it needs to be
fixed in ZFS, and I have no idea how to do that, since I don't know if
snapshots are real mount points, etc.

> ... and of course I can't mount it from either v3 or v4.
>
> On the other hand, when I kept it as:
>
> /local/backup/home9 -ro archive-mrpriv.cs.yorku.ca
> V4: /
>
> ... I was able to NFSv4 mount
> /local/backup/home9/.zfs/snapshot/20131203, and this does indeed
> work.
>
Yes, although technically it should not work unless 20131203 is
exported. However, it is probably the easiest workaround until this is
fixed someday. So, just to make sure I am clear on this... an NFSv4
mount of the snapshot works OK, even for a Linux client mount.

>> - Try doing the mounts with a FreeBSD client and see if you get the
>>   same behaviour?
>
> I found this:
> http://forums.freenas.org/threads/mounting-snapshot-directory-using-nfs-from-linux-broken.6060/
> It implies it will work from FreeBSD/Nexenta, just not Linux.
>
I suspect this might be the mounted_on_fileid vs fileid issue. (i.e.,
the Linux client needs this to be done correctly, but the other
clients figure it out.) One case that might break for FreeBSD would be
to cd into a snapshot and then do a pwd with the debug.disablecwd
sysctl set to 1.

Hopefully the ZFS wizards are reading this, rick

> Found this as well:
> https://groups.google.com/a/zfsonlinux.org/forum/#!topic/zfs-discuss/lKyfYsjPMNM
>
> Jason.

From owner-freebsd-fs@FreeBSD.ORG Thu Dec 12 12:21:43 2013
Date: Thu, 12 Dec 2013 13:21:25 +0100
From: Florent Peterschmitt
To: Andriy Gapon
Cc: freebsd-fs, freebsd-stable
Subject: Re: 10.0-BETA4 (upgraded from 9.2-RELEASE) zpool upgrade -> boot failure
Message-ID: <52A9AA45.2000907@peterschmitt.fr>
In-Reply-To: <52A99917.2050200@FreeBSD.org>

On 12/12/2013 12:08, Andriy Gapon wrote:
> on 10/12/2013 12:22 Florent Peterschmitt said the following:
>> Hi,
>>
>> Is there anything known about ZFS under 10.0-BETA4 when FreeBSD was
>> upgraded from 9.2-RELEASE?
>>
>> I have two servers with very different hardware (one has soft RAID
>> and the other does not), and after a zpool upgrade there is no way
>> to get either server booting.
>>
>> Did I miss something when upgrading?
>>
>> I cannot get the error message for the moment. I reinstalled the
>> RAID server under Linux, and the other one is waiting for a vKVM…
>
> Apologies if I missed it, but a few words about your pool
> configuration and the hardware that it uses would not hurt.

Yes, sorry. You can find here the script I use to install:

https://github.com/Leryan/freebsd-zfs-install/blob/master/zfs.sh

--
Florent Peterschmitt

From owner-freebsd-fs@FreeBSD.ORG Thu Dec 12 12:30:11 2013
Date: Thu, 12 Dec 2013 14:28:31 +0200
From: Andriy Gapon
To: Florent Peterschmitt
Cc: freebsd-fs, freebsd-stable
Subject: Re: 10.0-BETA4 (upgraded from 9.2-RELEASE) zpool upgrade -> boot failure
Message-ID: <52A9ABEF.8080509@FreeBSD.org>
In-Reply-To: <52A9AA45.2000907@peterschmitt.fr>
on 12/12/2013 14:21 Florent Peterschmitt said the following:
> Yes, sorry. You can find here the script I use to install:
>
> https://github.com/Leryan/freebsd-zfs-install/blob/master/zfs.sh

Is there a more readable way to describe the configuration and the
_hardware_ than this script?

--
Andriy Gapon

From owner-freebsd-fs@FreeBSD.ORG Thu Dec 12 12:35:58 2013
Date: Thu, 12 Dec 2013 13:35:40 +0100
From: Florent Peterschmitt
To: Andriy Gapon
Cc: freebsd-fs, freebsd-stable
Subject: Re: 10.0-BETA4 (upgraded from 9.2-RELEASE) zpool upgrade -> boot failure
Message-ID: <52A9AD9C.2090200@peterschmitt.fr>
In-Reply-To: <52A9ABEF.8080509@FreeBSD.org>

On 12/12/2013 13:28, Andriy Gapon wrote:
> Is there a more readable way to describe the configuration and the
> _hardware_ than this script?
That's all I can give at the moment; I'll send all the information
about the zpool and the zfs datasets later.

As for the hardware, I don't see how it could make the zpool stop
working. I was running BETA4 without hardware problems; it was only
after the zpool upgrade that the system became unreachable. And I have
_no_ way to watch the system booting, since the hoster is in trouble…

Anyway, I'll also send a dmesg at the same time.

--
Florent Peterschmitt

From owner-freebsd-fs@FreeBSD.ORG Thu Dec 12 16:11:16 2013
Date: Thu, 12 Dec 2013 15:40:25 GMT
From: Martin Simmons
To: Florent Peterschmitt
Cc: freebsd-fs@freebsd.org, freebsd-stable@freebsd.org, avg@freebsd.org
Subject: Re: 10.0-BETA4 (upgraded from 9.2-RELEASE) zpool upgrade -> boot failure
Message-ID: <201312121540.rBCFePGB013820@higson.cam.lispworks.com>
In-Reply-To: <52A9AD9C.2090200@peterschmitt.fr>
>>>>> On Thu, 12 Dec 2013 13:35:40 +0100, Florent Peterschmitt said:
> That's all I can give at the moment; I'll send all the information
> about the zpool and the zfs datasets later.
> [...]
> And I have _no_ way to watch the system booting, since the hoster is
> in trouble…

Did you rerun the gpart bootcode command after installing FreeBSD 10?
If not, maybe the 9.2 bootcode can't handle the upgraded pool?

If you did rerun it, check that /boot/gptzfsboot doesn't exceed the
size of the partition (your zfs.sh uses -s 128 = 64k).
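A quick sanity check for that (a sketch; ada0 is an assumption based
on the script):

  ls -l /boot/gptzfsboot
  gpart show ada0

and compare the file size against the size of the freebsd-boot entry
in the gpart output.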
__Martin

From owner-freebsd-fs@FreeBSD.ORG Thu Dec 12 18:13:31 2013
Date: Thu, 12 Dec 2013 10:13:30 -0800
From: Artem Belevich
To: Andriy Gapon
Cc: freebsd-fs, freebsd-stable
Subject: Re: 10.0-BETA4 (upgraded from 9.2-RELEASE) zpool upgrade -> boot failure
In-Reply-To: <52A9ABEF.8080509@FreeBSD.org>

On Thu, Dec 12, 2013 at 4:28 AM, Andriy Gapon wrote:
>> Yes, sorry. You can find here the script I use to install:
>>
>> https://github.com/Leryan/freebsd-zfs-install/blob/master/zfs.sh

From zfs.sh:

> # Zero ZFS sectors
> dd if=/dev/zero of=/dev/${ada}p3 count=560 bs=512

That destroys only half of the ZFS uberblocks. The other half is
placed at the end of the vdev slices in the pool. There have been a
few reports recently that such 'orphan' ZFS uberblocks were messing up
boot on recent FreeBSD versions.

Considering the lack of details on how exactly your boot fails, this
may or may not be the issue in your case.

Do "zdb -l /dev/ada0" (and on all the other slices of ada0) and check
whether it reports anything unexpected.
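For example (a sketch -- the partition name assumes the GPT layout
created by zfs.sh):

  zdb -l /dev/ada0      # raw disk; valid labels here would be stale leftovers
  zdb -l /dev/ada0p3    # the zfs partition; should show labels 0-3

On the partition that actually holds the pool, all four labels should
unpack fine. "failed to unpack label" on the raw disk is normal when
the pool lives on a partition; it's valid labels in the wrong place
that would be suspicious.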
--Artem

From owner-freebsd-fs@FreeBSD.ORG Thu Dec 12 20:13:18 2013
Date: Thu, 12 Dec 2013 21:12:58 +0000
From: Florent Peterschmitt
To: Artem Belevich, Andriy Gapon
Cc: freebsd-fs, freebsd-stable
Subject: Re: 10.0-BETA4 (upgraded from 9.2-RELEASE) zpool upgrade -> boot failure
Message-ID: <52AA26DA.30809@peterschmitt.fr>

On 12/12/2013 18:13, Artem Belevich wrote:
> From zfs.sh:
>> # Zero ZFS sectors
>> dd if=/dev/zero of=/dev/${ada}p3 count=560 bs=512
>
> [...]
>
> Do "zdb -l /dev/ada0" (and on all the other slices of ada0) and check
> whether it reports anything unexpected.

rescue-bsd# zdb -l /dev/ada0
--------------------------------------------
LABEL 0
--------------------------------------------
failed to unpack label 0
--------------------------------------------
LABEL 1
--------------------------------------------
failed to unpack label 1
--------------------------------------------
LABEL 2
--------------------------------------------
failed to unpack label 2
--------------------------------------------
LABEL 3
--------------------------------------------
failed to unpack label 3

Well… this sounds bad, right?

I attached the dmesg of the last successful boot.
Also here is the zfs list:

NAME                 USED  AVAIL  REFER  MOUNTPOINT
tank                 246G   203G    31K  none
tank/root            246G   203G   697M  /mnt
tank/root/tmp        830K   203G   830K  /mnt/tmp
tank/root/usr        244G   203G   278M  /mnt/usr
tank/root/usr/local  244G   203G   244G  /mnt/usr/local
tank/root/var       1.39G   203G  1.39G  /mnt/var

There are no snapshots at the moment, though I have used snapshots at
times.

rescue-bsd# zpool status
  pool: tank
 state: ONLINE
  scan: none requested
config:

        NAME            STATE     READ WRITE CKSUM
        tank            ONLINE       0     0     0
          gpt/zfs-root  ONLINE       0     0     0

errors: No known data errors

The output of "zpool get all" is attached as well. Currently the system
is booted into a 9.2-RELEASE amd64 rescue environment; that is the best
I can do.

--
Florent Peterschmitt          | Please:
florent@peterschmitt.fr       | * Avoid HTML/RTF in E-mail.
+33 (0)6 64 33 97 92          | * Send PDF for documents.
http://florent.peterschmitt.fr | * Trim your quotations. Really.
Proudly powered by Open Source | Thank you :)

[attachment: dmesg-beta3.txt]

Copyright (c) 1992-2013 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
        The Regents of the University of California. All rights reserved.
FreeBSD is a registered trademark of The FreeBSD Foundation.
FreeBSD 10.0-BETA3 #0 r257580: Sun Nov  3 19:43:01 UTC 2013
    root@snap.freebsd.org:/usr/obj/usr/src/sys/GENERIC amd64
FreeBSD clang version 3.3 (tags/RELEASE_33/final 183502) 20130610
module_register: module pci/em already exists!
Module pci/em failed to register: 17
module_register: module pci/lem already exists!
Module pci/lem failed to register: 17
CPU: Intel(R) Atom(TM) CPU N2800 @ 1.86GHz (1866.71-MHz K8-class CPU)
  Origin = "GenuineIntel"  Id = 0x30661  Family = 0x6  Model = 0x36  Stepping = 1
  Features=0xbfebfbff
  Features2=0x40e39d
  AMD Features=0x20100800
  AMD Features2=0x1
TSC: P-state invariant, performance statistics
real memory  = 2147483648 (2048 MB)
avail memory = 2030194688 (1936 MB)
Event timer "LAPIC" quality 600
ACPI APIC Table:
FreeBSD/SMP: Multiprocessor System Detected: 4 CPUs
FreeBSD/SMP: 1 package(s) x 2 core(s) x 2 HTT threads
 cpu0 (BSP): APIC ID:  0
 cpu1 (AP/HT): APIC ID:  1
 cpu2 (AP): APIC ID:  2
 cpu3 (AP/HT): APIC ID:  3
ioapic0: Changing APIC ID to 8
ioapic0 irqs 0-23 on motherboard
lapic0: Forcing LINT1 to edge trigger
random: initialized
kbd1 at kbdmux0
acpi0: on motherboard
acpi0: Power Button (fixed)
Timecounter "HPET" frequency 14318180 Hz quality 950
Event timer "HPET" frequency 14318180 Hz quality 450
Event timer "HPET1" frequency 14318180 Hz quality 440
Event timer "HPET2" frequency 14318180 Hz quality 440
acpi0: reservation of 0, 4000 (3) failed
cpu0: on acpi0
cpu1: on acpi0
cpu2: on acpi0
cpu3: on acpi0
atrtc0: port 0x70-0x77 irq 8 on acpi0
atrtc0: Warning: Couldn't map I/O.
Event timer "RTC" frequency 32768 Hz quality 0
attimer0: port 0x40-0x43,0x50-0x53 irq 0 on acpi0
Timecounter "i8254" frequency 1193182 Hz quality 0
Event timer "i8254" frequency 1193182 Hz quality 100
Timecounter "ACPI-safe" frequency 3579545 Hz quality 850
acpi_timer0: <24-bit timer at 3.579545MHz> port 0x408-0x40b on acpi0
pcib0: port 0xcf8-0xcff on acpi0
pci0: on pcib0
vgapci0: port 0x30d0-0x30d7 mem 0x80500000-0x805fffff irq 16 at device 2.0 on pci0
pcib1: at device 28.0 on pci0
pci1: on pcib1
em0: port 0x2000-0x201f mem 0x80400000-0x8041ffff,0x80000000-0x803fffff,0x80420000-0x80423fff irq 16 at device 0.0 on pci1
em0: Using MSIX interrupts with 3 vectors
em0: Ethernet address: 00:22:4d:aa:77:45
uhci0: port 0x30a0-0x30bf irq 23 at device 29.0 on pci0
uhci0: LegSup = 0x2f00
usbus0 on uhci0
uhci1: port 0x3080-0x309f irq 19 at device 29.1 on pci0
uhci1: LegSup = 0x2f00
usbus1 on uhci1
uhci2: port 0x3060-0x307f irq 18 at device 29.2 on pci0
uhci2: LegSup = 0x2f00
usbus2 on uhci2
uhci3: port 0x3040-0x305f irq 16 at device 29.3 on pci0
uhci3: LegSup = 0x2f00
usbus3 on uhci3
ehci0: mem 0x80600400-0x806007ff irq 23 at device 29.7 on pci0
usbus4: EHCI version 1.0
usbus4 on ehci0
pcib2: at device 30.0 on pci0
pci2: on pcib2
isab0: at device 31.0 on pci0
isa0: on isab0
ahci0: port 0x30c8-0x30cf,0x30dc-0x30df,0x30c0-0x30c7,0x30d8-0x30db,0x3020-0x302f mem 0x80600000-0x806003ff irq 19 at device 31.2 on pci0
ahci0: AHCI v1.10 with 4 3Gbps ports, Port Multiplier not supported
ahcich0: at channel 0 on ahci0
ahcich1: at channel 1 on ahci0
pci0: at device 31.3 (no driver attached)
acpi_button0: on acpi0
acpi_button1: on acpi0
ppc0: port 0x378-0x37f irq 7 on acpi0
ppc0: Generic chipset (NIBBLE-only) in COMPATIBLE mode
ppbus0: on ppc0
lpt0: on ppbus0
lpt0: Interrupt-driven port
ppi0: on ppbus0
uart1: <16550 or compatible> port 0x2f8-0x2ff irq 3 on acpi0
uart0: <16550 or compatible> port 0x3f8-0x3ff irq 4 flags 0x10 on acpi0
orm0: at iomem 0xcf800-0xd07ff,0xd0800-0xd17ff on isa0
sc0: at flags 0x100 on isa0
sc0: VGA <16 virtual consoles, flags=0x300>
vga0: at port 0x3c0-0x3df iomem 0xa0000-0xbffff on isa0
atkbdc0: at port 0x60,0x64 on isa0
atkbd0: irq 1 on atkbdc0
kbd0 at atkbd0
atkbd0: [GIANT-LOCKED]
est0: on cpu0
p4tcc0: on cpu0
est1: on cpu1
p4tcc1: on cpu1
est2: on cpu2
p4tcc2: on cpu2
est3: on cpu3
p4tcc3: on cpu3
fuse-freebsd: version 0.4.4, FUSE ABI 7.8
ZFS NOTICE: Prefetch is disabled by default if less than 4GB of RAM is present;
            to enable, add "vfs.zfs.prefetch_disable=0" to /boot/loader.conf.
ZFS filesystem version: 5
ZFS storage pool version: features support (5000)
Timecounters tick every 1.000 msec
random: unblocking device.
usbus0: 12Mbps Full Speed USB v1.0
usbus1: 12Mbps Full Speed USB v1.0
usbus2: 12Mbps Full Speed USB v1.0
usbus3: 12Mbps Full Speed USB v1.0
usbus4: 480Mbps High Speed USB v2.0
ugen2.1: at usbus2
uhub0: on usbus2
ugen1.1: at usbus1
uhub1: on usbus1
ugen0.1: at usbus0
uhub2: on usbus0
ugen4.1: at usbus4
uhub3: on usbus4
ugen3.1: at usbus3
uhub4: on usbus3
ada0 at ahcich0 bus 0 scbus0 target 0 lun 0
ada0: ATA-8 SATA 3.x device
ada0: Serial Number 23EJTDVPS
ada0: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada0: Command Queueing enabled
ada0: 476940MB (976773168 512 byte sectors: 16H 63S/T 16383C)
ada0: Previously was known as ad4
Netvsc initializing...
lapic2: Forcing LINT1 to edge trigger
SMP: AP CPU #2 Launched!
lapic1: Forcing LINT1 to edge trigger
SMP: AP CPU #1 Launched!
lapic3: Forcing LINT1 to edge trigger
SMP: AP CPU #3 Launched!
Timecounter "TSC" frequency 1866712127 Hz quality 1000
Root mount waiting for: usbus4 usbus3 usbus2 usbus1 usbus0
uhub1: 2 ports with 2 removable, self powered
uhub0: 2 ports with 2 removable, self powered
uhub4: 2 ports with 2 removable, self powered
uhub2: 2 ports with 2 removable, self powered
Root mount waiting for: usbus4
Root mount waiting for: usbus4
Root mount waiting for: usbus4
uhub3: 8 ports with 8 removable, self powered
ugen4.2: at usbus4
uhub5: on usbus4
Root mount waiting for: usbus4
uhub5: 4 ports with 4 removable, self powered
Trying to mount root from zfs:tank/root []...

[attachment: zpool-get-all.txt]

rescue-bsd# zpool get all
NAME  PROPERTY                                      VALUE                 SOURCE
tank  size                                          456G                  -
tank  capacity                                      53%                   -
tank  altroot                                       /mnt                  local
tank  health                                        ONLINE                -
tank  guid                                          14109252772653171024  default
tank  version                                       -                     default
tank  bootfs                                        tank/root             local
tank  delegation                                    on                    default
tank  autoreplace                                   off                   default
tank  cachefile                                     /tmp/zpool.cache      local
tank  failmode                                      wait                  default
tank  listsnapshots                                 off                   default
tank  autoexpand                                    off                   default
tank  dedupditto                                    0                     default
tank  dedupratio                                    1.00x                 -
tank  free                                          210G                  -
tank  allocated                                     246G                  -
tank  readonly                                      off                   -
tank  comment                                       -                     default
tank  expandsize                                    0                     -
tank  freeing                                       0                     default
tank  feature@async_destroy                         enabled               local
tank  feature@empty_bpobj                           enabled               local
tank  feature@lz4_compress                          enabled               local
tank  unsupported@com.joyent:multi_vdev_crash_dump  inactive              local

From owner-freebsd-fs@FreeBSD.ORG Thu Dec 12 20:15:42 2013
Date: Thu, 12 Dec 2013 15:15:33 -0500
From: Jason Keltz
To: Rick Macklem
Cc: FreeBSD Filesystems, Steve Dickson
Subject: Re: mount ZFS snapshot on Linux system
Message-ID: <52AA1965.9080709@cse.yorku.ca>
In-Reply-To: <116973401.29503791.1386804115064.JavaMail.root@uoguelph.ca>

On 12/11/2013 06:21 PM, Rick Macklem wrote:
> Jason Keltz wrote:
>> On 10/12/2013 7:21 PM, Rick Macklem wrote:
>>> Jason Keltz wrote:
>>>> I'm running FreeBSD 9.2 with various ZFS datasets.
>>>> I export a dataset to a Linux system (RHEL64), and mount it. It >>>> works >>>> fine... >>>> When I try to access the ZFS snapshot directory on the Linux NFS >>>> client, >>>> things go weird. >>>> >>>> With NFSv4: >>>> >>>> [jas@archive /]# cd /mnt/.zfs/snapshot >>>> [jas@archive snapshot]# ls >>>> 20131203 20131205 20131206 20131207 20131208 20131209 >>>> 20131210 >>>> [jas@archive snapshot]# cd 20131210 >>>> 20131210: Not a directory. >>>> >>>> huh? >>>> >>>> [jas@archive snapshot]# ls -al >>>> total 77 >>>> dr-xr-xr-x 9 root root 9 Dec 10 11:20 . >>>> dr-xr-xr-x 4 root root 4 Nov 28 15:42 .. >>>> drwxr-xr-x 380 root root 380 Dec 2 15:56 20131203 >>>> drwxr-xr-x 381 root root 381 Dec 3 11:24 20131205 >>>> drwxr-xr-x 381 root root 381 Dec 3 11:24 20131206 >>>> drwxr-xr-x 381 root root 381 Dec 3 11:24 20131207 >>>> drwxr-xr-x 381 root root 381 Dec 3 11:24 20131208 >>>> drwxr-xr-x 381 root root 381 Dec 3 11:24 20131209 >>>> drwxr-xr-x 381 root root 381 Dec 3 11:24 20131210 >>>> [jas@archive snapshot]# stat * >>>> [jas@archive snapshot]# ls -al >>>> total 292 >>>> dr-xr-xr-x 9 root root 9 Dec 10 11:20 . >>>> dr-xr-xr-x 4 root root 4 Nov 28 15:42 .. >>>> -rw-r--r-- 1 uax guest 137647 Mar 17 2010 20131203 >>>> -rw-r--r-- 1 uax guest 865 Jul 31 2009 20131205 >>>> -rw-r--r-- 1 uax guest 137647 Mar 17 2010 20131206 >>>> -rw-r--r-- 1 uax guest 771 Jul 31 2009 20131207 >>>> -rw-r--r-- 1 uax guest 778 Jul 31 2009 20131208 >>>> -rw-r--r-- 1 uax guest 5281 Jul 31 2009 20131209 >>>> -rw------- 1 btx faculty 893 Jul 13 20:21 20131210 >>>> >>>> But it gets even more fun.. >>>> >>>> # ls -ali >>>> total 205 >>>> 2 dr-xr-xr-x 9 root root 9 Dec 10 11:20 . >>>> 1 dr-xr-xr-x 4 root root 4 Nov 28 15:42 .. >>>> 863 -rw-r--r-- 1 uax guest 137647 Mar 17 2010 20131203 >>>> 4 drwxr-xr-x 381 root root 381 Dec 3 11:24 20131205 >>>> 4 drwxr-xr-x 381 root root 381 Dec 3 11:24 20131206 >>>> 4 drwxr-xr-x 381 root root 381 Dec 3 11:24 20131207 >>>> 4 drwxr-xr-x 381 root root 381 Dec 3 11:24 20131208 >>>> 4 drwxr-xr-x 381 root root 381 Dec 3 11:24 20131209 >>>> 4 drwxr-xr-x 381 root root 381 Dec 3 11:24 20131210 >>>> >>>> This is not a user id mapping issue because all the files in /mnt >>>> have >>>> the proper owner/groups, and I can access them there fine. >>>> >>>> I also tried explicitly exporting .zfs/snapshot. The result isn't >>>> any >>>> different. >>>> >>>> If I use nfs v3 it "works", but I'm seeing a whole lot of errors >>>> like >>>> these in syslog: >>>> >>>> Dec 10 12:32:28 jungle mountd[49579]: can't delete exports for >>>> /local/backup/home9/.zfs/snapshot/20131203: Invalid argument >>>> Dec 10 12:32:28 jungle mountd[49579]: can't delete exports for >>>> /local/backup/home9/.zfs/snapshot/20131209: Invalid argument >>>> Dec 10 12:32:28 jungle mountd[49579]: can't delete exports for >>>> /local/backup/home9/.zfs/snapshot/20131210: Invalid argument >>>> Dec 10 12:32:28 jungle mountd[49579]: can't delete exports for >>>> /local/backup/home9/.zfs/snapshot/20131207: Invalid argument >>>> >>>> It's not clear to me why this doesn't just "work". >>>> >>>> Can anyone provide any advice on debugging this? >>>> >>> As I think you already know, I know nothing about ZFS and never >>> use it. >> Yup! :) >>> Having said that, I suspect that there are filenos (i-node #s) >>> that are the same in the snapshot as in the parent file system >>> tree. 
>>> >>> The basic assumptions are: >>> - within a file system, all i-node# are unique (represent one file >>> object only) and all file objects have the same fsid >>> - when the fsid changes, that indicates a file system boundary and >>> fileno (i-node#s) can be reused in the subtree with a different >>> fsid >>> >>> For NFSv3, the server should export single volumes only (all >>> objects >>> have the same fsid and the filenos are unique). This is indicated >>> to >>> the VFS by the use of the NOCROSSMOUNT flag on VOP_LOOKUP() and >>> friends. >>> >>> For NFSv4, the server does export multiple volumes and the boundary >>> is indicated by a change in fsid value. >>> >>> I suspect ZFS snaphots don't obey the above in some way, but that >>> is >>> just a hunch. >>> >>> Now, how to narrow this down... >>> - Do the above tests (both NFSv4 and NFSv3) and capture the >>> packets, >>> then look at them in wireshark. In particular, look at the >>> fileid numbers >>> and fsid values for the various directories under .zfs. >> I gave this a shot, but I haven't used wireshark to capture NFS >> traffic >> before, so if I need to provide additional details, let me know.. >> >> NFSv4: >> >> For /mnt/.zfs/snapshot/20131203: >> fileid=4 >> fsid4.major=1446349656 >> fsid4.minor=222 >> >> For /mnt/.zfs/snapshot/20131205: >> fileid=4 >> fsid4.major=1845998066 >> fsid4.minor=222 >> >> For /mnt/jas: >> fileid=144 >> fsid4.major=597946950 >> fsid4.minor=222 >> >> For /mnt/jas1: >> fileid=338 >> fsid4.major=597946950 >> fsid4.minor=222 >> >> So fsid is the same for all the different "data" directories, which >> is >> what I would expect given what you said. I guess each snapshot is >> seen >> as a unique filesystem... but then a repeating inode in different >> filesystems shouldn't be a problem... >> > Yes, it appears that each snapshot is represented as a different file > system. As such, NFSv4 should work for these, but there is an additional > property of the "root" of each of these (20131203, ...). > When the directory .zfs/snapshot is read, the fileno for 20131203 should > be different than the fileno returned by VOP_GETATTR()/stat() for "20131203". > (The old "mounted-on" vs "root-of-mounted-fs" vnodes which you get for a > "mount point".) > For NFSv4, the server returns the fileno in the VOP_READDIR() dirent as a > separate attribute called mounted_on_fileid vs the value returned by VOP_GETATTR() > as the fileid attribute. > If the value of these 2 attributes is the same, it is not a "mount point". > > So, maybe you could take another look at the packet capture in wireshark > and see what the fileid and mounted_on_fileid attributes are? Unfortunately, I didn't save the log, but it was easy enough to regenerate. But before we go there, I've spent a lot of time experimenting with this, so I can say... If I NFSv4 mount nfs-server:/local/backup/home9 to /mnt, then I: cd /mnt/.zfs/snapshot/20131203 ... it works great! I can change into any user directory, list files, etc. If I then: cd /mnt/.zfs/snapshot/20131205 .. it also works great! But... if I cd into /mnt/.zfs/snapshot, the free ride is over... all the snapshot directories appear as files and the problem is there. ... unless I unmount and remount, in which case I can repeat. I also found that a change of kernel from 2.6.32-358.14.1.el6 (the kernel I was running with RHEL6.4) to 2.6.32-431.el6 (the kernel that comes with RHEL6.5) does actually change something important.... 
If I mount nfs-server:/local/backup/home9 and try to change into
"/mnt/.zfs/snapshot" with the new kernel, I still have the problem.
Likewise, if I try to mount nfs-server:/local/backup/home9/.zfs, and
change into "/mnt/snapshot", I also have the problem.
If I mount nfs-server:/local/backup/home9/.zfs/snapshot and change into
"/mnt", I still have the older problem, but with the RH 6.4 kernel in
place. However, if I do the same mount with the newer kernel, it now
works. I can "ls" and see the snapshot directories. I can change into
any of them, then "cd .." and change into another one.
I tested this on two systems - one where I just installed the entire
6.5 upgrade, and the other where I just installed the kernel from 6.5
on the 6.4 system - so it seems related to the kernel.
It's still not clear why I can't just mount
nfs-server:/local/backup/home9 on RHEL6.5 and have the NFSv4 server
figure it out. I did try from another FreeBSD client, and I can mount
the tree at any point, and the NFS server is happy. This makes me
believe it's probably a RHEL NFSv4 bug.

Here are the numbers:

NFSv4:

So, if I try to access the snapshot path directly, on the way ...

.zfs:
V4 LOOKUP
fsid.major: 597946950
fileid: 1
fattr owner/group are root - correct

snapshot:
V4 LOOKUP
fsid.major: 597946950
fileid: 2
fattr owner/group are root - correct

If I access /.zfs/snapshot/20131203 directly...:

20131203:
V4 LOOKUP
fsid.major: 1446349656
fileid: 4
fattr owner/group are root - correct

V4 READDIR snapshot, 20131203 entry:
fsid.major: 597946950 <-- ????
fattr4_fileid: 863
fattr4_owner/group refers to a group on our system (the one displayed
in ls sometimes)..
FATTR4_MOUNTED_ON_FILEID: 0x000000000000035f

But if I ls /mnt/.zfs/snapshot:

V4 LOOKUP:
20131203:
fsid.major: 597946950
fileid: 4

V4 READDIR:
fsid4.major: 597946950
fattr4_fileid: 863
fattr4_mounted_on_fileid: 0x000000000000035f

>> NFSv3:
>>
>> For /mnt/.zfs/snapshot/20131203:
>> fileid=4
>> fsid=0x0000000056358b58
>>
>> For /mnt/.zfs/snapshot/20131205:
>> fileid=4
>> fsid=0x000000006e07b1f2
>>
>> For /mnt/jas
>> fileid=144
>> fsid=0x0000000023a3f246
>>
>> For /mnt/jas1:
>> fileid=338
>> fsid=0x0000000023a3f246
>>
>> Here, it seems it's the same, even though it's NFSv3... hmm.
>>
>>
>>> - Try mounting the individual snapshot directory, like
>>> .zfs/snapshot/20131209 and see if that works (for both NFSv3
>>> and NFSv4).
>> Hmm .. I tried this:
>>
>> /local/backup/home9/.zfs/snapshot/20131203 -ro
>> archive-mrpriv.cs.yorku.ca
>> V4: /
>>
>> ... but syslog reports:
>>
>> Dec 10 22:28:22 jungle mountd[85405]: can't export
>> /local/backup/home9/.zfs/snapshot/20131203
>>
> mountd will do a VFS_CHECKEXP(), which seems to fail for
> these (which also explains the error messages). To be honest,
> with these failing, remote access should fail.
>
> Also, since NFSv3 exported volumes should not cross
> "mount points" (anywhere the fsid changes), all a mount
> above .zfs/snapshot/20131203 should get are a bunch of
> empty directories called 20131203,...
I tried again just in case I missed something...
nfs-server:/local/backup/home9 on /mnt type nfs
(ro,vers=3,addr=172.16.2.26)
I can change into /mnt/.zfs/snapshot/20131203/jas and list the
directory, or less a file.

> For example, if in the UFS world with a separate
> file systems /sub1 and /sub1/sub2 with both exported:
> - an NFSv3 mount of /sub1 on /mnt would see an empty
> directory "sub2" when looking in /mnt. (Actually it
> isn't necessarily empty. It might have whatever is in
> the directory when /sub1/sub2 is not mounted.)
>
> This seems pretty obviously broken for ZFS, but I think
> it needs to be fixed in ZFS and I have no idea how to do
> that, since I don't know if snapshots are real mount points, etc.
>
>> ... and of course I can't mount from either v3/v4.
>>
>> On the other hand, I kept it as:
>>
>> /local/backup/home9 -ro archive-mrpriv.cs.yorku.ca
>> V4:/
>>
>> ... and was able to NFSv4 mount
>> /local/backup/home9/.zfs/snapshot/20131203, and this does indeed
>> work.
>>
> Yes, although technically it should not work unless 20131203 is
> exported.
Hmm.. I thought that this line in the exports man page meant that it
was okay:

"Because NFSv4 does not use the mount protocol, the ``administrative
controls'' are not applied. Thus, all the above export line(s) should
be considered to have the -alldirs flag, even if the line is specified
without it."

> However, it is probably the easiest work around until this is fixed
> someday.
> So, just to make sure I am clear on this...
> A NFSv4 mount of the snapshot works ok, even for a Linux client mount.
Yes.
Although with the new kernel, I can mount
nfs-server:/local/backup/home9/.zfs/snapshot now as well... which is
neat, because it solves the problem I was trying to solve.
I wanted users to be able to view their own snapshots, but not the
snapshots of other users...
Now, on the archive server, I can mount the snapshot dir via NFSv4;
then, through autofs, I am able to run a shell script that bind mounts
the users' own individual snapshot directories from the NFSv4 mount
into one directory. I then provide chrooted sftp access to that
directory for users to get at their files. A user now sees "20131203
20131204..." when they sftp in.

>>> - Try doing the mounts with a FreeBSD client and see if you get the
>>> same
>>> behaviour?
>> I found this:
>> http://forums.freenas.org/threads/mounting-snapshot-directory-using-nfs-from-linux-broken.6060/
>> .. implies it will work from FreeBSD/Nexenta, just not Linux.
> I suspect this might be the mounted_on_fileid vs fileid issue.
> (ie, The Linux client needs this to be done correctly, but the other
> clients figure it out.)
>
> One case that might break for FreeBSD would be to cd into a snapshot
> and then do a pwd with the debug.disablecwd sysctl set to 1.
>
> Hopefully the ZFS wizards are reading this, rick
Me too!

Jason.
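For what it's worth, the fsid boundary Rick describes can also be
checked from the client without wireshark. A sketch, using the paths
from Jason's example; the stat(1) format string below is the FreeBSD
one, so this assumes a FreeBSD client:

    # Each snapshot root should report its own st_dev (the fsid); the
    # same dev as the parent .zfs/snapshot directory would be wrong.
    stat -f "dev=%d ino=%i %N" /mnt/.zfs/snapshot /mnt/.zfs/snapshot/20131203
    # On a RHEL client the equivalent is: stat -c "dev=%d ino=%i %n" ...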
From owner-freebsd-fs@FreeBSD.ORG Thu Dec 12 22:04:19 2013
Date: Thu, 12 Dec 2013 14:04:18 -0800
From: Artem Belevich <artemb@gmail.com>
To: Florent Peterschmitt
Cc: freebsd-fs, freebsd-stable, Andriy Gapon
Subject: Re: 10.0-BETA4 (upgraded from 9.2-RELEASE) zpool upgrade -> boot failure
In-Reply-To: <52AA26DA.30809@peterschmitt.fr>

On Thu, Dec 12, 2013 at 1:12 PM, Florent Peterschmitt wrote:
>> Do "zdb -l /dev/ada0" (and the same for all other slices on ada0) and
>> check whether it reports anything unexpected.
>>
>> --Artem
>
> rescue-bsd# zdb -l /dev/ada0
> --------------------------------------------
> LABEL 0
> --------------------------------------------
> failed to unpack label 0
> --------------------------------------------
> LABEL 1
> --------------------------------------------
> failed to unpack label 1
> --------------------------------------------
> LABEL 2
> --------------------------------------------
> failed to unpack label 2
> --------------------------------------------
> LABEL 3
> --------------------------------------------
> failed to unpack label 3
>
> Well… this sounds bad, right?

This looks the way it's supposed to -- no unwanted ZFS pool info is
found.

Now repeat that for all ada0p? and make sure that only the slice that's
part of your pool shows ZFS labels, and only for one pool.

Think a bit about how the bootloader figures out how your pool is
built. All it has access to is the raw disk and the partition table. So
in order to find the pool it probes the raw disk and all partitions
trying to find ZFS labels, and then uses the info in those labels to
figure out the pool configuration. If the bootloader finds stale ZFS
labels left over from a previous use of the disk in some other pool,
they can potentially mess up detection of your real boot pool.

--Artem

From owner-freebsd-fs@FreeBSD.ORG Thu Dec 12 22:16:54 2013
Date: Thu, 12 Dec 2013 23:16:51 +0000
From: Florent Peterschmitt <florent@peterschmitt.fr>
To: Artem Belevich
Cc: freebsd-fs, freebsd-stable, Andriy Gapon
Subject: Re: 10.0-BETA4 (upgraded from 9.2-RELEASE) zpool upgrade -> boot failure
Message-ID: <52AA43E3.7020706@peterschmitt.fr>

On 12/12/2013 22:04, Artem Belevich wrote:
> On Thu, Dec 12, 2013 at 1:12 PM, Florent Peterschmitt wrote:
>>> Do "zdb -l /dev/ada0" (and the same for all other slices on ada0) and
>>> check whether it reports anything unexpected.
>>>
>>> --Artem
>>
>> rescue-bsd# zdb -l /dev/ada0
>> --------------------------------------------
>> LABEL 0
>> --------------------------------------------
>> failed to unpack label 0
>> --------------------------------------------
>> LABEL 1
>> --------------------------------------------
>> failed to unpack label 1
>> --------------------------------------------
>> LABEL 2
>> --------------------------------------------
>> failed to unpack label 2
>> --------------------------------------------
>> LABEL 3
>> --------------------------------------------
>> failed to unpack label 3
>>
>> Well… this sounds bad, right?
>
> This looks the way it's supposed to -- no unwanted ZFS pool info is
> found.
>
> Now repeat that for all ada0p? and make sure that only the slice that's
> part of your pool shows ZFS labels, and only for one pool.
>
> Think a bit about how the bootloader figures out how your pool is
> built. All it has access to is the raw disk and the partition table. So
> in order to find the pool it probes the raw disk and all partitions
> trying to find ZFS labels, and then uses the info in those labels to
> figure out the pool configuration. If the bootloader finds stale ZFS
> labels left over from a previous use of the disk in some other pool,
> they can potentially mess up detection of your real boot pool.
>
> --Artem
>

rescue-bsd# zdb -l /dev/ada0p1
--------------------------------------------
LABEL 0
--------------------------------------------
failed to read label 0
--------------------------------------------
LABEL 1
--------------------------------------------
failed to read label 1
--------------------------------------------
LABEL 2
--------------------------------------------
failed to read label 2
--------------------------------------------
LABEL 3
--------------------------------------------
failed to read label 3

rescue-bsd# zdb -l /dev/ada0p2
--------------------------------------------
LABEL 0
--------------------------------------------
failed to unpack label 0
--------------------------------------------
LABEL 1
--------------------------------------------
failed to unpack label 1
--------------------------------------------
LABEL 2
--------------------------------------------
failed to unpack label 2
--------------------------------------------
LABEL 3
--------------------------------------------
failed to unpack label 3

rescue-bsd# zdb -l /dev/ada0p3
--------------------------------------------
LABEL 0
--------------------------------------------
    version: 5000
    name: 'tank'
    state: 0
    txg: 1248416
    pool_guid: 14109252772653171024
    hostid: 1349238423
    hostname: 'rescue-bsd.ovh.net'
    top_guid: 8826573031965252809
    guid: 8826573031965252809
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 8826573031965252809
        path: '/dev/gpt/zfs-root'
        phys_path: '/dev/gpt/zfs-root'
        whole_disk: 1
        metaslab_array: 30
        metaslab_shift: 32
        ashift: 9
        asize: 493660405760
        is_log: 0
        create_txg: 4
    features_for_read:
--------------------------------------------
LABEL 1
--------------------------------------------
    version: 5000
    name: 'tank'
    state: 0
    txg: 1248416
    pool_guid: 14109252772653171024
    hostid: 1349238423
    hostname: 'rescue-bsd.ovh.net'
    top_guid: 8826573031965252809
    guid: 8826573031965252809
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 8826573031965252809
        path: '/dev/gpt/zfs-root'
        phys_path: '/dev/gpt/zfs-root'
        whole_disk: 1
        metaslab_array: 30
        metaslab_shift: 32
        ashift: 9
        asize: 493660405760
        is_log: 0
        create_txg: 4
    features_for_read:
--------------------------------------------
LABEL 2
--------------------------------------------
    version: 5000
    name: 'tank'
    state: 0
    txg: 1248416
    pool_guid: 14109252772653171024
    hostid: 1349238423
    hostname: 'rescue-bsd.ovh.net'
    top_guid: 8826573031965252809
    guid: 8826573031965252809
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 8826573031965252809
        path: '/dev/gpt/zfs-root'
        phys_path: '/dev/gpt/zfs-root'
        whole_disk: 1
        metaslab_array: 30
        metaslab_shift: 32
        ashift: 9
        asize: 493660405760
        is_log: 0
        create_txg: 4
    features_for_read:
--------------------------------------------
LABEL 3
--------------------------------------------
    version: 5000
    name: 'tank'
    state: 0
    txg: 1248416
    pool_guid: 14109252772653171024
    hostid: 1349238423
    hostname: 'rescue-bsd.ovh.net'
    top_guid: 8826573031965252809
    guid: 8826573031965252809
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 8826573031965252809
        path: '/dev/gpt/zfs-root'
        phys_path: '/dev/gpt/zfs-root'
        whole_disk: 1
        metaslab_array: 30
        metaslab_shift: 32
        ashift: 9
        asize: 493660405760
        is_log: 0
        create_txg: 4
    features_for_read:

Since freebsd-zfs is installed on ada0p3, it's normal to get that.
Then, what can you say about that?

--
Florent Peterschmitt          | Please:
florent@peterschmitt.fr       | * Avoid HTML/RTF in E-mail.
+33 (0)6 64 33 97 92          | * Send PDF for documents.
http://florent.peterschmitt.fr | * Trim your quotations. Really.
Proudly powered by Open Source | Thank you :)

From owner-freebsd-fs@FreeBSD.ORG Thu Dec 12 22:35:21 2013
Date: Thu, 12 Dec 2013 17:34:11 -0500 (EST)
From: Rick Macklem
To: Jason Keltz
Cc: FreeBSD Filesystems, Steve Dickson
Subject: Re: mount ZFS snapshot on Linux system
Message-ID: <1227422149.30131966.1386887651028.JavaMail.root@uoguelph.ca>
In-Reply-To: <52AA1965.9080709@cse.yorku.ca>

Jason Keltz wrote:
> On 12/11/2013 06:21 PM, Rick Macklem wrote:
> > Jason Keltz wrote:
> >> On 10/12/2013 7:21 PM, Rick Macklem wrote:
> >>> Jason Keltz wrote:
> >>>>
I'm running FreeBSD 9.2 with various ZFS datasets. > >>>> I export a dataset to a Linux system (RHEL64), and mount it. It > >>>> works > >>>> fine... > >>>> When I try to access the ZFS snapshot directory on the Linux NFS > >>>> client, > >>>> things go weird. > >>>> > >>>> With NFSv4: > >>>> > >>>> [jas@archive /]# cd /mnt/.zfs/snapshot > >>>> [jas@archive snapshot]# ls > >>>> 20131203 20131205 20131206 20131207 20131208 20131209 > >>>> 20131210 > >>>> [jas@archive snapshot]# cd 20131210 > >>>> 20131210: Not a directory. > >>>> > >>>> huh? > >>>> > >>>> [jas@archive snapshot]# ls -al > >>>> total 77 > >>>> dr-xr-xr-x 9 root root 9 Dec 10 11:20 . > >>>> dr-xr-xr-x 4 root root 4 Nov 28 15:42 .. > >>>> drwxr-xr-x 380 root root 380 Dec 2 15:56 20131203 > >>>> drwxr-xr-x 381 root root 381 Dec 3 11:24 20131205 > >>>> drwxr-xr-x 381 root root 381 Dec 3 11:24 20131206 > >>>> drwxr-xr-x 381 root root 381 Dec 3 11:24 20131207 > >>>> drwxr-xr-x 381 root root 381 Dec 3 11:24 20131208 > >>>> drwxr-xr-x 381 root root 381 Dec 3 11:24 20131209 > >>>> drwxr-xr-x 381 root root 381 Dec 3 11:24 20131210 > >>>> [jas@archive snapshot]# stat * > >>>> [jas@archive snapshot]# ls -al > >>>> total 292 > >>>> dr-xr-xr-x 9 root root 9 Dec 10 11:20 . > >>>> dr-xr-xr-x 4 root root 4 Nov 28 15:42 .. > >>>> -rw-r--r-- 1 uax guest 137647 Mar 17 2010 20131203 > >>>> -rw-r--r-- 1 uax guest 865 Jul 31 2009 20131205 > >>>> -rw-r--r-- 1 uax guest 137647 Mar 17 2010 20131206 > >>>> -rw-r--r-- 1 uax guest 771 Jul 31 2009 20131207 > >>>> -rw-r--r-- 1 uax guest 778 Jul 31 2009 20131208 > >>>> -rw-r--r-- 1 uax guest 5281 Jul 31 2009 20131209 > >>>> -rw------- 1 btx faculty 893 Jul 13 20:21 20131210 > >>>> > >>>> But it gets even more fun.. > >>>> > >>>> # ls -ali > >>>> total 205 > >>>> 2 dr-xr-xr-x 9 root root 9 Dec 10 11:20 . > >>>> 1 dr-xr-xr-x 4 root root 4 Nov 28 15:42 .. > >>>> 863 -rw-r--r-- 1 uax guest 137647 Mar 17 2010 20131203 > >>>> 4 drwxr-xr-x 381 root root 381 Dec 3 11:24 > >>>> 20131205 > >>>> 4 drwxr-xr-x 381 root root 381 Dec 3 11:24 > >>>> 20131206 > >>>> 4 drwxr-xr-x 381 root root 381 Dec 3 11:24 > >>>> 20131207 > >>>> 4 drwxr-xr-x 381 root root 381 Dec 3 11:24 > >>>> 20131208 > >>>> 4 drwxr-xr-x 381 root root 381 Dec 3 11:24 > >>>> 20131209 > >>>> 4 drwxr-xr-x 381 root root 381 Dec 3 11:24 > >>>> 20131210 > >>>> > >>>> This is not a user id mapping issue because all the files in > >>>> /mnt > >>>> have > >>>> the proper owner/groups, and I can access them there fine. > >>>> > >>>> I also tried explicitly exporting .zfs/snapshot. The result > >>>> isn't > >>>> any > >>>> different. > >>>> > >>>> If I use nfs v3 it "works", but I'm seeing a whole lot of errors > >>>> like > >>>> these in syslog: > >>>> > >>>> Dec 10 12:32:28 jungle mountd[49579]: can't delete exports for > >>>> /local/backup/home9/.zfs/snapshot/20131203: Invalid argument > >>>> Dec 10 12:32:28 jungle mountd[49579]: can't delete exports for > >>>> /local/backup/home9/.zfs/snapshot/20131209: Invalid argument > >>>> Dec 10 12:32:28 jungle mountd[49579]: can't delete exports for > >>>> /local/backup/home9/.zfs/snapshot/20131210: Invalid argument > >>>> Dec 10 12:32:28 jungle mountd[49579]: can't delete exports for > >>>> /local/backup/home9/.zfs/snapshot/20131207: Invalid argument > >>>> > >>>> It's not clear to me why this doesn't just "work". > >>>> > >>>> Can anyone provide any advice on debugging this? > >>>> > >>> As I think you already know, I know nothing about ZFS and never > >>> use it. > >> Yup! 
:) > >>> Having said that, I suspect that there are filenos (i-node #s) > >>> that are the same in the snapshot as in the parent file system > >>> tree. > >>> > >>> The basic assumptions are: > >>> - within a file system, all i-node# are unique (represent one > >>> file > >>> object only) and all file objects have the same fsid > >>> - when the fsid changes, that indicates a file system boundary > >>> and > >>> fileno (i-node#s) can be reused in the subtree with a > >>> different > >>> fsid > >>> > >>> For NFSv3, the server should export single volumes only (all > >>> objects > >>> have the same fsid and the filenos are unique). This is indicated > >>> to > >>> the VFS by the use of the NOCROSSMOUNT flag on VOP_LOOKUP() and > >>> friends. > >>> > >>> For NFSv4, the server does export multiple volumes and the > >>> boundary > >>> is indicated by a change in fsid value. > >>> > >>> I suspect ZFS snaphots don't obey the above in some way, but that > >>> is > >>> just a hunch. > >>> > >>> Now, how to narrow this down... > >>> - Do the above tests (both NFSv4 and NFSv3) and capture the > >>> packets, > >>> then look at them in wireshark. In particular, look at the > >>> fileid numbers > >>> and fsid values for the various directories under .zfs. > >> I gave this a shot, but I haven't used wireshark to capture NFS > >> traffic > >> before, so if I need to provide additional details, let me know.. > >> > >> NFSv4: > >> > >> For /mnt/.zfs/snapshot/20131203: > >> fileid=4 > >> fsid4.major=1446349656 > >> fsid4.minor=222 > >> > >> For /mnt/.zfs/snapshot/20131205: > >> fileid=4 > >> fsid4.major=1845998066 > >> fsid4.minor=222 > >> > >> For /mnt/jas: > >> fileid=144 > >> fsid4.major=597946950 > >> fsid4.minor=222 > >> > >> For /mnt/jas1: > >> fileid=338 > >> fsid4.major=597946950 > >> fsid4.minor=222 > >> > >> So fsid is the same for all the different "data" directories, > >> which > >> is > >> what I would expect given what you said. I guess each snapshot > >> is > >> seen > >> as a unique filesystem... but then a repeating inode in different > >> filesystems shouldn't be a problem... > >> > > Yes, it appears that each snapshot is represented as a different > > file > > system. As such, NFSv4 should work for these, but there is an > > additional > > property of the "root" of each of these (20131203, ...). > > When the directory .zfs/snapshot is read, the fileno for 20131203 > > should > > be different than the fileno returned by VOP_GETATTR()/stat() for > > "20131203". > > (The old "mounted-on" vs "root-of-mounted-fs" vnodes which you get > > for a > > "mount point".) > > For NFSv4, the server returns the fileno in the VOP_READDIR() > > dirent as a > > separate attribute called mounted_on_fileid vs the value returned > > by VOP_GETATTR() > > as the fileid attribute. > > If the value of these 2 attributes is the same, it is not a "mount > > point". > > > > So, maybe you could take another look at the packet capture in > > wireshark > > and see what the fileid and mounted_on_fileid attributes are? > > Unfortunately, I didn't save the log, but it was easy enough to > regenerate. > > But before we go there, I've spent a lot of time experimenting with > this, so I can say... > > If I NFSv4 mount nfs-server:/local/backup/home9 to /mnt, then I: > cd /mnt/.zfs/snapshot/20131203 > ... it works great! I can change into any user directory, list > files, etc. > If I then: > cd /mnt/.zfs/snapshot/20131205 > .. it also works great! > But... if I cd into /mnt/.zfs/snapshot, the free ride is over... 
> all the snapshot directories appear as files and the problem is > there. > > ... unless I unmount and remount, in which case I can repeat. > > I also found that a change of kernel from 2.6.32-358.14.1.el6 (the > kernel I was running with RHEL6.4) to 2.6.32-431.el6 (the kernel that > comes with RHEL6.5) does actually change something important.... > > If I mount nfs-server:/local/backup/home9 and try to change into > "/mnt/.zfs/snapshot" with the new kernel, I still have the problem. > Likewise, if I try to mount nfs-server:/local/backup/home9/.zfs, and > change into "/mnt/snapshot", I also have the problem. > If I mount nfs-server:/local/backup/home9/.zfs/snapshot and change > into > "/mnt", I stil have the older problem, but with the RH 6.4 kernel in > place. > However, if I do the same mount with the newer kernel, it now works. > I > can "ls" and see the snapshot directories. I can change into any of > them, then "cd .." and change into another one. > I tested this on two systems - one where I just installed the entire > 6.5 > upgrade, and the other where I just installed the kernel from 6.5 on > the > 6.4 system so it seems related to the kernel. > It's still not clear why I can't just mount > nfs-server:/local/backup/home9 on RHEL6.5, and the NFSv4 server > figures > it out. I did try from another FreeBSD client, and I can mount the > tree > at any point, and the NFS server is happy. This makes me believe > it's > probably a RHEL NFSv4 bug. > > Here's the numbers.. > > NFSv4: > > So, if I try to access the snapshot path directly, on the way ... > > .zfs: > V4 LOOKUP > fsid.major: 597946950 > fileid: 1 > fattr owner/group are root - correct > > snapshot: > V4 LOOKUP > fsid.major: 597946950 > fileid: 2 > fattr owner/group are root - correct > > If I access /.zfs/snapshot/20131203 directly...: > > 20131203: > V4 LOOKUP > fsid.major: 1446349656 > fileid: 4 > fattr owner/group are root - correct > > V4 READDIR snapshot, 20121203 entry: > fsid.major: 597946950 <-- ???? > fattr4_fileid: 863 > fattr4_owner/group refers to a group on our system (the one displayed > in > ls sometimes).. > FATTR4_MOUNTED_ON_FILEID: 0x000000000000035f > > But if I ls /mnt/.zfs/snapshot: > > V4 LOOKUP: > 201203: > fsid.major: 597946950 > fileid: 4 > > V4 READDIR: > fsid4.major: 597946950 > fattr4_fileid: 863 > fattr4_mounted_on_fileid: 0x000000000000035f > I'll admit I'm not sure what you are looking at, but the above does seem incorrect. Could you email me the raw packet capture, by any chance? (In particular, I need the packet capture for a readdir of .zfs/snapshot, so I can look at the attributes of all the entries.) Assuming the snapshots are represented as separate file systems, when you do a readdir of .zfs/snapshot, the fileid attribute and mounted_on_fileid attributes should be different. (0x35f == 863) Also, the fsid shouldn't be the same as .zfs/snapshot, which is 597946950 it seems. > >> NFSv3: > >> > >> For /mnt/.zfs/snapshot/20131203: > >> fileid=4 > >> fsid=0x0000000056358b58 > >> > >> For /mnt/.zfs/snapshot/20131205: > >> fileid=4 > >> fsid=0x000000006e07b1f2 > >> > >> For /mnt/jas > >> fileid=144 > >> fsid=0x0000000023a3f246 > >> > >> For /mnt/jas1: > >> fileid=338 > >> fsid=0x0000000023a3f246 > >> > >> Here, it seems it's the same, even though it's NFSv3... hmm. > >> > >> > >>> - Try mounting the individual snapshot directory, like > >>> .zfs/snapshot/20131209 and see if that works (for both NFSv3 > >>> and NFSv4). > >> Hmm .. 
I tried this: > >> > >> /local/backup/home9/.zfs/snapshot/20131203 -ro > >> archive-mrpriv.cs.yorku.ca > >> V4: / > >> > >> ... but syslog reports: > >> > >> Dec 10 22:28:22 jungle mountd[85405]: can't export > >> /local/backup/home9/.zfs/snapshot/20131203 > >> > > mountd will do a VFS_CHECKEXP(), which seems to fail for > > these (which also explains the error messages). To be honest, > > with these failing, remote access should fail. > > > > Also, since NFSv3 exported volumes should not cross > > "mount points" (anywhere the fsid changes), all a mount > > above .zfs/snapshot/20131203 should get are a bunch of > > empty directories called 20131203,... > I tried again just in case I missed something... > nfs-server:/local/backup/home9 on /mnt type nfs > (ro,vers=3,addr=172.16.2.26) > I can change into /mnt/.zfs/snapshot/20131203/jas and list the > directory, or less a file. > > > For example, if in the UFS world with a separate > > file systems /sub1 and /sub1/sub2 with both exported: > > - an NFSv3 mount of /sub1 on /mnt would see an empty > > directory "sub2" when looking in /mnt. (Actually it > > isn't necessarily empty. It might have whatever is in > > the directory when /sub1/sub2 is not mounted.) > > > > This seems pretty obviously broken for ZFS, but I think > > it needs to be fixed in ZFS and I have no idea how to do > > that, since I don`t know if snapshots are real mount points, etc. > > > >> ... and of course I can't mount from either v3/v4. > >> > >> On the other hand, I kept it as: > >> > >> /local/backup/home9 -ro archive-mrpriv.cs.yorku.ca > >> V4:/ > >> > >> ... and was able to NFSv4 mount > >> /local/backup/home9/.zfs/snapshot/20131203, and this does indeed > >> work. > >> > > Yes, although technically it should not work unless 20131203 is > > exported. > Hmm.. I thought that this line in the exports man page meant that it > was okay: > > "Because NFSv4 does not use the mount protocol, the ``administrative > controls'' are not applied. Thus, all the above export line(s) > should > be considered to have the -alldirs flag, even if the line is > specified > without it." > This means that all directories within a file system are exported. Since .zfs/snapshot/20131203 is a separate file system, it should need a separate export entry. ZFS likes to do its own thing w.r.t. exports, so I am not sure what it is actually doing w.r.t. snapshots. > > However, it is probably the easiest work around until this is fixed > > someday. > > So, just to make sure I am clear on this... > > A NFSv4 mount of the snapshot works ok, even for a Linux client > > mount. > Yes. > Although with the new kernel, I can mount > nfs-server:/local/backup/home9/.zfs/snapshot now as well... which is > neat because it solves the problem I was trying to solve.. > I wanted users to be able to view their own snapshots, but not the > snapshots of other users... > Now, on the archive server, I can mount the snapshot dir via NFSv4, > then, through autofs I am able to run a shell script that bind mounts > the users own individual snapshot directories from the NFSv4 mount > into > one directory. I then provide chrooted sftp access to that directory > for users to get at their files. A user now sees "20131203 > 20131204..." > when they sftp in.. > I can't be sure, but since you mentioned above that it is "fixed" by a dismount/remount, that would suggest it depends on what is cached in the client and might break at different times, depending what is cached? 
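A capture recipe along these lines would produce the trace Rick is
asking for; the interface name and server address are placeholders, not
details from the thread:

    # Capture the NFS exchange for one readdir of .zfs/snapshot, then
    # inspect fattr4_fileid / fattr4_mounted_on_fileid / fsid4 in wireshark.
    tcpdump -i eth0 -s 0 -w /tmp/readdir.pcap host nfs-server and port 2049 &
    ls -a /mnt/.zfs/snapshot      # triggers the READDIR on the client
    kill %1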
If you can do a mount like:
# mount -t nfs4 nfs-server:/local/backup/home9/.zfs/snapshot/20131203 /mnt
that might work reliably, although it may not be what you want.

> >>> - Try doing the mounts with a FreeBSD client and see if you get
> >>> the
> >>> same
> >>> behaviour?
> >> I found this:
> >> http://forums.freenas.org/threads/mounting-snapshot-directory-using-nfs-from-linux-broken.6060/
> >> .. implies it will work from FreeBSD/Nexenta, just not Linux.
> > I suspect this might be the mounted_on_fileid vs fileid issue.
> > (ie, The Linux client needs this to be done correctly, but the
> > other
> > clients figure it out.)
> >
> > One case that might break for FreeBSD would be to cd into a
> > snapshot
> > and then do a pwd with the debug.disablecwd sysctl set to 1.
> >
> > Hopefully the ZFS wizards are reading this, rick
> Me too!
>
> Jason.

From owner-freebsd-fs@FreeBSD.ORG Thu Dec 12 23:24:26 2013
Date: Thu, 12 Dec 2013 15:24:25 -0800
From: Artem Belevich <artemb@gmail.com>
To: Florent Peterschmitt
Cc: freebsd-fs, freebsd-stable, Andriy Gapon
Subject: Re: 10.0-BETA4 (upgraded from 9.2-RELEASE) zpool upgrade -> boot failure
In-Reply-To: <52AA43E3.7020706@peterschmitt.fr>

On Thu, Dec 12, 2013 at 3:16 PM, Florent Peterschmitt wrote:
> On 12/12/2013 22:04, Artem Belevich wrote:
>> On Thu, Dec 12, 2013 at 1:12 PM, Florent Peterschmitt wrote:
>>>> Do "zdb -l /dev/ada0" (and the same for all other slices on ada0) and
From owner-freebsd-fs@FreeBSD.ORG Thu Dec 12 23:24:26 2013
Return-Path: Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 80BCC43E; Thu, 12 Dec 2013 23:24:26 +0000 (UTC)
Received: from mail-vb0-x22d.google.com (mail-vb0-x22d.google.com [IPv6:2607:f8b0:400c:c02::22d]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 136411376; Thu, 12 Dec 2013 23:24:26 +0000 (UTC)
Received: by mail-vb0-f45.google.com with SMTP id i12so796467vbh.18 for ; Thu, 12 Dec 2013 15:24:25 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.221.60.134 with SMTP id ws6mr728955vcb.44.1386890665101; Thu, 12 Dec 2013 15:24:25 -0800 (PST)
Sender: artemb@gmail.com
Received: by 10.221.9.2 with HTTP; Thu, 12 Dec 2013 15:24:25 -0800 (PST)
In-Reply-To: <52AA43E3.7020706@peterschmitt.fr>
References: <52A6EB67.3000103@peterschmitt.fr> <52A99917.2050200@FreeBSD.org> <52A9AA45.2000907@peterschmitt.fr> <52A9ABEF.8080509@FreeBSD.org> <52AA26DA.30809@peterschmitt.fr> <52AA43E3.7020706@peterschmitt.fr>
Date: Thu, 12 Dec 2013 15:24:25 -0800
X-Google-Sender-Auth: R-flaJXYaD-rgC-YUp_fdI1cGLc
Message-ID:
Subject: Re: 10.0-BETA4 (upgraded from 9.2-RELEASE) zpool upgrade -> boot failure
From: Artem Belevich
To: Florent Peterschmitt
Content-Type: text/plain; charset=windows-1252
Cc: freebsd-fs , freebsd-stable stable , Andriy Gapon
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.17
Precedence: list
List-Id: Filesystems
List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: ,
X-List-Received-Date: Thu, 12 Dec 2013 23:24:26 -0000

On Thu, Dec 12, 2013 at 3:16 PM, Florent Peterschmitt wrote:
> On 12/12/2013 22:04, Artem Belevich wrote:
>> On Thu, Dec 12, 2013 at 1:12 PM, Florent Peterschmitt wrote:
>>>> do "zdb -l /dev/ada0" (and all other slices on ada0) and check
>>>> whether it reports anything unexpected.
>>>>
>>>> --Artem
>>>
>>> rescue-bsd# zdb -l /dev/ada0
>>> --------------------------------------------
>>> LABEL 0
>>> --------------------------------------------
>>> failed to unpack label 0
>>> --------------------------------------------
>>> LABEL 1
>>> --------------------------------------------
>>> failed to unpack label 1
>>> --------------------------------------------
>>> LABEL 2
>>> --------------------------------------------
>>> failed to unpack label 2
>>> --------------------------------------------
>>> LABEL 3
>>> --------------------------------------------
>>> failed to unpack label 3
>>>
>>> Well… this sounds bad, right?
>>
>> This looks the way it's supposed to -- no unwanted ZFS pool info is found.
>>
>> Now repeat that for all ada0p? and make sure that only the slice that's
>> part of your pool shows ZFS labels, and only for one pool.
>>
>> Think a bit about how the bootloader figures out how your pool is built.
>> All it has access to is a raw disk and a partition table. So in order to
>> find the pool it probes the raw disk and all partitions looking for ZFS
>> labels, and then uses the info in those labels to figure out the pool
>> configuration. If the bootloader finds stale ZFS labels left over from a
>> previous use of the disk in some other pool, it could potentially mess
>> up detection of your real boot pool.
>>
>> --Artem
>
> rescue-bsd# zdb -l /dev/ada0p1
...[snip]...
> --------------------------------------------
> LABEL 3
> --------------------------------------------
>     version: 5000
>     name: 'tank'
>     state: 0
>     txg: 1248416
>     pool_guid: 14109252772653171024
>     hostid: 1349238423
>     hostname: 'rescue-bsd.ovh.net'
>     top_guid: 8826573031965252809
>     guid: 8826573031965252809
>     vdev_children: 1
>     vdev_tree:
>         type: 'disk'
>         id: 0
>         guid: 8826573031965252809
>         path: '/dev/gpt/zfs-root'
>         phys_path: '/dev/gpt/zfs-root'
>         whole_disk: 1
>         metaslab_array: 30
>         metaslab_shift: 32
>         ashift: 9
>         asize: 493660405760
>         is_log: 0
>         create_txg: 4
>     features_for_read:
>
> Since freebsd-zfs is installed on ada0p3, it's normal to get that. Then,
> what can you say about that?

Well, you've eliminated the possibility that orphaned ZFS labels are messing up the boot.

--Artem
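(To make that scan concrete, a rough sketch, assuming the disk is ada0 with partitions ada0p1 through ada0p3 as in this thread:)

# for dev in /dev/ada0 /dev/ada0p1 /dev/ada0p2 /dev/ada0p3; do
>   echo "== $dev"; zdb -l $dev | grep -E "name:|pool_guid|failed to unpack"
> done

Everything except the slice backing your pool should report "failed to unpack" for all four labels. If a stale label does show up on a partition that is no longer part of any pool, recent zpool(8) versions can erase it with "zpool labelclear -f <device>"; that is destructive, so run it only on a device you are certain is unused.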
From owner-freebsd-fs@FreeBSD.ORG Fri Dec 13 05:15:44 2013
Return-Path: Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 9280C830 for ; Fri, 13 Dec 2013 05:15:44 +0000 (UTC)
Received: from smtp.nexusalpha.com (smtp.nexusalpha.com [213.48.13.50]) by mx1.freebsd.org (Postfix) with ESMTP id BED46105A for ; Fri, 13 Dec 2013 05:15:43 +0000 (UTC)
Received: from [192.168.6.154] ([192.168.1.243]) by smtp.nexusalpha.com with Microsoft SMTPSVC(6.0.3790.3959); Fri, 13 Dec 2013 05:14:36 +0000
Message-ID: <52AA97B8.8060408@nexusalpha.com>
Date: Fri, 13 Dec 2013 05:14:32 +0000
From: Ryan Baldwin
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20131029 Thunderbird/17.0.9
MIME-Version: 1.0
To: freebsd-fs@freebsd.org
Subject: ZFS related hang with FreeBSD 9.2
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
X-OriginalArrivalTime: 13 Dec 2013 05:14:36.0655 (UTC) FILETIME=[377677F0:01CEF7C2]
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.17
Precedence: list
List-Id: Filesystems
List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: ,
X-List-Received-Date: Fri, 13 Dec 2013 05:15:44 -0000

Hi,

We have a server based on FreeBSD 9.2 which hangs roughly daily. The longest uptime we have achieved is five days; conversely, it has stopped daily several days in a row.

When this occurs there appear to be two processes stuck in the 'tx->tx' state. In the top output shown these are snapshot-manager processes, which generally create and destroy snapshots and sometimes roll back filesystems to snapshots. Once the lockup occurs, other processes that try to access the file system can end up stuck in the 'rrl->r' state. The reboot command that was issued to try to restart the server is stuck in this state too, as can be seen.

The server is not under particularly heavy load.

It has remained in this state for hours. The 'deadman handler'? does not appear to restart the system. Once this has occurred there is no further disk activity.

We did not experience this problem at all on 9.1, although we ran fewer snapshot-manager processes then. We have now rebuilt this server against 9.1, but it has only been running for one day so far.

We can try to reproduce this problem again on 9.2 if by doing so we can gather any additional information that could help resolve it. Please let me know what other information would be helpful.

The hardware is a Dell R420 with a Perc H310 RAID controller in JBOD mode, with the pool mirrored on two SAS disks.
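(The workload described boils down to cycles of roughly this shape; the dataset and snapshot names are placeholders, not taken from the report, and per-thread kernel stacks like the ones below can be captured with procstat -kk -a:)

# zfs snapshot tank/data@2013-12-13-0956
# zfs destroy tank/data@2013-12-12-0956
# zfs rollback -r tank/data@2013-12-13-0000
# procstat -kk -a

Each of those ZFS operations waits either in txg_wait_synced() or on the rrw lock taken by dsl_pool_hold(), which is consistent with processes piling up in the 'tx->tx' and 'rrl->r' states once one operation wedges.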
Thanks top and procstat output follow: last pid: 76225; load averages: 0.00, 0.00, 0.00 up 0+20:04:22 09:56:24 46 processes: 1 running, 44 sleeping, 1 stopped CPU: % user, % nice, % system, % interrupt, % idle Mem: 405M Active, 133M Inact, 1206M Wired, 6084M Free ARC: 797M Total, 184M MFU, 488M MRU, 28M Anon, 32M Header, 65M Other Swap: 8192M Total, 8192M Free PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND 1170 root 1 20 0 16244K 5460K kqread 11 1:28 0.00% CustomInit 1531 root 1 20 0 39648K 6368K tx->tx 7 1:17 0.00% snapshot-manager 1135 root 1 20 0 16244K 5100K kqread 1 1:10 0.00% CustomInit 2097 root 2 20 0 2581M 472M STOP 4 0:46 0.00% java 1183 root 1 20 0 16244K 5504K kqread 0 0:22 0.00% CustomInit 1145 root 1 20 0 16244K 5464K kqread 9 0:18 0.00% CustomInit 1444 root 1 20 0 39648K 6352K tx->tx 5 0:13 0.00% snapshot-manager 1628 root 1 20 0 39648K 6348K rrl->r 10 0:12 0.00% snapshot-manager 1168 root 1 20 0 16244K 5388K kqread 7 0:07 0.00% CustomInit 1535 root 1 20 0 22388K 14296K select 4 0:05 0.00% openvpn 1163 root 1 20 0 16244K 5388K kqread 2 0:04 0.00% CustomInit 1511 root 1 20 0 18292K 10192K select 0 0:04 0.00% openvpn 1156 root 1 20 0 16244K 5392K kqread 10 0:04 0.00% CustomInit 1174 root 1 20 0 16244K 5388K kqread 1 0:04 0.00% CustomInit 1161 root 1 20 0 16244K 5392K kqread 4 0:03 0.00% CustomInit 913 root 1 20 0 12076K 1820K select 7 0:03 0.00% syslogd 2102 root 1 20 0 109M 13616K zfsvfs 3 0:03 0.00% data-processor 1445 root 1 20 0 22388K 14296K select 5 0:02 0.00% openvpn 1014 root 1 20 0 22256K 3360K select 4 0:02 0.00% ntpd 1494 root 1 20 0 18292K 10192K select 2 0:01 0.00% openvpn 1180 root 1 20 0 18292K 4780K select 0 0:01 0.00% openvpn 1505 root 1 20 0 18292K 4712K select 2 0:01 0.00% openvpn 1495 root 1 20 0 18292K 4712K select 5 0:00 0.00% openvpn 1479 root 1 20 0 18292K 4708K select 1 0:00 0.00% openvpn 1567 root 1 20 0 18292K 4712K select 3 0:00 0.00% openvpn 1545 root 1 20 0 18292K 4712K select 1 0:00 0.00% openvpn 1486 root 1 20 0 18292K 4708K select 0 0:00 0.00% openvpn 1443 root 1 20 0 18292K 4436K select 9 0:00 0.00% openvpn 1447 root 1 20 0 18292K 4448K select 4 0:00 0.00% openvpn 1496 root 1 20 0 18292K 4708K select 3 0:00 0.00% openvpn 1478 root 1 20 0 18292K 4704K select 5 0:00 0.00% openvpn 1488 root 1 20 0 18292K 10192K select 5 0:00 0.00% openvpn 1032 root 1 20 0 14176K 1860K nanslp 1 0:00 0.00% cron 76069 root 1 20 0 9948K 1680K rrl->r 7 0:00 0.00% reboot 782 root 1 20 0 10376K 4416K select 5 0:00 0.00% devd 76196 root 1 20 0 51536K 5828K select 0 0:00 0.00% sshd 76203 root 1 20 0 51536K 5828K select 2 0:00 0.00% sshd 76205 root 1 20 0 17564K 3252K ttyin 11 0:00 0.00% csh 76198 root 1 20 0 17564K 3252K pause 8 0:00 0.00% csh 75718 root 1 20 0 35556K 3956K rrl->r 11 0:00 0.00% snapshot-manager-counts 75715 root 1 20 0 35556K 3956K rrl->r 10 0:00 0.00% snapshot-manager-counts 1181 root 1 52 0 12084K 1676K ttyin 1 0:00 0.00% getty 0 100000 kernel swapper mi_switch+0x186 sleepq_timedwait+0x42 _sleep+0x1c9 scheduler+0x359 mi_startup+0x77 btext+0x2c 0 100032 kernel firmware taskq mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100037 kernel ffs_trim taskq mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100038 kernel acpi_task_0 mi_switch+0x186 sleepq_wait+0x42 msleep_spin+0x194 taskqueue_thread_loop+0x67 fork_exit+0x11f fork_trampoline+0xe 0 100039 kernel acpi_task_1 mi_switch+0x186 sleepq_wait+0x42 msleep_spin+0x194 
taskqueue_thread_loop+0x67 fork_exit+0x11f fork_trampoline+0xe 0 100040 kernel acpi_task_2 mi_switch+0x186 sleepq_wait+0x42 msleep_spin+0x194 taskqueue_thread_loop+0x67 fork_exit+0x11f fork_trampoline+0xe 0 100041 kernel kqueue taskq mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100044 kernel thread taskq mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100052 kernel bge0 taskq mi_switch+0x186 sleepq_wait+0x42 msleep_spin+0x194 taskqueue_thread_loop+0x67 fork_exit+0x11f fork_trampoline+0xe 0 100053 kernel bge1 taskq mi_switch+0x186 sleepq_wait+0x42 msleep_spin+0x194 taskqueue_thread_loop+0x67 fork_exit+0x11f fork_trampoline+0xe 0 100062 kernel mca taskq mi_switch+0x186 sleepq_wait+0x42 msleep_spin+0x194 taskqueue_thread_loop+0x67 fork_exit+0x11f fork_trampoline+0xe 0 100063 kernel system_taskq_0 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100064 kernel system_taskq_1 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100065 kernel system_taskq_2 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100066 kernel system_taskq_3 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100067 kernel system_taskq_4 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100068 kernel system_taskq_5 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100069 kernel system_taskq_6 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100070 kernel system_taskq_7 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100071 kernel system_taskq_8 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100072 kernel system_taskq_9 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100073 kernel system_taskq_10 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100074 kernel system_taskq_11 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100247 kernel zio_null_issue mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100248 kernel zio_null_intr mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100249 kernel zio_read_issue_0 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100250 kernel zio_read_issue_1 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100251 kernel zio_read_issue_2 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100252 kernel zio_read_issue_3 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100253 kernel zio_read_issue_4 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f 
fork_trampoline+0xe 0 100254 kernel zio_read_issue_5 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100255 kernel zio_read_issue_6 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100256 kernel zio_read_issue_7 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100257 kernel zio_read_intr_0 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100258 kernel zio_read_intr_1 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100259 kernel zio_read_intr_2 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100260 kernel zio_read_intr_3 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100261 kernel zio_read_intr_4 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100262 kernel zio_read_intr_5 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100263 kernel zio_read_intr_6 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100264 kernel zio_read_intr_7 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100265 kernel zio_read_intr_8 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100266 kernel zio_read_intr_9 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100267 kernel zio_read_intr_10 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100268 kernel zio_read_intr_11 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100269 kernel zio_write_issue_ mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100270 kernel zio_write_issue_ mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100271 kernel zio_write_issue_ mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100272 kernel zio_write_issue_ mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100273 kernel zio_write_issue_ mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100274 kernel zio_write_issue_ mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100275 kernel zio_write_issue_ mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100276 kernel zio_write_issue_ mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100277 kernel zio_write_issue_ mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100278 kernel zio_write_issue_ mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f 
fork_trampoline+0xe 0 100279 kernel zio_write_issue_ mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100280 kernel zio_write_issue_ mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100281 kernel zio_write_issue_ mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100282 kernel zio_write_issue_ mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100283 kernel zio_write_issue_ mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100284 kernel zio_write_issue_ mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100285 kernel zio_write_issue_ mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100286 kernel zio_write_intr_0 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100287 kernel zio_write_intr_1 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100288 kernel zio_write_intr_2 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100289 kernel zio_write_intr_3 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100290 kernel zio_write_intr_4 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100291 kernel zio_write_intr_5 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100292 kernel zio_write_intr_6 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100293 kernel zio_write_intr_7 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100294 kernel zio_write_intr_h mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100295 kernel zio_write_intr_h mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100296 kernel zio_write_intr_h mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100297 kernel zio_write_intr_h mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100298 kernel zio_write_intr_h mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100299 kernel zio_free_issue_0 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100300 kernel zio_free_issue_1 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100301 kernel zio_free_issue_2 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100302 kernel zio_free_issue_3 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100303 kernel zio_free_issue_4 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f 
fork_trampoline+0xe 0 100304 kernel zio_free_issue_5 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100305 kernel zio_free_issue_6 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100306 kernel zio_free_issue_7 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100307 kernel zio_free_issue_8 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100308 kernel zio_free_issue_9 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100309 kernel zio_free_issue_1 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100310 kernel zio_free_issue_1 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100311 kernel zio_free_issue_1 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100312 kernel zio_free_issue_1 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100313 kernel zio_free_issue_1 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100314 kernel zio_free_issue_1 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100315 kernel zio_free_issue_1 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100316 kernel zio_free_issue_1 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100317 kernel zio_free_issue_1 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100318 kernel zio_free_issue_1 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100319 kernel zio_free_issue_2 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100320 kernel zio_free_issue_2 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100321 kernel zio_free_issue_2 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100322 kernel zio_free_issue_2 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100323 kernel zio_free_issue_2 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100324 kernel zio_free_issue_2 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100325 kernel zio_free_issue_2 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100326 kernel zio_free_issue_2 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100327 kernel zio_free_issue_2 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100328 kernel zio_free_issue_2 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f 
fork_trampoline+0xe 0 100329 kernel zio_free_issue_3 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100330 kernel zio_free_issue_3 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100331 kernel zio_free_issue_3 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100332 kernel zio_free_issue_3 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100333 kernel zio_free_issue_3 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100334 kernel zio_free_issue_3 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100335 kernel zio_free_issue_3 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100336 kernel zio_free_issue_3 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100337 kernel zio_free_issue_3 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100338 kernel zio_free_issue_3 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100339 kernel zio_free_issue_4 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100340 kernel zio_free_issue_4 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100341 kernel zio_free_issue_4 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100342 kernel zio_free_issue_4 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100343 kernel zio_free_issue_4 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100344 kernel zio_free_issue_4 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100345 kernel zio_free_issue_4 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100346 kernel zio_free_issue_4 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100347 kernel zio_free_issue_4 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100348 kernel zio_free_issue_4 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100349 kernel zio_free_issue_5 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100350 kernel zio_free_issue_5 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100351 kernel zio_free_issue_5 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100352 kernel zio_free_issue_5 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100353 kernel zio_free_issue_5 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f 
fork_trampoline+0xe 0 100354 kernel zio_free_issue_5 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100355 kernel zio_free_issue_5 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100356 kernel zio_free_issue_5 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100357 kernel zio_free_issue_5 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100358 kernel zio_free_issue_5 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100359 kernel zio_free_issue_6 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100360 kernel zio_free_issue_6 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100361 kernel zio_free_issue_6 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100362 kernel zio_free_issue_6 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100363 kernel zio_free_issue_6 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100364 kernel zio_free_issue_6 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100365 kernel zio_free_issue_6 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100366 kernel zio_free_issue_6 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100367 kernel zio_free_issue_6 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100368 kernel zio_free_issue_6 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100369 kernel zio_free_issue_7 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100370 kernel zio_free_issue_7 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100371 kernel zio_free_issue_7 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100372 kernel zio_free_issue_7 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100373 kernel zio_free_issue_7 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100374 kernel zio_free_issue_7 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100375 kernel zio_free_issue_7 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100376 kernel zio_free_issue_7 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100377 kernel zio_free_issue_7 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100378 kernel zio_free_issue_7 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f 
fork_trampoline+0xe 0 100379 kernel zio_free_issue_8 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100380 kernel zio_free_issue_8 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100381 kernel zio_free_issue_8 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100382 kernel zio_free_issue_8 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100383 kernel zio_free_issue_8 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100384 kernel zio_free_issue_8 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100385 kernel zio_free_issue_8 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100386 kernel zio_free_issue_8 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100387 kernel zio_free_issue_8 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100388 kernel zio_free_issue_8 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100389 kernel zio_free_issue_9 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100390 kernel zio_free_issue_9 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100391 kernel zio_free_issue_9 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100392 kernel zio_free_issue_9 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100393 kernel zio_free_issue_9 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100394 kernel zio_free_issue_9 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100395 kernel zio_free_issue_9 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100396 kernel zio_free_issue_9 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100397 kernel zio_free_issue_9 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100398 kernel zio_free_issue_9 mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100399 kernel zio_free_intr mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100400 kernel zio_claim_issue mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100401 kernel zio_claim_intr mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100402 kernel zio_ioctl_issue mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100403 kernel zio_ioctl_intr mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f 
fork_trampoline+0xe 0 100405 kernel zfs_vn_rele_task mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100418 kernel zil_clean mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100441 kernel zil_clean mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100477 kernel zil_clean mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100478 kernel zil_clean mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100479 kernel zil_clean mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100480 kernel zil_clean mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100481 kernel zil_clean mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100482 kernel zil_clean mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100484 kernel zil_clean mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100486 kernel zil_clean mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100488 kernel zil_clean mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100489 kernel zil_clean mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100490 kernel zil_clean mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100491 kernel zil_clean mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100492 kernel zil_clean mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100493 kernel zil_clean mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100494 kernel zil_clean mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100495 kernel zil_clean mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100496 kernel zil_clean mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100497 kernel zil_clean mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100498 kernel zil_clean mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100499 kernel zil_clean mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100636 kernel zil_clean mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 0 100638 kernel zil_clean mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f fork_trampoline+0xe 1 100002 init - mi_switch+0x186 sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _sleep+0x295 kern_wait6+0x8c3 kern_wait+0x9c sys_wait4+0x35 
amd64_syscall+0x540 Xfast_syscall+0xf7 2 100033 crypto - mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 crypto_proc+0x197 fork_exit+0x11f fork_trampoline+0xe 3 100034 crypto returns - mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 crypto_ret_proc+0x192 fork_exit+0x11f fork_trampoline+0xe 4 100061 ctl_thrd - mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 ctl_work_thread+0x2fd0 fork_exit+0x11f fork_trampoline+0xe 5 100075 zfskern arc_reclaim_thre mi_switch+0x186 sleepq_timedwait+0x42 _cv_timedwait+0x135 arc_reclaim_thread+0x29d fork_exit+0x11f fork_trampoline+0xe 5 100076 zfskern l2arc_feed_threa mi_switch+0x186 sleepq_timedwait+0x42 _cv_timedwait+0x135 l2arc_feed_thread+0x1a2 fork_exit+0x11f fork_trampoline+0xe 5 100404 zfskern trim nsgroot mi_switch+0x186 sleepq_timedwait+0x42 _cv_timedwait+0x135 trim_thread+0x68 fork_exit+0x11f fork_trampoline+0xe 5 100406 zfskern txg_thread_enter mi_switch+0x186 sleepq_wait+0x42 _cv_wait+0x112 txg_thread_wait+0x79 txg_quiesce_thread+0xbb fork_exit+0x11f fork_trampoline+0xe 5 100407 zfskern txg_thread_enter mi_switch+0x186 sleepq_wait+0x42 _cv_wait+0x112 rrw_enter_write+0x35 dsl_sync_task_sync+0xb8 dsl_pool_sync+0x47d spa_sync+0x3ba txg_sync_thread+0x139 fork_exit+0x11f fork_trampoline+0xe 5 100408 zfskern zvol nsgroot/swa mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 zvol_geom_worker+0xfc fork_exit+0x11f fork_trampoline+0xe 6 100077 sctp_iterator - mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 sctp_iterator_thread+0x41 fork_exit+0x11f fork_trampoline+0xe 7 100078 xpt_thrd - mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 xpt_scanner_thread+0xff fork_exit+0x11f fork_trampoline+0xe 8 100079 ipmi0: kcs - mi_switch+0x186 sleepq_wait+0x42 _cv_wait+0x112 ipmi_dequeue_request+0x47 kcs_loop+0x3d fork_exit+0x11f fork_trampoline+0xe 9 100080 enc_daemon0 - mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 enc_daemon+0xe4 fork_exit+0x11f fork_trampoline+0xe 10 100001 audit - mi_switch+0x186 sleepq_wait+0x42 _cv_wait+0x112 audit_worker+0x359 fork_exit+0x11f fork_trampoline+0xe 11 100003 idle idle: cpu0 11 100004 idle idle: cpu1 11 100005 idle idle: cpu2 11 100006 idle idle: cpu3 11 100007 idle idle: cpu4 11 100008 idle idle: cpu5 11 100009 idle idle: cpu6 11 100010 idle idle: cpu7 11 100011 idle idle: cpu8 11 100012 idle idle: cpu9 11 100013 idle idle: cpu10 mi_switch+0x186 critical_exit+0xa5 sched_idletd+0x118 fork_exit+0x11f fork_trampoline+0xe 11 100014 idle idle: cpu11 12 100015 intr swi4: clock mi_switch+0x186 ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe 12 100016 intr swi4: clock mi_switch+0x186 ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe 12 100017 intr swi4: clock mi_switch+0x186 ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe 12 100018 intr swi4: clock mi_switch+0x186 ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe 12 100019 intr swi4: clock mi_switch+0x186 ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe 12 100020 intr swi4: clock mi_switch+0x186 ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe 12 100021 intr swi4: clock mi_switch+0x186 ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe 12 100022 intr swi4: clock mi_switch+0x186 ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe 12 100023 intr swi4: clock mi_switch+0x186 ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe 12 100024 intr swi4: clock mi_switch+0x186 ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe 12 100025 intr swi4: clock mi_switch+0x186 ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe 12 100026 intr swi4: clock mi_switch+0x186 
ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe 12 100027 intr swi3: vm 12 100028 intr swi1: netisr 0 mi_switch+0x186 ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe 12 100036 intr swi6: task queue mi_switch+0x186 ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe 12 100042 intr swi2: cambio mi_switch+0x186 ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe 12 100043 intr swi5: fast taskq 12 100045 intr swi6: Giant task mi_switch+0x186 ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe 12 100046 intr irq264: mfi0 mi_switch+0x186 ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe 12 100047 intr irq23: ehci0 mi_switch+0x186 ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe 12 100054 intr irq22: ehci1 mi_switch+0x186 ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe 12 100059 intr irq267: ahci0 12 100060 intr swi0: uart uart 13 100029 geom g_event mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 g_run_events+0x440 fork_exit+0x11f fork_trampoline+0xe 13 100030 geom g_up mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 g_io_schedule_up+0xe6 g_up_procbody+0x5c fork_exit+0x11f fork_trampoline+0xe 13 100031 geom g_down mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 g_io_schedule_down+0x25f g_down_procbody+0x5c fork_exit+0x11f fork_trampoline+0xe 14 100035 yarrow - mi_switch+0x186 sleepq_timedwait+0x42 _sleep+0x1c9 random_kthread+0x1ea fork_exit+0x11f fork_trampoline+0xe 15 100048 usb usbus0 mi_switch+0x186 sleepq_wait+0x42 _cv_wait+0x112 usb_process+0x18b fork_exit+0x11f fork_trampoline+0xe 15 100049 usb usbus0 mi_switch+0x186 sleepq_wait+0x42 _cv_wait+0x112 usb_process+0x18b fork_exit+0x11f fork_trampoline+0xe 15 100050 usb usbus0 mi_switch+0x186 sleepq_wait+0x42 _cv_wait+0x112 usb_process+0x18b fork_exit+0x11f fork_trampoline+0xe 15 100051 usb usbus0 mi_switch+0x186 sleepq_wait+0x42 _cv_wait+0x112 usb_process+0x18b fork_exit+0x11f fork_trampoline+0xe 15 100055 usb usbus1 mi_switch+0x186 sleepq_wait+0x42 _cv_wait+0x112 usb_process+0x18b fork_exit+0x11f fork_trampoline+0xe 15 100056 usb usbus1 mi_switch+0x186 sleepq_wait+0x42 _cv_wait+0x112 usb_process+0x18b fork_exit+0x11f fork_trampoline+0xe 15 100057 usb usbus1 mi_switch+0x186 sleepq_wait+0x42 _cv_wait+0x112 usb_process+0x18b fork_exit+0x11f fork_trampoline+0xe 15 100058 usb usbus1 mi_switch+0x186 sleepq_wait+0x42 _cv_wait+0x112 usb_process+0x18b fork_exit+0x11f fork_trampoline+0xe 16 100081 pagedaemon - mi_switch+0x186 sleepq_timedwait+0x42 _sleep+0x1c9 vm_pageout+0xb34 fork_exit+0x11f fork_trampoline+0xe 17 100082 vmdaemon - mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 vm_daemon+0x58 fork_exit+0x11f fork_trampoline+0xe 18 100083 pagezero - mi_switch+0x186 sleepq_timedwait+0x42 _sleep+0x1c9 vm_pagezero+0x83 fork_exit+0x11f fork_trampoline+0xe 19 100084 bufdaemon - mi_switch+0x186 sleepq_timedwait+0x42 _sleep+0x1c9 buf_daemon+0x1e1 fork_exit+0x11f fork_trampoline+0xe 20 100085 syncer - mi_switch+0x186 sleepq_wait+0x42 _cv_wait+0x112 rrw_enter_read+0x4b zfs_sync+0x5e sync_fsync+0x1a2 VOP_FSYNC_APV+0x68 sync_vnode+0x16b sched_sync+0x1c5 fork_exit+0x11f fork_trampoline+0xe 21 100086 vnlru - mi_switch+0x186 sleepq_wait+0x42 _sx_slock_hard+0x318 _sx_slock+0x56 zfs_freebsd_reclaim+0x4b VOP_RECLAIM_APV+0x68 vgonel+0x134 vnlru_free+0x362 vnlru_proc+0x61e fork_exit+0x11f fork_trampoline+0xe 22 100087 softdepflush - mi_switch+0x186 sleepq_timedwait+0x42 _sleep+0x1c9 softdep_flush+0x375 fork_exit+0x11f fork_trampoline+0xe 782 100432 devd - mi_switch+0x186 sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 
_cv_timedwait_sig+0x135 seltdwait+0x8d kern_select+0x6ef sys_select+0x5d amd64_syscall+0x540 Xfast_syscall+0xf7 790 100545 pfpurge - mi_switch+0x186 sleepq_timedwait+0x42 _sleep+0x1c9 pf_purge_thread+0x31 fork_exit+0x11f fork_trampoline+0xe 913 100449 syslogd - mi_switch+0x186 sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _cv_wait_sig+0x12a seltdwait+0xf6 kern_select+0x6ef sys_select+0x5d amd64_syscall+0x540 Xfast_syscall+0xf7 1014 100581 ntpd - mi_switch+0x186 sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _cv_wait_sig+0x12a seltdwait+0xf6 kern_select+0x6ef sys_select+0x5d amd64_syscall+0x540 Xfast_syscall+0xf7 1028 100532 sshd - mi_switch+0x186 sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _cv_wait_sig+0x12a seltdwait+0xf6 kern_select+0x6ef sys_select+0x5d amd64_syscall+0x540 Xfast_syscall+0xf7 1032 100473 cron - mi_switch+0x186 sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 _sleep+0x2ca kern_nanosleep+0x118 sys_nanosleep+0x6e amd64_syscall+0x540 Xfast_syscall+0xf7 1135 100558 CustomInit - mi_switch+0x186 sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _sleep+0x295 kern_kevent+0x369 sys_kevent+0x90 amd64_syscall+0x540 Xfast_syscall+0xf7 1145 100509 CustomInit - mi_switch+0x186 sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _sleep+0x295 kern_kevent+0x369 sys_kevent+0x90 amd64_syscall+0x540 Xfast_syscall+0xf7 1156 100508 CustomInit - mi_switch+0x186 sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _sleep+0x295 kern_kevent+0x369 sys_kevent+0x90 amd64_syscall+0x540 Xfast_syscall+0xf7 1161 100507 CustomInit - mi_switch+0x186 sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _sleep+0x295 kern_kevent+0x369 sys_kevent+0x90 amd64_syscall+0x540 Xfast_syscall+0xf7 1163 100521 CustomInit - mi_switch+0x186 sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _sleep+0x295 kern_kevent+0x369 sys_kevent+0x90 amd64_syscall+0x540 Xfast_syscall+0xf7 1168 100574 CustomInit - mi_switch+0x186 sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _sleep+0x295 kern_kevent+0x369 sys_kevent+0x90 amd64_syscall+0x540 Xfast_syscall+0xf7 1170 100561 CustomInit - mi_switch+0x186 sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _sleep+0x295 kern_kevent+0x369 sys_kevent+0x90 amd64_syscall+0x540 Xfast_syscall+0xf7 1174 100560 CustomInit - mi_switch+0x186 sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _sleep+0x295 kern_kevent+0x369 sys_kevent+0x90 amd64_syscall+0x540 Xfast_syscall+0xf7 1180 100420 openvpn - mi_switch+0x186 sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 amd64_syscall+0x540 Xfast_syscall+0xf7 1181 100463 getty - mi_switch+0x186 sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _cv_wait_sig+0x12a tty_wait+0x25 ttydisc_read+0x2dd ttydev_read+0xc4 devfs_read_f+0x90 dofileread+0xa1 kern_readv+0x6c sys_read+0x64 amd64_syscall+0x540 Xfast_syscall+0xf7 1182 100430 getty - mi_switch+0x186 sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _cv_wait_sig+0x12a tty_wait+0x25 ttydisc_read+0x2dd ttydev_read+0xc4 devfs_read_f+0x90 dofileread+0xa1 kern_readv+0x6c sys_read+0x64 amd64_syscall+0x540 Xfast_syscall+0xf7 1183 100421 CustomInit - mi_switch+0x186 sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _sleep+0x295 kern_kevent+0x369 sys_kevent+0x90 amd64_syscall+0x540 Xfast_syscall+0xf7 1443 100599 openvpn - mi_switch+0x186 sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 amd64_syscall+0x540 Xfast_syscall+0xf7 1444 100527 snapshot-manager - mi_switch+0x186 sleepq_wait+0x42 _cv_wait+0x112 txg_wait_synced+0x85 
dsl_sync_task+0x139 dsl_destroy_snapshots_nvl+0x71 dsl_destroy_snapshot+0x4a zfs_ioc_destroy+0x3e zfsdev_ioctl+0x58d devfs_ioctl_f+0x7b kern_ioctl+0x106 sys_ioctl+0xfd amd64_syscall+0x540 Xfast_syscall+0xf7 1445 100514 openvpn - mi_switch+0x186 sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 amd64_syscall+0x540 Xfast_syscall+0xf7 1447 100557 openvpn - mi_switch+0x186 sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 amd64_syscall+0x540 Xfast_syscall+0xf7 1478 100555 openvpn - mi_switch+0x186 sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 amd64_syscall+0x540 Xfast_syscall+0xf7 1479 100551 openvpn - mi_switch+0x186 sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 amd64_syscall+0x540 Xfast_syscall+0xf7 1486 100576 openvpn - mi_switch+0x186 sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 amd64_syscall+0x540 Xfast_syscall+0xf7 1488 100563 openvpn - mi_switch+0x186 sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 amd64_syscall+0x540 Xfast_syscall+0xf7 1494 100575 openvpn - mi_switch+0x186 sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 amd64_syscall+0x540 Xfast_syscall+0xf7 1495 100442 openvpn - mi_switch+0x186 sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 amd64_syscall+0x540 Xfast_syscall+0xf7 1496 100556 openvpn - mi_switch+0x186 sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 amd64_syscall+0x540 Xfast_syscall+0xf7 1505 100431 openvpn - mi_switch+0x186 sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 amd64_syscall+0x540 Xfast_syscall+0xf7 1511 100475 openvpn - mi_switch+0x186 sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 amd64_syscall+0x540 Xfast_syscall+0xf7 1531 100429 snapshot-manager - mi_switch+0x186 sleepq_wait+0x42 _cv_wait+0x112 txg_wait_synced+0x85 dsl_sync_task+0x139 zfs_ioc_rollback+0xb8 zfsdev_ioctl+0x58d devfs_ioctl_f+0x7b kern_ioctl+0x106 sys_ioctl+0xfd amd64_syscall+0x540 Xfast_syscall+0xf7 1535 100434 openvpn - mi_switch+0x186 sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 amd64_syscall+0x540 Xfast_syscall+0xf7 1545 100459 openvpn - mi_switch+0x186 sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 amd64_syscall+0x540 Xfast_syscall+0xf7 1567 100539 openvpn - mi_switch+0x186 sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 amd64_syscall+0x540 Xfast_syscall+0xf7 1628 100454 snapshot-manager - mi_switch+0x186 sleepq_wait+0x42 _cv_wait+0x112 rrw_enter_read+0x4b dsl_pool_hold+0x44 dmu_objset_hold+0x2a zfs_ioc_objset_stats+0x23 zfsdev_ioctl+0x58d devfs_ioctl_f+0x7b kern_ioctl+0x106 sys_ioctl+0xfd amd64_syscall+0x540 Xfast_syscall+0xf7 2097 100710 java - mi_switch+0x186 thread_suspend_switch+0xcd thread_single+0x1b2 exit1+0x72 sys_sys_exit+0xe amd64_syscall+0x540 Xfast_syscall+0xf7 2097 100792 java - mi_switch+0x186 sleepq_wait+0x42 __lockmgr_args+0x5cb vop_stdlock+0x39 
VOP_LOCK1_APV+0x70 _vn_lock+0x47 zfsctl_freebsd_root_lookup+0xd5 VOP_LOOKUP_APV+0x62 lookup+0x437 namei+0x4ac kern_statat_vnhook+0xb3 kern_statat+0x15 sys_stat+0x2a amd64_syscall+0x540 Xfast_syscall+0xf7 2102 100635 data-processor dm_siri_to_chiro mi_switch+0x186 sleepq_wait+0x42 _sx_slock_hard+0x318 _sx_slock+0x56 zfs_freebsd_reclaim+0x4b VOP_RECLAIM_APV+0x68 vgonel+0x134 vnlru_free+0x362 getnewvnode+0x27d gfs_file_create+0x4b gfs_dir_create+0x16 zfsctl_snapdir_lookup+0x474 VOP_LOOKUP_APV+0x62 lookup+0x437 namei+0x4ac kern_statat_vnhook+0xb3 kern_statat+0x15 sys_stat+0x2a 75715 100424 snapshot-manager-counts mi_switch+0x186 sleepq_wait+0x42 _cv_wait+0x112 rrw_enter_read+0x4b dsl_pool_hold+0x44 dmu_objset_hold+0x2a zfs_ioc_objset_stats+0x23 zfsdev_ioctl+0x58d devfs_ioctl_f+0x7b kern_ioctl+0x106 sys_ioctl+0xfd amd64_syscall+0x540 Xfast_syscall+0xf7 75718 100861 snapshot-manager-counts mi_switch+0x186 sleepq_wait+0x42 _cv_wait+0x112 rrw_enter_read+0x4b dsl_pool_hold+0x44 dmu_objset_hold+0x2a zfs_ioc_objset_stats+0x23 zfsdev_ioctl+0x58d devfs_ioctl_f+0x7b kern_ioctl+0x106 sys_ioctl+0xfd amd64_syscall+0x540 Xfast_syscall+0xf7 75719 100578 snapshot-manager-counts mi_switch+0x186 sleepq_wait+0x42 _cv_wait+0x112 rrw_enter_read+0x4b dsl_pool_hold+0x44 dmu_objset_hold+0x2a zfs_ioc_objset_stats+0x23 zfsdev_ioctl+0x58d devfs_ioctl_f+0x7b kern_ioctl+0x106 sys_ioctl+0xfd amd64_syscall+0x540 Xfast_syscall+0xf7 76069 101690 reboot - mi_switch+0x186 sleepq_wait+0x42 _cv_wait+0x112 rrw_enter_read+0x4b zfs_sync+0x5e sys_sync+0x1e8 amd64_syscall+0x540 Xfast_syscall+0xf7 76196 100847 sshd - mi_switch+0x186 sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _cv_wait_sig+0x12a seltdwait+0xf6 kern_select+0x6ef sys_select+0x5d amd64_syscall+0x540 Xfast_syscall+0xf7 76198 101378 csh - mi_switch+0x186 sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _sleep+0x295 kern_sigsuspend+0xab sys_sigsuspend+0x34 amd64_syscall+0x540 Xfast_syscall+0xf7 76200 100892 top - mi_switch+0x186 sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _cv_wait_sig+0x12a tty_wait+0x25 ttydisc_read+0x2dd ttydev_read+0xc4 devfs_read_f+0x90 dofileread+0xa1 kern_readv+0x6c sys_read+0x64 amd64_syscall+0x540 Xfast_syscall+0xf7 76203 100447 sshd - 76205 101410 csh - mi_switch+0x186 sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _sleep+0x295 kern_sigsuspend+0xab sys_sigsuspend+0x34 amd64_syscall+0x540 Xfast_syscall+0xf7 76209 101030 procstat - From owner-freebsd-fs@FreeBSD.ORG Fri Dec 13 09:05:53 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id AC767DB4; Fri, 13 Dec 2013 09:05:53 +0000 (UTC) Received: from smtp.peterschmitt.fr (smtp.peterschmitt.fr [IPv6:2a01:4f8:a0:72c8:4224::3]) by mx1.freebsd.org (Postfix) with ESMTP id 6B866147D; Fri, 13 Dec 2013 09:05:53 +0000 (UTC) Received: from [192.168.0.170] (unknown [82.226.113.5]) by smtp.peterschmitt.fr (Postfix) with ESMTPSA id 62669627F9; Fri, 13 Dec 2013 10:06:14 +0100 (CET) Message-ID: <52AACDDB.3010500@peterschmitt.fr> Date: Fri, 13 Dec 2013 10:05:31 +0100 From: Florent Peterschmitt User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20131103 Icedove/17.0.10 MIME-Version: 1.0 To: Martin Simmons Subject: Re: 10.0-BETA4 (upgraded from 9.2-RELEASE) zpool upgrade -> boot failure References: <52A6EB67.3000103@peterschmitt.fr> <52A99917.2050200@FreeBSD.org> <52A9AA45.2000907@peterschmitt.fr> 
<52A9ABEF.8080509@FreeBSD.org> <52A9AD9C.2090200@peterschmitt.fr> <201312121540.rBCFePGB013820@higson.cam.lispworks.com>
In-Reply-To: <201312121540.rBCFePGB013820@higson.cam.lispworks.com>
X-Enigmail-Version: 1.6
Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="dGtrwfIJQn7k1lx7jEUEBQR4bGkeujvAq"
Cc: freebsd-fs@freebsd.org, freebsd-stable@freebsd.org, avg@freebsd.org
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.17
Precedence: list
List-Id: Filesystems
List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: ,
X-List-Received-Date: Fri, 13 Dec 2013 09:05:53 -0000

On 12/12/2013 16:40, Martin Simmons wrote:
> Did you rerun the gpart bootcode command after installing FreeBSD 10? If not,
> maybe the 9.2 bootcode can't handle the upgraded pool? If you did rerun it,
> check that /boot/gptzfsboot doesn't exceed the size of the partition (your
> zfs.sh uses -s 128 = 64k).

I've upgraded the bootcode, and the size of it is 40k, so…

--
Florent Peterschmitt          | Please:
florent@peterschmitt.fr       | * Avoid HTML/RTF in E-mail.
+33 (0)6 64 33 97 92          | * Send PDF for documents.
http://florent.peterschmitt.fr | * Trim your quotations. Really.
Proudly powered by Open Source | Thank you :)
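(For reference, a sketch of the refresh under discussion; the partition index and disk depend on your layout, with index 1 on ada0 being a common freebsd-boot slot:)

# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
# ls -l /boot/gptzfsboot
# gpart show ada0

The ls size is where the 40k figure comes from, and gpart show confirms that the freebsd-boot partition really is the 64k that -s 128 creates (128 sectors x 512 bytes).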
From owner-freebsd-fs@FreeBSD.ORG Fri Dec 13 23:21:11 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 0B7F2901 for ; Fri, 13 Dec 2013 23:21:11 +0000 (UTC) Received: from mail-we0-x22c.google.com (mail-we0-x22c.google.com [IPv6:2a00:1450:400c:c03::22c]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 25E6018F4 for ; Fri, 13 Dec 2013 23:21:10 +0000 (UTC) MIME-Version: 1.0 In-Reply-To: <52AA97B8.8060408@nexusalpha.com> References: <52AA97B8.8060408@nexusalpha.com> Date: Fri, 13 Dec 2013 18:21:06 -0500 Message-ID: Subject: Re: ZFS related hang with FreeBSD 9.2 From: Rod Taylor To: Ryan Baldwin Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.17 Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 13 Dec 2013 23:21:11 -0000

Are you using snapshots?

I've found ZFS snapshots on 9.0, 9.1, and 9.2 regularly crash the system.
Delete the snapshots and don't create any new ones, and suddenly it's stable
for months.
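For reference, a minimal sketch of listing and clearing snapshots as suggested above. The dataset and snapshot names are placeholders, and zfs destroy is irreversible, so verify the name first:

    # list every snapshot with its space usage and creation time
    zfs list -t snapshot -o name,used,creation
    # destroy a single snapshot (placeholder name; check it before running)
    zfs destroy tank/data@2013-12-13-hourly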
On Fri, Dec 13, 2013 at 12:14 AM, Ryan Baldwin wrote:

> Hi,
>
> We have a server based on FreeBSD 9.2 which hangs at times on a daily
> basis. The longest uptime we have achieved is 5 days; conversely, it has
> stopped daily several days in a row.
>
> When this occurs it appears there are two processes stuck in 'tx->tx'
> state. In the top output shown these are snapshot-manager processes, which
> generally create and destroy snapshots, and sometimes roll back filesystems
> to snapshots. When the lockup occurs, other processes which try to access
> the file system can end up stuck in state 'rrl->r'. The reboot command that
> was issued to restart the server has ended up stuck in this state, as can
> be seen.
>
> The server is not under particularly heavy load.
>
> It has remained in this state for hours. The 'deadman handler'? does not
> appear to restart the system. Once this has occurred there is no further
> disk activity.
>
> We did not experience this problem at all previously using 9.1, although we
> had fewer snapshot-manager processes then. We have now rebuilt this server
> against 9.1, but it has only been running for one day so far.
>
> We can try to reproduce this problem again on 9.2 if by doing so we can
> gather any additional information that could help resolve it. Please let me
> know what other information would be helpful.
>
> The hardware is a Dell R420 with a Perc H310 raid controller in JBOD mode,
> with the pool mirrored on two SAS disks.
>
> Thanks
>
> top and procstat output follow:
>
>
> last pid: 76225; load averages: 0.00, 0.00, 0.00 up 0+20:04:22 09:56:24
> 46 processes: 1 running, 44 sleeping, 1 stopped
> CPU: % user, % nice, % system, % interrupt, % idle
> Mem: 405M Active, 133M Inact, 1206M Wired, 6084M Free
> ARC: 797M Total, 184M MFU, 488M MRU, 28M Anon, 32M Header, 65M Other
> Swap: 8192M Total, 8192M Free
>
> PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND
> 1170 root 1 20 0 16244K 5460K kqread 11 1:28 0.00% CustomInit
> 1531 root 1 20 0 39648K 6368K tx->tx 7 1:17 0.00% snapshot-manager
> 1135 root 1 20 0 16244K 5100K kqread 1 1:10 0.00% CustomInit
> 2097 root 2 20 0 2581M 472M STOP 4 0:46 0.00% java
> 1183 root 1 20 0 16244K 5504K kqread 0 0:22 0.00% CustomInit
> 1145 root 1 20 0 16244K 5464K kqread 9 0:18 0.00% CustomInit
> 1444 root 1 20 0 39648K 6352K tx->tx 5 0:13 0.00% snapshot-manager
> 1628 root 1 20 0 39648K 6348K rrl->r 10 0:12 0.00% snapshot-manager
> 1168 root 1 20 0 16244K 5388K kqread 7 0:07 0.00% CustomInit
> 1535 root 1 20 0 22388K 14296K select 4 0:05 0.00% openvpn
> 1163 root 1 20 0 16244K 5388K kqread 2 0:04 0.00% CustomInit
> 1511 root 1 20 0 18292K 10192K select 0 0:04 0.00% openvpn
> 1156 root 1 20 0 16244K 5392K kqread 10 0:04 0.00% CustomInit
> 1174 root 1 20 0 16244K 5388K kqread 1 0:04 0.00% CustomInit
> 1161 root 1 20 0 16244K 5392K kqread 4 0:03 0.00% CustomInit
> 913 root 1 20 0 12076K 1820K select 7 0:03 0.00% syslogd
> 2102 root 1 20 0 109M 13616K zfsvfs 3 0:03 0.00% data-processor
> 1445 root 1 20 0 22388K 14296K select 5 0:02 0.00% openvpn
> 1014 root 1 20 0 22256K 3360K select 4 0:02 0.00% ntpd
> 1494 root 1 20 0 18292K 10192K select 2 0:01 0.00% openvpn
> 1180 root 1 20 0 18292K 4780K select 0 0:01 0.00% openvpn
> 1505 root 1 20 0 18292K 4712K select 2 0:01 0.00% openvpn
> 1495 root 1 20 0 18292K 4712K select 5 0:00 0.00% openvpn
> 1479 root 1 20 0 18292K 4708K select 1 0:00 0.00% openvpn
> 1567 root 1 20 0 18292K 4712K select 3 0:00 0.00% openvpn
> 1545 root 1 20 0 18292K 4712K select 1 0:00 0.00% openvpn
> 1486 root 1 20 0 18292K 4708K select 0 0:00 0.00% openvpn
> 1443 root 1 20 0 18292K 4436K select 9 0:00 0.00% openvpn
> 1447 root 1 20 0 18292K 4448K select 4 0:00 0.00% openvpn
> 1496 root 1 20 0 18292K 4708K select 3 0:00 0.00% openvpn
> 1478 root 1 20 0 18292K 4704K select 5 0:00 0.00% openvpn
> 1488 root 1 20 0 18292K 10192K select 5 0:00 0.00% openvpn
> 1032 root 1 20 0 14176K 1860K nanslp 1 0:00 0.00% cron
> 76069 root 1 20 0 9948K 1680K rrl->r 7 0:00 0.00% reboot
> 782 root 1 20 0 10376K 4416K select 5 0:00 0.00% devd
> 76196 root 1 20 0 51536K 5828K select 0 0:00 0.00% sshd
> 76203 root 1 20 0 51536K 5828K select 2 0:00 0.00% sshd
> 76205 root 1 20 0 17564K 3252K ttyin 11 0:00 0.00% csh
> 76198 root 1 20 0 17564K 3252K pause 8 0:00 0.00% csh
> 75718 root 1 20 0 35556K 3956K rrl->r 11 0:00 0.00% snapshot-manager-counts
> 75715 root 1 20 0 35556K
3956K rrl->r 10 0:00 0.00% > snapshot-manager-counts > 1181 root 1 52 0 12084K 1676K ttyin 1 0:00 0.00% getty > > > 0 100000 kernel swapper mi_switch+0x186 > sleepq_timedwait+0x42 _sleep+0x1c9 scheduler+0x359 mi_startup+0x77 > btext+0x2c > 0 100032 kernel firmware taskq mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100037 kernel ffs_trim taskq mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100038 kernel acpi_task_0 mi_switch+0x186 > sleepq_wait+0x42 msleep_spin+0x194 taskqueue_thread_loop+0x67 > fork_exit+0x11f fork_trampoline+0xe > 0 100039 kernel acpi_task_1 mi_switch+0x186 > sleepq_wait+0x42 msleep_spin+0x194 taskqueue_thread_loop+0x67 > fork_exit+0x11f fork_trampoline+0xe > 0 100040 kernel acpi_task_2 mi_switch+0x186 > sleepq_wait+0x42 msleep_spin+0x194 taskqueue_thread_loop+0x67 > fork_exit+0x11f fork_trampoline+0xe > 0 100041 kernel kqueue taskq mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100044 kernel thread taskq mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100052 kernel bge0 taskq mi_switch+0x186 > sleepq_wait+0x42 msleep_spin+0x194 taskqueue_thread_loop+0x67 > fork_exit+0x11f fork_trampoline+0xe > 0 100053 kernel bge1 taskq mi_switch+0x186 > sleepq_wait+0x42 msleep_spin+0x194 taskqueue_thread_loop+0x67 > fork_exit+0x11f fork_trampoline+0xe > 0 100062 kernel mca taskq mi_switch+0x186 > sleepq_wait+0x42 msleep_spin+0x194 taskqueue_thread_loop+0x67 > fork_exit+0x11f fork_trampoline+0xe > 0 100063 kernel system_taskq_0 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100064 kernel system_taskq_1 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100065 kernel system_taskq_2 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100066 kernel system_taskq_3 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100067 kernel system_taskq_4 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100068 kernel system_taskq_5 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100069 kernel system_taskq_6 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100070 kernel system_taskq_7 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100071 kernel system_taskq_8 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100072 kernel system_taskq_9 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100073 kernel system_taskq_10 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100074 kernel system_taskq_11 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100247 kernel zio_null_issue mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 
taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100248 kernel zio_null_intr mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100249 kernel zio_read_issue_0 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100250 kernel zio_read_issue_1 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100251 kernel zio_read_issue_2 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100252 kernel zio_read_issue_3 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100253 kernel zio_read_issue_4 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100254 kernel zio_read_issue_5 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100255 kernel zio_read_issue_6 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100256 kernel zio_read_issue_7 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100257 kernel zio_read_intr_0 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100258 kernel zio_read_intr_1 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100259 kernel zio_read_intr_2 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100260 kernel zio_read_intr_3 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100261 kernel zio_read_intr_4 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100262 kernel zio_read_intr_5 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100263 kernel zio_read_intr_6 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100264 kernel zio_read_intr_7 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100265 kernel zio_read_intr_8 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100266 kernel zio_read_intr_9 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100267 kernel zio_read_intr_10 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100268 kernel zio_read_intr_11 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100269 kernel zio_write_issue_ mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100270 kernel zio_write_issue_ mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100271 kernel zio_write_issue_ mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 
taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100272 kernel zio_write_issue_ mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100273 kernel zio_write_issue_ mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100274 kernel zio_write_issue_ mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100275 kernel zio_write_issue_ mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100276 kernel zio_write_issue_ mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100277 kernel zio_write_issue_ mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100278 kernel zio_write_issue_ mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100279 kernel zio_write_issue_ mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100280 kernel zio_write_issue_ mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100281 kernel zio_write_issue_ mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100282 kernel zio_write_issue_ mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100283 kernel zio_write_issue_ mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100284 kernel zio_write_issue_ mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100285 kernel zio_write_issue_ mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100286 kernel zio_write_intr_0 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100287 kernel zio_write_intr_1 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100288 kernel zio_write_intr_2 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100289 kernel zio_write_intr_3 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100290 kernel zio_write_intr_4 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100291 kernel zio_write_intr_5 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100292 kernel zio_write_intr_6 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100293 kernel zio_write_intr_7 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100294 kernel zio_write_intr_h mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100295 kernel zio_write_intr_h mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 
taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100296 kernel zio_write_intr_h mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100297 kernel zio_write_intr_h mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100298 kernel zio_write_intr_h mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100299 kernel zio_free_issue_0 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100300 kernel zio_free_issue_1 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100301 kernel zio_free_issue_2 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100302 kernel zio_free_issue_3 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100303 kernel zio_free_issue_4 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100304 kernel zio_free_issue_5 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100305 kernel zio_free_issue_6 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100306 kernel zio_free_issue_7 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100307 kernel zio_free_issue_8 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100308 kernel zio_free_issue_9 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100309 kernel zio_free_issue_1 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100310 kernel zio_free_issue_1 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100311 kernel zio_free_issue_1 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100312 kernel zio_free_issue_1 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100313 kernel zio_free_issue_1 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100314 kernel zio_free_issue_1 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100315 kernel zio_free_issue_1 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100316 kernel zio_free_issue_1 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100317 kernel zio_free_issue_1 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100318 kernel zio_free_issue_1 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100319 kernel zio_free_issue_2 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 
taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100320 kernel zio_free_issue_2 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100321 kernel zio_free_issue_2 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100322 kernel zio_free_issue_2 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100323 kernel zio_free_issue_2 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100324 kernel zio_free_issue_2 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100325 kernel zio_free_issue_2 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100326 kernel zio_free_issue_2 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100327 kernel zio_free_issue_2 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100328 kernel zio_free_issue_2 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100329 kernel zio_free_issue_3 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100330 kernel zio_free_issue_3 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100331 kernel zio_free_issue_3 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100332 kernel zio_free_issue_3 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100333 kernel zio_free_issue_3 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100334 kernel zio_free_issue_3 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100335 kernel zio_free_issue_3 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100336 kernel zio_free_issue_3 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100337 kernel zio_free_issue_3 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100338 kernel zio_free_issue_3 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100339 kernel zio_free_issue_4 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100340 kernel zio_free_issue_4 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100341 kernel zio_free_issue_4 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100342 kernel zio_free_issue_4 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100343 kernel zio_free_issue_4 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 
taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100344 kernel zio_free_issue_4 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100345 kernel zio_free_issue_4 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100346 kernel zio_free_issue_4 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100347 kernel zio_free_issue_4 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100348 kernel zio_free_issue_4 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100349 kernel zio_free_issue_5 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100350 kernel zio_free_issue_5 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100351 kernel zio_free_issue_5 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100352 kernel zio_free_issue_5 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100353 kernel zio_free_issue_5 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100354 kernel zio_free_issue_5 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100355 kernel zio_free_issue_5 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100356 kernel zio_free_issue_5 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100357 kernel zio_free_issue_5 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100358 kernel zio_free_issue_5 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100359 kernel zio_free_issue_6 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100360 kernel zio_free_issue_6 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100361 kernel zio_free_issue_6 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100362 kernel zio_free_issue_6 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100363 kernel zio_free_issue_6 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100364 kernel zio_free_issue_6 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100365 kernel zio_free_issue_6 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100366 kernel zio_free_issue_6 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100367 kernel zio_free_issue_6 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 
taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100368 kernel zio_free_issue_6 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100369 kernel zio_free_issue_7 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100370 kernel zio_free_issue_7 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100371 kernel zio_free_issue_7 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100372 kernel zio_free_issue_7 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100373 kernel zio_free_issue_7 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100374 kernel zio_free_issue_7 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100375 kernel zio_free_issue_7 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100376 kernel zio_free_issue_7 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100377 kernel zio_free_issue_7 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100378 kernel zio_free_issue_7 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100379 kernel zio_free_issue_8 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100380 kernel zio_free_issue_8 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100381 kernel zio_free_issue_8 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100382 kernel zio_free_issue_8 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100383 kernel zio_free_issue_8 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100384 kernel zio_free_issue_8 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100385 kernel zio_free_issue_8 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100386 kernel zio_free_issue_8 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100387 kernel zio_free_issue_8 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100388 kernel zio_free_issue_8 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100389 kernel zio_free_issue_9 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100390 kernel zio_free_issue_9 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100391 kernel zio_free_issue_9 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 
taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100392 kernel zio_free_issue_9 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100393 kernel zio_free_issue_9 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100394 kernel zio_free_issue_9 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100395 kernel zio_free_issue_9 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100396 kernel zio_free_issue_9 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100397 kernel zio_free_issue_9 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100398 kernel zio_free_issue_9 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100399 kernel zio_free_intr mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100400 kernel zio_claim_issue mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100401 kernel zio_claim_intr mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100402 kernel zio_ioctl_issue mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100403 kernel zio_ioctl_intr mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100405 kernel zfs_vn_rele_task mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100418 kernel zil_clean mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100441 kernel zil_clean mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100477 kernel zil_clean mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100478 kernel zil_clean mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100479 kernel zil_clean mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100480 kernel zil_clean mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100481 kernel zil_clean mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100482 kernel zil_clean mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100484 kernel zil_clean mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100486 kernel zil_clean mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100488 kernel zil_clean mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100489 kernel 
zil_clean mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100490 kernel zil_clean mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100491 kernel zil_clean mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100492 kernel zil_clean mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100493 kernel zil_clean mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100494 kernel zil_clean mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100495 kernel zil_clean mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100496 kernel zil_clean mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100497 kernel zil_clean mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100498 kernel zil_clean mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100499 kernel zil_clean mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100636 kernel zil_clean mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 0 100638 kernel zil_clean mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f > fork_trampoline+0xe > 1 100002 init - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _sleep+0x295 > kern_wait6+0x8c3 kern_wait+0x9c sys_wait4+0x35 amd64_syscall+0x540 > Xfast_syscall+0xf7 > 2 100033 crypto - mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 crypto_proc+0x197 fork_exit+0x11f > fork_trampoline+0xe > 3 100034 crypto returns - mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 crypto_ret_proc+0x192 fork_exit+0x11f > fork_trampoline+0xe > 4 100061 ctl_thrd - mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 ctl_work_thread+0x2fd0 fork_exit+0x11f > fork_trampoline+0xe > 5 100075 zfskern arc_reclaim_thre mi_switch+0x186 > sleepq_timedwait+0x42 _cv_timedwait+0x135 arc_reclaim_thread+0x29d > fork_exit+0x11f fork_trampoline+0xe > 5 100076 zfskern l2arc_feed_threa mi_switch+0x186 > sleepq_timedwait+0x42 _cv_timedwait+0x135 l2arc_feed_thread+0x1a2 > fork_exit+0x11f fork_trampoline+0xe > 5 100404 zfskern trim nsgroot mi_switch+0x186 > sleepq_timedwait+0x42 _cv_timedwait+0x135 trim_thread+0x68 fork_exit+0x11f > fork_trampoline+0xe > 5 100406 zfskern txg_thread_enter mi_switch+0x186 > sleepq_wait+0x42 _cv_wait+0x112 txg_thread_wait+0x79 > txg_quiesce_thread+0xbb fork_exit+0x11f fork_trampoline+0xe > 5 100407 zfskern txg_thread_enter mi_switch+0x186 > sleepq_wait+0x42 _cv_wait+0x112 rrw_enter_write+0x35 > dsl_sync_task_sync+0xb8 dsl_pool_sync+0x47d spa_sync+0x3ba > txg_sync_thread+0x139 fork_exit+0x11f fork_trampoline+0xe > 5 100408 zfskern zvol nsgroot/swa mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 zvol_geom_worker+0xfc fork_exit+0x11f > fork_trampoline+0xe > 6 100077 sctp_iterator - mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 sctp_iterator_thread+0x41 fork_exit+0x11f > fork_trampoline+0xe > 7 100078 
xpt_thrd - mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 xpt_scanner_thread+0xff fork_exit+0x11f > fork_trampoline+0xe > 8 100079 ipmi0: kcs - mi_switch+0x186 > sleepq_wait+0x42 _cv_wait+0x112 ipmi_dequeue_request+0x47 kcs_loop+0x3d > fork_exit+0x11f fork_trampoline+0xe > 9 100080 enc_daemon0 - mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 enc_daemon+0xe4 fork_exit+0x11f > fork_trampoline+0xe > 10 100001 audit - mi_switch+0x186 > sleepq_wait+0x42 _cv_wait+0x112 audit_worker+0x359 fork_exit+0x11f > fork_trampoline+0xe > 11 100003 idle idle: cpu0 > 11 100004 idle idle: cpu1 > 11 100005 idle idle: cpu2 > 11 100006 idle idle: cpu3 > 11 100007 idle idle: cpu4 > 11 100008 idle idle: cpu5 > 11 100009 idle idle: cpu6 > 11 100010 idle idle: cpu7 > 11 100011 idle idle: cpu8 > 11 100012 idle idle: cpu9 > 11 100013 idle idle: cpu10 mi_switch+0x186 > critical_exit+0xa5 sched_idletd+0x118 fork_exit+0x11f fork_trampoline+0xe > 11 100014 idle idle: cpu11 > 12 100015 intr swi4: clock mi_switch+0x186 > ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe > 12 100016 intr swi4: clock mi_switch+0x186 > ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe > 12 100017 intr swi4: clock mi_switch+0x186 > ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe > 12 100018 intr swi4: clock mi_switch+0x186 > ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe > 12 100019 intr swi4: clock mi_switch+0x186 > ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe > 12 100020 intr swi4: clock mi_switch+0x186 > ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe > 12 100021 intr swi4: clock mi_switch+0x186 > ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe > 12 100022 intr swi4: clock mi_switch+0x186 > ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe > 12 100023 intr swi4: clock mi_switch+0x186 > ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe > 12 100024 intr swi4: clock mi_switch+0x186 > ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe > 12 100025 intr swi4: clock mi_switch+0x186 > ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe > 12 100026 intr swi4: clock mi_switch+0x186 > ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe > 12 100027 intr swi3: vm > 12 100028 intr swi1: netisr 0 mi_switch+0x186 > ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe > 12 100036 intr swi6: task queue mi_switch+0x186 > ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe > 12 100042 intr swi2: cambio mi_switch+0x186 > ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe > 12 100043 intr swi5: fast taskq > 12 100045 intr swi6: Giant task mi_switch+0x186 > ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe > 12 100046 intr irq264: mfi0 mi_switch+0x186 > ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe > 12 100047 intr irq23: ehci0 mi_switch+0x186 > ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe > 12 100054 intr irq22: ehci1 mi_switch+0x186 > ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe > 12 100059 intr irq267: ahci0 > 12 100060 intr swi0: uart uart > 13 100029 geom g_event mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 g_run_events+0x440 fork_exit+0x11f > fork_trampoline+0xe > 13 100030 geom g_up mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 g_io_schedule_up+0xe6 g_up_procbody+0x5c > fork_exit+0x11f fork_trampoline+0xe > 13 100031 geom g_down mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 g_io_schedule_down+0x25f g_down_procbody+0x5c > fork_exit+0x11f fork_trampoline+0xe > 14 100035 yarrow - mi_switch+0x186 > sleepq_timedwait+0x42 _sleep+0x1c9 
random_kthread+0x1ea fork_exit+0x11f > fork_trampoline+0xe > 15 100048 usb usbus0 mi_switch+0x186 > sleepq_wait+0x42 _cv_wait+0x112 usb_process+0x18b fork_exit+0x11f > fork_trampoline+0xe > 15 100049 usb usbus0 mi_switch+0x186 > sleepq_wait+0x42 _cv_wait+0x112 usb_process+0x18b fork_exit+0x11f > fork_trampoline+0xe > 15 100050 usb usbus0 mi_switch+0x186 > sleepq_wait+0x42 _cv_wait+0x112 usb_process+0x18b fork_exit+0x11f > fork_trampoline+0xe > 15 100051 usb usbus0 mi_switch+0x186 > sleepq_wait+0x42 _cv_wait+0x112 usb_process+0x18b fork_exit+0x11f > fork_trampoline+0xe > 15 100055 usb usbus1 mi_switch+0x186 > sleepq_wait+0x42 _cv_wait+0x112 usb_process+0x18b fork_exit+0x11f > fork_trampoline+0xe > 15 100056 usb usbus1 mi_switch+0x186 > sleepq_wait+0x42 _cv_wait+0x112 usb_process+0x18b fork_exit+0x11f > fork_trampoline+0xe > 15 100057 usb usbus1 mi_switch+0x186 > sleepq_wait+0x42 _cv_wait+0x112 usb_process+0x18b fork_exit+0x11f > fork_trampoline+0xe > 15 100058 usb usbus1 mi_switch+0x186 > sleepq_wait+0x42 _cv_wait+0x112 usb_process+0x18b fork_exit+0x11f > fork_trampoline+0xe > 16 100081 pagedaemon - mi_switch+0x186 > sleepq_timedwait+0x42 _sleep+0x1c9 vm_pageout+0xb34 fork_exit+0x11f > fork_trampoline+0xe > 17 100082 vmdaemon - mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 vm_daemon+0x58 fork_exit+0x11f > fork_trampoline+0xe > 18 100083 pagezero - mi_switch+0x186 > sleepq_timedwait+0x42 _sleep+0x1c9 vm_pagezero+0x83 fork_exit+0x11f > fork_trampoline+0xe > 19 100084 bufdaemon - mi_switch+0x186 > sleepq_timedwait+0x42 _sleep+0x1c9 buf_daemon+0x1e1 fork_exit+0x11f > fork_trampoline+0xe > 20 100085 syncer - mi_switch+0x186 > sleepq_wait+0x42 _cv_wait+0x112 rrw_enter_read+0x4b zfs_sync+0x5e > sync_fsync+0x1a2 VOP_FSYNC_APV+0x68 sync_vnode+0x16b sched_sync+0x1c5 > fork_exit+0x11f fork_trampoline+0xe > 21 100086 vnlru - mi_switch+0x186 > sleepq_wait+0x42 _sx_slock_hard+0x318 _sx_slock+0x56 > zfs_freebsd_reclaim+0x4b VOP_RECLAIM_APV+0x68 vgonel+0x134 vnlru_free+0x362 > vnlru_proc+0x61e fork_exit+0x11f fork_trampoline+0xe > 22 100087 softdepflush - mi_switch+0x186 > sleepq_timedwait+0x42 _sleep+0x1c9 softdep_flush+0x375 fork_exit+0x11f > fork_trampoline+0xe > 782 100432 devd - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 > _cv_timedwait_sig+0x135 seltdwait+0x8d kern_select+0x6ef sys_select+0x5d > amd64_syscall+0x540 Xfast_syscall+0xf7 > 790 100545 pfpurge - mi_switch+0x186 > sleepq_timedwait+0x42 _sleep+0x1c9 pf_purge_thread+0x31 fork_exit+0x11f > fork_trampoline+0xe > 913 100449 syslogd - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _cv_wait_sig+0x12a > seltdwait+0xf6 kern_select+0x6ef sys_select+0x5d amd64_syscall+0x540 > Xfast_syscall+0xf7 > 1014 100581 ntpd - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _cv_wait_sig+0x12a > seltdwait+0xf6 kern_select+0x6ef sys_select+0x5d amd64_syscall+0x540 > Xfast_syscall+0xf7 > 1028 100532 sshd - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _cv_wait_sig+0x12a > seltdwait+0xf6 kern_select+0x6ef sys_select+0x5d amd64_syscall+0x540 > Xfast_syscall+0xf7 > 1032 100473 cron - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 _sleep+0x2ca > kern_nanosleep+0x118 sys_nanosleep+0x6e amd64_syscall+0x540 > Xfast_syscall+0xf7 > 1135 100558 CustomInit - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _sleep+0x295 > kern_kevent+0x369 sys_kevent+0x90 amd64_syscall+0x540 Xfast_syscall+0xf7 > 1145 100509 CustomInit - mi_switch+0x186 > 
sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _sleep+0x295 > kern_kevent+0x369 sys_kevent+0x90 amd64_syscall+0x540 Xfast_syscall+0xf7 > 1156 100508 CustomInit - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _sleep+0x295 > kern_kevent+0x369 sys_kevent+0x90 amd64_syscall+0x540 Xfast_syscall+0xf7 > 1161 100507 CustomInit - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _sleep+0x295 > kern_kevent+0x369 sys_kevent+0x90 amd64_syscall+0x540 Xfast_syscall+0xf7 > 1163 100521 CustomInit - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _sleep+0x295 > kern_kevent+0x369 sys_kevent+0x90 amd64_syscall+0x540 Xfast_syscall+0xf7 > 1168 100574 CustomInit - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _sleep+0x295 > kern_kevent+0x369 sys_kevent+0x90 amd64_syscall+0x540 Xfast_syscall+0xf7 > 1170 100561 CustomInit - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _sleep+0x295 > kern_kevent+0x369 sys_kevent+0x90 amd64_syscall+0x540 Xfast_syscall+0xf7 > 1174 100560 CustomInit - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _sleep+0x295 > kern_kevent+0x369 sys_kevent+0x90 amd64_syscall+0x540 Xfast_syscall+0xf7 > 1180 100420 openvpn - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 > _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 amd64_syscall+0x540 > Xfast_syscall+0xf7 > 1181 100463 getty - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _cv_wait_sig+0x12a > tty_wait+0x25 ttydisc_read+0x2dd ttydev_read+0xc4 devfs_read_f+0x90 > dofileread+0xa1 kern_readv+0x6c sys_read+0x64 amd64_syscall+0x540 > Xfast_syscall+0xf7 > 1182 100430 getty - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _cv_wait_sig+0x12a > tty_wait+0x25 ttydisc_read+0x2dd ttydev_read+0xc4 devfs_read_f+0x90 > dofileread+0xa1 kern_readv+0x6c sys_read+0x64 amd64_syscall+0x540 > Xfast_syscall+0xf7 > 1183 100421 CustomInit - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _sleep+0x295 > kern_kevent+0x369 sys_kevent+0x90 amd64_syscall+0x540 Xfast_syscall+0xf7 > 1443 100599 openvpn - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 > _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 amd64_syscall+0x540 > Xfast_syscall+0xf7 > 1444 100527 snapshot-manager - mi_switch+0x186 > sleepq_wait+0x42 _cv_wait+0x112 txg_wait_synced+0x85 dsl_sync_task+0x139 > dsl_destroy_snapshots_nvl+0x71 dsl_destroy_snapshot+0x4a > zfs_ioc_destroy+0x3e zfsdev_ioctl+0x58d devfs_ioctl_f+0x7b kern_ioctl+0x106 > sys_ioctl+0xfd amd64_syscall+0x540 Xfast_syscall+0xf7 > 1445 100514 openvpn - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 > _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 amd64_syscall+0x540 > Xfast_syscall+0xf7 > 1447 100557 openvpn - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 > _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 amd64_syscall+0x540 > Xfast_syscall+0xf7 > 1478 100555 openvpn - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 > _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 amd64_syscall+0x540 > Xfast_syscall+0xf7 > 1479 100551 openvpn - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 > _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 amd64_syscall+0x540 > Xfast_syscall+0xf7 > 1486 100576 openvpn - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 > _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 
amd64_syscall+0x540 > Xfast_syscall+0xf7 > 1488 100563 openvpn - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 > _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 amd64_syscall+0x540 > Xfast_syscall+0xf7 > 1494 100575 openvpn - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 > _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 amd64_syscall+0x540 > Xfast_syscall+0xf7 > 1495 100442 openvpn - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 > _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 amd64_syscall+0x540 > Xfast_syscall+0xf7 > 1496 100556 openvpn - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 > _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 amd64_syscall+0x540 > Xfast_syscall+0xf7 > 1505 100431 openvpn - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 > _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 amd64_syscall+0x540 > Xfast_syscall+0xf7 > 1511 100475 openvpn - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 > _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 amd64_syscall+0x540 > Xfast_syscall+0xf7 > 1531 100429 snapshot-manager - mi_switch+0x186 > sleepq_wait+0x42 _cv_wait+0x112 txg_wait_synced+0x85 dsl_sync_task+0x139 > zfs_ioc_rollback+0xb8 zfsdev_ioctl+0x58d devfs_ioctl_f+0x7b > kern_ioctl+0x106 sys_ioctl+0xfd amd64_syscall+0x540 Xfast_syscall+0xf7 > 1535 100434 openvpn - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 > _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 amd64_syscall+0x540 > Xfast_syscall+0xf7 > 1545 100459 openvpn - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 > _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 amd64_syscall+0x540 > Xfast_syscall+0xf7 > 1567 100539 openvpn - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 > _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 amd64_syscall+0x540 > Xfast_syscall+0xf7 > 1628 100454 snapshot-manager - mi_switch+0x186 > sleepq_wait+0x42 _cv_wait+0x112 rrw_enter_read+0x4b dsl_pool_hold+0x44 > dmu_objset_hold+0x2a zfs_ioc_objset_stats+0x23 zfsdev_ioctl+0x58d > devfs_ioctl_f+0x7b kern_ioctl+0x106 sys_ioctl+0xfd amd64_syscall+0x540 > Xfast_syscall+0xf7 > 2097 100710 java - mi_switch+0x186 > thread_suspend_switch+0xcd thread_single+0x1b2 exit1+0x72 sys_sys_exit+0xe > amd64_syscall+0x540 Xfast_syscall+0xf7 > 2097 100792 java - mi_switch+0x186 > sleepq_wait+0x42 __lockmgr_args+0x5cb vop_stdlock+0x39 VOP_LOCK1_APV+0x70 > _vn_lock+0x47 zfsctl_freebsd_root_lookup+0xd5 VOP_LOOKUP_APV+0x62 > lookup+0x437 namei+0x4ac kern_statat_vnhook+0xb3 kern_statat+0x15 > sys_stat+0x2a amd64_syscall+0x540 Xfast_syscall+0xf7 > 2102 100635 data-processor dm_siri_to_chiro mi_switch+0x186 > sleepq_wait+0x42 _sx_slock_hard+0x318 _sx_slock+0x56 > zfs_freebsd_reclaim+0x4b VOP_RECLAIM_APV+0x68 vgonel+0x134 vnlru_free+0x362 > getnewvnode+0x27d gfs_file_create+0x4b gfs_dir_create+0x16 > zfsctl_snapdir_lookup+0x474 VOP_LOOKUP_APV+0x62 lookup+0x437 namei+0x4ac > kern_statat_vnhook+0xb3 kern_statat+0x15 sys_stat+0x2a > 75715 100424 snapshot-manager-counts mi_switch+0x186 sleepq_wait+0x42 > _cv_wait+0x112 rrw_enter_read+0x4b dsl_pool_hold+0x44 dmu_objset_hold+0x2a > zfs_ioc_objset_stats+0x23 zfsdev_ioctl+0x58d devfs_ioctl_f+0x7b > kern_ioctl+0x106 sys_ioctl+0xfd amd64_syscall+0x540 Xfast_syscall+0xf7 > 75718 100861 snapshot-manager-counts mi_switch+0x186 sleepq_wait+0x42 > _cv_wait+0x112 
rrw_enter_read+0x4b dsl_pool_hold+0x44 dmu_objset_hold+0x2a > zfs_ioc_objset_stats+0x23 zfsdev_ioctl+0x58d devfs_ioctl_f+0x7b > kern_ioctl+0x106 sys_ioctl+0xfd amd64_syscall+0x540 Xfast_syscall+0xf7 > 75719 100578 snapshot-manager-counts mi_switch+0x186 sleepq_wait+0x42 > _cv_wait+0x112 rrw_enter_read+0x4b dsl_pool_hold+0x44 dmu_objset_hold+0x2a > zfs_ioc_objset_stats+0x23 zfsdev_ioctl+0x58d devfs_ioctl_f+0x7b > kern_ioctl+0x106 sys_ioctl+0xfd amd64_syscall+0x540 Xfast_syscall+0xf7 > 76069 101690 reboot - mi_switch+0x186 > sleepq_wait+0x42 _cv_wait+0x112 rrw_enter_read+0x4b zfs_sync+0x5e > sys_sync+0x1e8 amd64_syscall+0x540 Xfast_syscall+0xf7 > 76196 100847 sshd - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _cv_wait_sig+0x12a > seltdwait+0xf6 kern_select+0x6ef sys_select+0x5d amd64_syscall+0x540 > Xfast_syscall+0xf7 > 76198 101378 csh - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _sleep+0x295 > kern_sigsuspend+0xab sys_sigsuspend+0x34 amd64_syscall+0x540 > Xfast_syscall+0xf7 > 76200 100892 top - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _cv_wait_sig+0x12a > tty_wait+0x25 ttydisc_read+0x2dd ttydev_read+0xc4 devfs_read_f+0x90 > dofileread+0xa1 kern_readv+0x6c sys_read+0x64 amd64_syscall+0x540 > Xfast_syscall+0xf7 > 76203 100447 sshd - > 76205 101410 csh - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _sleep+0x295 > kern_sigsuspend+0xab sys_sigsuspend+0x34 amd64_syscall+0x540 > Xfast_syscall+0xf7 > 76209 101030 procstat - > > > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > From owner-freebsd-fs@FreeBSD.ORG Sat Dec 14 00:55:43 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 65490543 for ; Sat, 14 Dec 2013 00:55:43 +0000 (UTC) Received: from mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) (using TLSv1 with cipher RC4-MD5 (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 0951310DE for ; Sat, 14 Dec 2013 00:55:41 +0000 (UTC) Received: from r2d2 ([82.69.179.241]) by mail1.multiplay.co.uk (mail1.multiplay.co.uk [85.236.96.23]) (MDaemon PRO v10.0.4) with ESMTP id md50007031124.msg for ; Sat, 14 Dec 2013 00:55:32 +0000 X-Spam-Processed: mail1.multiplay.co.uk, Sat, 14 Dec 2013 00:55:32 +0000 (not processed: message from valid local sender) X-MDDKIM-Result: neutral (mail1.multiplay.co.uk) X-MDRemoteIP: 82.69.179.241 X-Return-Path: prvs=1060ac9e12=killing@multiplay.co.uk X-Envelope-From: killing@multiplay.co.uk X-MDaemon-Deliver-To: freebsd-fs@freebsd.org Message-ID: From: "Steven Hartland" To: "Rod Taylor" , "Ryan Baldwin" References: <52AA97B8.8060408@nexusalpha.com> Subject: Re: ZFS related hang with FreeBSD 9.2 Date: Sat, 14 Dec 2013 00:55:27 -0000 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="iso-8859-1"; reply-type=original Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157 Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , 
List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 14 Dec 2013 00:55:43 -0000

Are you doing any snapshot sends as well as interacting with snapshots,
such as listing files in them via the .zfs directory?

If so, make sure you have the following patch applied, as that can cause a
deadlock between these two operations:

http://svnweb.freebsd.org/changeset/base/258595

Regards
Steve
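A sketch of one way to pick up that changeset, assuming /usr/src is a Subversion checkout of a branch that already contains r258595 (if it does not, the diff from the changeset page would have to be applied by hand):

    # update the tree to at least the revision carrying the fix
    cd /usr/src && svn update -r 258595
    # rebuild and install the kernel, then reboot into it
    make buildkernel && make installkernel
    shutdown -r now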
>> >> Thanks >> >> top and procstat output follow: >> >> >> last pid: 76225; load averages: 0.00, 0.00, 0.00 up 0+20:04:22 >> 09:56:24 >> 46 processes: 1 running, 44 sleeping, 1 stopped >> CPU: % user, % nice, % system, % interrupt, % idle >> Mem: 405M Active, 133M Inact, 1206M Wired, 6084M Free >> ARC: 797M Total, 184M MFU, 488M MRU, 28M Anon, 32M Header, 65M Other >> Swap: 8192M Total, 8192M Free >> >> PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU >> COMMAND >> 1170 root 1 20 0 16244K 5460K kqread 11 1:28 0.00% >> CustomInit >> 1531 root 1 20 0 39648K 6368K tx->tx 7 1:17 0.00% >> snapshot-manager >> 1135 root 1 20 0 16244K 5100K kqread 1 1:10 0.00% >> CustomInit >> 2097 root 2 20 0 2581M 472M STOP 4 0:46 0.00% java >> 1183 root 1 20 0 16244K 5504K kqread 0 0:22 0.00% >> CustomInit >> 1145 root 1 20 0 16244K 5464K kqread 9 0:18 0.00% >> CustomInit >> 1444 root 1 20 0 39648K 6352K tx->tx 5 0:13 0.00% >> snapshot-manager >> 1628 root 1 20 0 39648K 6348K rrl->r 10 0:12 0.00% >> snapshot-manager >> 1168 root 1 20 0 16244K 5388K kqread 7 0:07 0.00% >> CustomInit >> 1535 root 1 20 0 22388K 14296K select 4 0:05 0.00% >> openvpn >> 1163 root 1 20 0 16244K 5388K kqread 2 0:04 0.00% >> CustomInit >> 1511 root 1 20 0 18292K 10192K select 0 0:04 0.00% >> openvpn >> 1156 root 1 20 0 16244K 5392K kqread 10 0:04 0.00% >> CustomInit >> 1174 root 1 20 0 16244K 5388K kqread 1 0:04 0.00% >> CustomInit >> 1161 root 1 20 0 16244K 5392K kqread 4 0:03 0.00% >> CustomInit >> 913 root 1 20 0 12076K 1820K select 7 0:03 0.00% >> syslogd >> 2102 root 1 20 0 109M 13616K zfsvfs 3 0:03 0.00% >> data-processor >> 1445 root 1 20 0 22388K 14296K select 5 0:02 0.00% >> openvpn >> 1014 root 1 20 0 22256K 3360K select 4 0:02 0.00% ntpd >> 1494 root 1 20 0 18292K 10192K select 2 0:01 0.00% >> openvpn >> 1180 root 1 20 0 18292K 4780K select 0 0:01 0.00% >> openvpn >> 1505 root 1 20 0 18292K 4712K select 2 0:01 0.00% >> openvpn >> 1495 root 1 20 0 18292K 4712K select 5 0:00 0.00% >> openvpn >> 1479 root 1 20 0 18292K 4708K select 1 0:00 0.00% >> openvpn >> 1567 root 1 20 0 18292K 4712K select 3 0:00 0.00% >> openvpn >> 1545 root 1 20 0 18292K 4712K select 1 0:00 0.00% >> openvpn >> 1486 root 1 20 0 18292K 4708K select 0 0:00 0.00% >> openvpn >> 1443 root 1 20 0 18292K 4436K select 9 0:00 0.00% >> openvpn >> 1447 root 1 20 0 18292K 4448K select 4 0:00 0.00% >> openvpn >> 1496 root 1 20 0 18292K 4708K select 3 0:00 0.00% >> openvpn >> 1478 root 1 20 0 18292K 4704K select 5 0:00 0.00% >> openvpn >> 1488 root 1 20 0 18292K 10192K select 5 0:00 0.00% >> openvpn >> 1032 root 1 20 0 14176K 1860K nanslp 1 0:00 0.00% cron >> 76069 root 1 20 0 9948K 1680K rrl->r 7 0:00 0.00% reboot >> 782 root 1 20 0 10376K 4416K select 5 0:00 0.00% devd >> 76196 root 1 20 0 51536K 5828K select 0 0:00 0.00% sshd >> 76203 root 1 20 0 51536K 5828K select 2 0:00 0.00% sshd >> 76205 root 1 20 0 17564K 3252K ttyin 11 0:00 0.00% csh >> 76198 root 1 20 0 17564K 3252K pause 8 0:00 0.00% csh >> 75718 root 1 20 0 35556K 3956K rrl->r 11 0:00 0.00% >> snapshot-manager-counts >> 75715 root 1 20 0 35556K 3956K rrl->r 10 0:00 0.00% >> snapshot-manager-counts >> 1181 root 1 52 0 12084K 1676K ttyin 1 0:00 0.00% getty >> >> >> 0 100000 kernel swapper mi_switch+0x186 >> sleepq_timedwait+0x42 _sleep+0x1c9 scheduler+0x359 mi_startup+0x77 >> btext+0x2c >> 0 100032 kernel firmware taskq mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100037 kernel ffs_trim taskq mi_switch+0x186 >> sleepq_wait+0x42 
_sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100038 kernel acpi_task_0 mi_switch+0x186 >> sleepq_wait+0x42 msleep_spin+0x194 taskqueue_thread_loop+0x67 >> fork_exit+0x11f fork_trampoline+0xe >> 0 100039 kernel acpi_task_1 mi_switch+0x186 >> sleepq_wait+0x42 msleep_spin+0x194 taskqueue_thread_loop+0x67 >> fork_exit+0x11f fork_trampoline+0xe >> 0 100040 kernel acpi_task_2 mi_switch+0x186 >> sleepq_wait+0x42 msleep_spin+0x194 taskqueue_thread_loop+0x67 >> fork_exit+0x11f fork_trampoline+0xe >> 0 100041 kernel kqueue taskq mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100044 kernel thread taskq mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100052 kernel bge0 taskq mi_switch+0x186 >> sleepq_wait+0x42 msleep_spin+0x194 taskqueue_thread_loop+0x67 >> fork_exit+0x11f fork_trampoline+0xe >> 0 100053 kernel bge1 taskq mi_switch+0x186 >> sleepq_wait+0x42 msleep_spin+0x194 taskqueue_thread_loop+0x67 >> fork_exit+0x11f fork_trampoline+0xe >> 0 100062 kernel mca taskq mi_switch+0x186 >> sleepq_wait+0x42 msleep_spin+0x194 taskqueue_thread_loop+0x67 >> fork_exit+0x11f fork_trampoline+0xe >> 0 100063 kernel system_taskq_0 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100064 kernel system_taskq_1 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100065 kernel system_taskq_2 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100066 kernel system_taskq_3 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100067 kernel system_taskq_4 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100068 kernel system_taskq_5 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100069 kernel system_taskq_6 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100070 kernel system_taskq_7 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100071 kernel system_taskq_8 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100072 kernel system_taskq_9 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100073 kernel system_taskq_10 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100074 kernel system_taskq_11 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100247 kernel zio_null_issue mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100248 kernel zio_null_intr mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100249 kernel zio_read_issue_0 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100250 kernel zio_read_issue_1 
mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100251 kernel zio_read_issue_2 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100252 kernel zio_read_issue_3 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100253 kernel zio_read_issue_4 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100254 kernel zio_read_issue_5 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100255 kernel zio_read_issue_6 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100256 kernel zio_read_issue_7 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100257 kernel zio_read_intr_0 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100258 kernel zio_read_intr_1 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100259 kernel zio_read_intr_2 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100260 kernel zio_read_intr_3 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100261 kernel zio_read_intr_4 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100262 kernel zio_read_intr_5 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100263 kernel zio_read_intr_6 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100264 kernel zio_read_intr_7 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100265 kernel zio_read_intr_8 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100266 kernel zio_read_intr_9 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100267 kernel zio_read_intr_10 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100268 kernel zio_read_intr_11 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100269 kernel zio_write_issue_ mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100270 kernel zio_write_issue_ mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100271 kernel zio_write_issue_ mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100272 kernel zio_write_issue_ mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100273 kernel zio_write_issue_ mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f 
>> fork_trampoline+0xe >> 0 100274 kernel zio_write_issue_ mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100275 kernel zio_write_issue_ mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100276 kernel zio_write_issue_ mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100277 kernel zio_write_issue_ mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100278 kernel zio_write_issue_ mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100279 kernel zio_write_issue_ mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100280 kernel zio_write_issue_ mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100281 kernel zio_write_issue_ mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100282 kernel zio_write_issue_ mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100283 kernel zio_write_issue_ mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100284 kernel zio_write_issue_ mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100285 kernel zio_write_issue_ mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100286 kernel zio_write_intr_0 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100287 kernel zio_write_intr_1 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100288 kernel zio_write_intr_2 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100289 kernel zio_write_intr_3 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100290 kernel zio_write_intr_4 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100291 kernel zio_write_intr_5 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100292 kernel zio_write_intr_6 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100293 kernel zio_write_intr_7 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100294 kernel zio_write_intr_h mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100295 kernel zio_write_intr_h mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100296 kernel zio_write_intr_h mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100297 kernel zio_write_intr_h mi_switch+0x186 >> 
sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100298 kernel zio_write_intr_h mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100299 kernel zio_free_issue_0 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100300 kernel zio_free_issue_1 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100301 kernel zio_free_issue_2 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100302 kernel zio_free_issue_3 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100303 kernel zio_free_issue_4 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100304 kernel zio_free_issue_5 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100305 kernel zio_free_issue_6 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100306 kernel zio_free_issue_7 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100307 kernel zio_free_issue_8 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100308 kernel zio_free_issue_9 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100309 kernel zio_free_issue_1 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100310 kernel zio_free_issue_1 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100311 kernel zio_free_issue_1 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100312 kernel zio_free_issue_1 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100313 kernel zio_free_issue_1 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100314 kernel zio_free_issue_1 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100315 kernel zio_free_issue_1 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100316 kernel zio_free_issue_1 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100317 kernel zio_free_issue_1 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100318 kernel zio_free_issue_1 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100319 kernel zio_free_issue_2 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100320 kernel zio_free_issue_2 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> 
fork_trampoline+0xe >> 0 100321 kernel zio_free_issue_2 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100322 kernel zio_free_issue_2 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100323 kernel zio_free_issue_2 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100324 kernel zio_free_issue_2 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100325 kernel zio_free_issue_2 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100326 kernel zio_free_issue_2 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100327 kernel zio_free_issue_2 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100328 kernel zio_free_issue_2 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100329 kernel zio_free_issue_3 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100330 kernel zio_free_issue_3 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100331 kernel zio_free_issue_3 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100332 kernel zio_free_issue_3 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100333 kernel zio_free_issue_3 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100334 kernel zio_free_issue_3 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100335 kernel zio_free_issue_3 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100336 kernel zio_free_issue_3 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100337 kernel zio_free_issue_3 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100338 kernel zio_free_issue_3 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100339 kernel zio_free_issue_4 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100340 kernel zio_free_issue_4 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100341 kernel zio_free_issue_4 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100342 kernel zio_free_issue_4 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100343 kernel zio_free_issue_4 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100344 kernel zio_free_issue_4 mi_switch+0x186 >> 
sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100345 kernel zio_free_issue_4 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100346 kernel zio_free_issue_4 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100347 kernel zio_free_issue_4 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100348 kernel zio_free_issue_4 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100349 kernel zio_free_issue_5 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100350 kernel zio_free_issue_5 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100351 kernel zio_free_issue_5 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100352 kernel zio_free_issue_5 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100353 kernel zio_free_issue_5 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100354 kernel zio_free_issue_5 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100355 kernel zio_free_issue_5 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100356 kernel zio_free_issue_5 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100357 kernel zio_free_issue_5 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100358 kernel zio_free_issue_5 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100359 kernel zio_free_issue_6 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100360 kernel zio_free_issue_6 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100361 kernel zio_free_issue_6 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100362 kernel zio_free_issue_6 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100363 kernel zio_free_issue_6 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100364 kernel zio_free_issue_6 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100365 kernel zio_free_issue_6 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100366 kernel zio_free_issue_6 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100367 kernel zio_free_issue_6 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> 
fork_trampoline+0xe >> 0 100368 kernel zio_free_issue_6 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100369 kernel zio_free_issue_7 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100370 kernel zio_free_issue_7 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100371 kernel zio_free_issue_7 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100372 kernel zio_free_issue_7 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100373 kernel zio_free_issue_7 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100374 kernel zio_free_issue_7 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100375 kernel zio_free_issue_7 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100376 kernel zio_free_issue_7 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100377 kernel zio_free_issue_7 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100378 kernel zio_free_issue_7 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100379 kernel zio_free_issue_8 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100380 kernel zio_free_issue_8 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100381 kernel zio_free_issue_8 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100382 kernel zio_free_issue_8 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100383 kernel zio_free_issue_8 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100384 kernel zio_free_issue_8 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100385 kernel zio_free_issue_8 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100386 kernel zio_free_issue_8 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100387 kernel zio_free_issue_8 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100388 kernel zio_free_issue_8 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100389 kernel zio_free_issue_9 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100390 kernel zio_free_issue_9 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100391 kernel zio_free_issue_9 mi_switch+0x186 >> 
sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100392 kernel zio_free_issue_9 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100393 kernel zio_free_issue_9 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100394 kernel zio_free_issue_9 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100395 kernel zio_free_issue_9 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100396 kernel zio_free_issue_9 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100397 kernel zio_free_issue_9 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100398 kernel zio_free_issue_9 mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100399 kernel zio_free_intr mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100400 kernel zio_claim_issue mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100401 kernel zio_claim_intr mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100402 kernel zio_ioctl_issue mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100403 kernel zio_ioctl_intr mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100405 kernel zfs_vn_rele_task mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100418 kernel zil_clean mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100441 kernel zil_clean mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100477 kernel zil_clean mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100478 kernel zil_clean mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100479 kernel zil_clean mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100480 kernel zil_clean mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100481 kernel zil_clean mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100482 kernel zil_clean mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100484 kernel zil_clean mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100486 kernel zil_clean mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100488 kernel zil_clean mi_switch+0x186 >> sleepq_wait+0x42 
_sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100489 kernel zil_clean mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100490 kernel zil_clean mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100491 kernel zil_clean mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100492 kernel zil_clean mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100493 kernel zil_clean mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100494 kernel zil_clean mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100495 kernel zil_clean mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100496 kernel zil_clean mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100497 kernel zil_clean mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100498 kernel zil_clean mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100499 kernel zil_clean mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100636 kernel zil_clean mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 0 100638 kernel zil_clean mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb fork_exit+0x11f >> fork_trampoline+0xe >> 1 100002 init - mi_switch+0x186 >> sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _sleep+0x295 >> kern_wait6+0x8c3 kern_wait+0x9c sys_wait4+0x35 amd64_syscall+0x540 >> Xfast_syscall+0xf7 >> 2 100033 crypto - mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 crypto_proc+0x197 fork_exit+0x11f >> fork_trampoline+0xe >> 3 100034 crypto returns - mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 crypto_ret_proc+0x192 fork_exit+0x11f >> fork_trampoline+0xe >> 4 100061 ctl_thrd - mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 ctl_work_thread+0x2fd0 fork_exit+0x11f >> fork_trampoline+0xe >> 5 100075 zfskern arc_reclaim_thre mi_switch+0x186 >> sleepq_timedwait+0x42 _cv_timedwait+0x135 arc_reclaim_thread+0x29d >> fork_exit+0x11f fork_trampoline+0xe >> 5 100076 zfskern l2arc_feed_threa mi_switch+0x186 >> sleepq_timedwait+0x42 _cv_timedwait+0x135 l2arc_feed_thread+0x1a2 >> fork_exit+0x11f fork_trampoline+0xe >> 5 100404 zfskern trim nsgroot mi_switch+0x186 >> sleepq_timedwait+0x42 _cv_timedwait+0x135 trim_thread+0x68 fork_exit+0x11f >> fork_trampoline+0xe >> 5 100406 zfskern txg_thread_enter mi_switch+0x186 >> sleepq_wait+0x42 _cv_wait+0x112 txg_thread_wait+0x79 >> txg_quiesce_thread+0xbb fork_exit+0x11f fork_trampoline+0xe >> 5 100407 zfskern txg_thread_enter mi_switch+0x186 >> sleepq_wait+0x42 _cv_wait+0x112 rrw_enter_write+0x35 >> dsl_sync_task_sync+0xb8 dsl_pool_sync+0x47d spa_sync+0x3ba >> txg_sync_thread+0x139 fork_exit+0x11f fork_trampoline+0xe >> 5 100408 zfskern zvol nsgroot/swa mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 zvol_geom_worker+0xfc fork_exit+0x11f >> 
fork_trampoline+0xe >> 6 100077 sctp_iterator - mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 sctp_iterator_thread+0x41 fork_exit+0x11f >> fork_trampoline+0xe >> 7 100078 xpt_thrd - mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 xpt_scanner_thread+0xff fork_exit+0x11f >> fork_trampoline+0xe >> 8 100079 ipmi0: kcs - mi_switch+0x186 >> sleepq_wait+0x42 _cv_wait+0x112 ipmi_dequeue_request+0x47 kcs_loop+0x3d >> fork_exit+0x11f fork_trampoline+0xe >> 9 100080 enc_daemon0 - mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 enc_daemon+0xe4 fork_exit+0x11f >> fork_trampoline+0xe >> 10 100001 audit - mi_switch+0x186 >> sleepq_wait+0x42 _cv_wait+0x112 audit_worker+0x359 fork_exit+0x11f >> fork_trampoline+0xe >> 11 100003 idle idle: cpu0 >> 11 100004 idle idle: cpu1 >> 11 100005 idle idle: cpu2 >> 11 100006 idle idle: cpu3 >> 11 100007 idle idle: cpu4 >> 11 100008 idle idle: cpu5 >> 11 100009 idle idle: cpu6 >> 11 100010 idle idle: cpu7 >> 11 100011 idle idle: cpu8 >> 11 100012 idle idle: cpu9 >> 11 100013 idle idle: cpu10 mi_switch+0x186 >> critical_exit+0xa5 sched_idletd+0x118 fork_exit+0x11f fork_trampoline+0xe >> 11 100014 idle idle: cpu11 >> 12 100015 intr swi4: clock mi_switch+0x186 >> ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe >> 12 100016 intr swi4: clock mi_switch+0x186 >> ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe >> 12 100017 intr swi4: clock mi_switch+0x186 >> ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe >> 12 100018 intr swi4: clock mi_switch+0x186 >> ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe >> 12 100019 intr swi4: clock mi_switch+0x186 >> ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe >> 12 100020 intr swi4: clock mi_switch+0x186 >> ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe >> 12 100021 intr swi4: clock mi_switch+0x186 >> ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe >> 12 100022 intr swi4: clock mi_switch+0x186 >> ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe >> 12 100023 intr swi4: clock mi_switch+0x186 >> ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe >> 12 100024 intr swi4: clock mi_switch+0x186 >> ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe >> 12 100025 intr swi4: clock mi_switch+0x186 >> ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe >> 12 100026 intr swi4: clock mi_switch+0x186 >> ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe >> 12 100027 intr swi3: vm >> 12 100028 intr swi1: netisr 0 mi_switch+0x186 >> ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe >> 12 100036 intr swi6: task queue mi_switch+0x186 >> ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe >> 12 100042 intr swi2: cambio mi_switch+0x186 >> ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe >> 12 100043 intr swi5: fast taskq >> 12 100045 intr swi6: Giant task mi_switch+0x186 >> ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe >> 12 100046 intr irq264: mfi0 mi_switch+0x186 >> ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe >> 12 100047 intr irq23: ehci0 mi_switch+0x186 >> ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe >> 12 100054 intr irq22: ehci1 mi_switch+0x186 >> ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe >> 12 100059 intr irq267: ahci0 >> 12 100060 intr swi0: uart uart >> 13 100029 geom g_event mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 g_run_events+0x440 fork_exit+0x11f >> fork_trampoline+0xe >> 13 100030 geom g_up mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 g_io_schedule_up+0xe6 g_up_procbody+0x5c >> fork_exit+0x11f 
fork_trampoline+0xe >> 13 100031 geom g_down mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 g_io_schedule_down+0x25f g_down_procbody+0x5c >> fork_exit+0x11f fork_trampoline+0xe >> 14 100035 yarrow - mi_switch+0x186 >> sleepq_timedwait+0x42 _sleep+0x1c9 random_kthread+0x1ea fork_exit+0x11f >> fork_trampoline+0xe >> 15 100048 usb usbus0 mi_switch+0x186 >> sleepq_wait+0x42 _cv_wait+0x112 usb_process+0x18b fork_exit+0x11f >> fork_trampoline+0xe >> 15 100049 usb usbus0 mi_switch+0x186 >> sleepq_wait+0x42 _cv_wait+0x112 usb_process+0x18b fork_exit+0x11f >> fork_trampoline+0xe >> 15 100050 usb usbus0 mi_switch+0x186 >> sleepq_wait+0x42 _cv_wait+0x112 usb_process+0x18b fork_exit+0x11f >> fork_trampoline+0xe >> 15 100051 usb usbus0 mi_switch+0x186 >> sleepq_wait+0x42 _cv_wait+0x112 usb_process+0x18b fork_exit+0x11f >> fork_trampoline+0xe >> 15 100055 usb usbus1 mi_switch+0x186 >> sleepq_wait+0x42 _cv_wait+0x112 usb_process+0x18b fork_exit+0x11f >> fork_trampoline+0xe >> 15 100056 usb usbus1 mi_switch+0x186 >> sleepq_wait+0x42 _cv_wait+0x112 usb_process+0x18b fork_exit+0x11f >> fork_trampoline+0xe >> 15 100057 usb usbus1 mi_switch+0x186 >> sleepq_wait+0x42 _cv_wait+0x112 usb_process+0x18b fork_exit+0x11f >> fork_trampoline+0xe >> 15 100058 usb usbus1 mi_switch+0x186 >> sleepq_wait+0x42 _cv_wait+0x112 usb_process+0x18b fork_exit+0x11f >> fork_trampoline+0xe >> 16 100081 pagedaemon - mi_switch+0x186 >> sleepq_timedwait+0x42 _sleep+0x1c9 vm_pageout+0xb34 fork_exit+0x11f >> fork_trampoline+0xe >> 17 100082 vmdaemon - mi_switch+0x186 >> sleepq_wait+0x42 _sleep+0x379 vm_daemon+0x58 fork_exit+0x11f >> fork_trampoline+0xe >> 18 100083 pagezero - mi_switch+0x186 >> sleepq_timedwait+0x42 _sleep+0x1c9 vm_pagezero+0x83 fork_exit+0x11f >> fork_trampoline+0xe >> 19 100084 bufdaemon - mi_switch+0x186 >> sleepq_timedwait+0x42 _sleep+0x1c9 buf_daemon+0x1e1 fork_exit+0x11f >> fork_trampoline+0xe >> 20 100085 syncer - mi_switch+0x186 >> sleepq_wait+0x42 _cv_wait+0x112 rrw_enter_read+0x4b zfs_sync+0x5e >> sync_fsync+0x1a2 VOP_FSYNC_APV+0x68 sync_vnode+0x16b sched_sync+0x1c5 >> fork_exit+0x11f fork_trampoline+0xe >> 21 100086 vnlru - mi_switch+0x186 >> sleepq_wait+0x42 _sx_slock_hard+0x318 _sx_slock+0x56 >> zfs_freebsd_reclaim+0x4b VOP_RECLAIM_APV+0x68 vgonel+0x134 vnlru_free+0x362 >> vnlru_proc+0x61e fork_exit+0x11f fork_trampoline+0xe >> 22 100087 softdepflush - mi_switch+0x186 >> sleepq_timedwait+0x42 _sleep+0x1c9 softdep_flush+0x375 fork_exit+0x11f >> fork_trampoline+0xe >> 782 100432 devd - mi_switch+0x186 >> sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 >> _cv_timedwait_sig+0x135 seltdwait+0x8d kern_select+0x6ef sys_select+0x5d >> amd64_syscall+0x540 Xfast_syscall+0xf7 >> 790 100545 pfpurge - mi_switch+0x186 >> sleepq_timedwait+0x42 _sleep+0x1c9 pf_purge_thread+0x31 fork_exit+0x11f >> fork_trampoline+0xe >> 913 100449 syslogd - mi_switch+0x186 >> sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _cv_wait_sig+0x12a >> seltdwait+0xf6 kern_select+0x6ef sys_select+0x5d amd64_syscall+0x540 >> Xfast_syscall+0xf7 >> 1014 100581 ntpd - mi_switch+0x186 >> sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _cv_wait_sig+0x12a >> seltdwait+0xf6 kern_select+0x6ef sys_select+0x5d amd64_syscall+0x540 >> Xfast_syscall+0xf7 >> 1028 100532 sshd - mi_switch+0x186 >> sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _cv_wait_sig+0x12a >> seltdwait+0xf6 kern_select+0x6ef sys_select+0x5d amd64_syscall+0x540 >> Xfast_syscall+0xf7 >> 1032 100473 cron - mi_switch+0x186 >> sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 _sleep+0x2ca 
>> kern_nanosleep+0x118 sys_nanosleep+0x6e amd64_syscall+0x540 >> Xfast_syscall+0xf7 >> 1135 100558 CustomInit - mi_switch+0x186 >> sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _sleep+0x295 >> kern_kevent+0x369 sys_kevent+0x90 amd64_syscall+0x540 Xfast_syscall+0xf7 >> 1145 100509 CustomInit - mi_switch+0x186 >> sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _sleep+0x295 >> kern_kevent+0x369 sys_kevent+0x90 amd64_syscall+0x540 Xfast_syscall+0xf7 >> 1156 100508 CustomInit - mi_switch+0x186 >> sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _sleep+0x295 >> kern_kevent+0x369 sys_kevent+0x90 amd64_syscall+0x540 Xfast_syscall+0xf7 >> 1161 100507 CustomInit - mi_switch+0x186 >> sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _sleep+0x295 >> kern_kevent+0x369 sys_kevent+0x90 amd64_syscall+0x540 Xfast_syscall+0xf7 >> 1163 100521 CustomInit - mi_switch+0x186 >> sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _sleep+0x295 >> kern_kevent+0x369 sys_kevent+0x90 amd64_syscall+0x540 Xfast_syscall+0xf7 >> 1168 100574 CustomInit - mi_switch+0x186 >> sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _sleep+0x295 >> kern_kevent+0x369 sys_kevent+0x90 amd64_syscall+0x540 Xfast_syscall+0xf7 >> 1170 100561 CustomInit - mi_switch+0x186 >> sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _sleep+0x295 >> kern_kevent+0x369 sys_kevent+0x90 amd64_syscall+0x540 Xfast_syscall+0xf7 >> 1174 100560 CustomInit - mi_switch+0x186 >> sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _sleep+0x295 >> kern_kevent+0x369 sys_kevent+0x90 amd64_syscall+0x540 Xfast_syscall+0xf7 >> 1180 100420 openvpn - mi_switch+0x186 >> sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 >> _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 amd64_syscall+0x540 >> Xfast_syscall+0xf7 >> 1181 100463 getty - mi_switch+0x186 >> sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _cv_wait_sig+0x12a >> tty_wait+0x25 ttydisc_read+0x2dd ttydev_read+0xc4 devfs_read_f+0x90 >> dofileread+0xa1 kern_readv+0x6c sys_read+0x64 amd64_syscall+0x540 >> Xfast_syscall+0xf7 >> 1182 100430 getty - mi_switch+0x186 >> sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _cv_wait_sig+0x12a >> tty_wait+0x25 ttydisc_read+0x2dd ttydev_read+0xc4 devfs_read_f+0x90 >> dofileread+0xa1 kern_readv+0x6c sys_read+0x64 amd64_syscall+0x540 >> Xfast_syscall+0xf7 >> 1183 100421 CustomInit - mi_switch+0x186 >> sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _sleep+0x295 >> kern_kevent+0x369 sys_kevent+0x90 amd64_syscall+0x540 Xfast_syscall+0xf7 >> 1443 100599 openvpn - mi_switch+0x186 >> sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 >> _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 amd64_syscall+0x540 >> Xfast_syscall+0xf7 >> 1444 100527 snapshot-manager - mi_switch+0x186 >> sleepq_wait+0x42 _cv_wait+0x112 txg_wait_synced+0x85 dsl_sync_task+0x139 >> dsl_destroy_snapshots_nvl+0x71 dsl_destroy_snapshot+0x4a >> zfs_ioc_destroy+0x3e zfsdev_ioctl+0x58d devfs_ioctl_f+0x7b kern_ioctl+0x106 >> sys_ioctl+0xfd amd64_syscall+0x540 Xfast_syscall+0xf7 >> 1445 100514 openvpn - mi_switch+0x186 >> sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 >> _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 amd64_syscall+0x540 >> Xfast_syscall+0xf7 >> 1447 100557 openvpn - mi_switch+0x186 >> sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 >> _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 amd64_syscall+0x540 >> Xfast_syscall+0xf7 >> 1478 100555 openvpn - mi_switch+0x186 >> sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 >> _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 
amd64_syscall+0x540 >> Xfast_syscall+0xf7 >> 1479 100551 openvpn - mi_switch+0x186 >> sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 >> _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 amd64_syscall+0x540 >> Xfast_syscall+0xf7 >> 1486 100576 openvpn - mi_switch+0x186 >> sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 >> _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 amd64_syscall+0x540 >> Xfast_syscall+0xf7 >> 1488 100563 openvpn - mi_switch+0x186 >> sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 >> _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 amd64_syscall+0x540 >> Xfast_syscall+0xf7 >> 1494 100575 openvpn - mi_switch+0x186 >> sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 >> _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 amd64_syscall+0x540 >> Xfast_syscall+0xf7 >> 1495 100442 openvpn - mi_switch+0x186 >> sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 >> _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 amd64_syscall+0x540 >> Xfast_syscall+0xf7 >> 1496 100556 openvpn - mi_switch+0x186 >> sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 >> _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 amd64_syscall+0x540 >> Xfast_syscall+0xf7 >> 1505 100431 openvpn - mi_switch+0x186 >> sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 >> _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 amd64_syscall+0x540 >> Xfast_syscall+0xf7 >> 1511 100475 openvpn - mi_switch+0x186 >> sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 >> _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 amd64_syscall+0x540 >> Xfast_syscall+0xf7 >> 1531 100429 snapshot-manager - mi_switch+0x186 >> sleepq_wait+0x42 _cv_wait+0x112 txg_wait_synced+0x85 dsl_sync_task+0x139 >> zfs_ioc_rollback+0xb8 zfsdev_ioctl+0x58d devfs_ioctl_f+0x7b >> kern_ioctl+0x106 sys_ioctl+0xfd amd64_syscall+0x540 Xfast_syscall+0xf7 >> 1535 100434 openvpn - mi_switch+0x186 >> sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 >> _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 amd64_syscall+0x540 >> Xfast_syscall+0xf7 >> 1545 100459 openvpn - mi_switch+0x186 >> sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 >> _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 amd64_syscall+0x540 >> Xfast_syscall+0xf7 >> 1567 100539 openvpn - mi_switch+0x186 >> sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 >> _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 amd64_syscall+0x540 >> Xfast_syscall+0xf7 >> 1628 100454 snapshot-manager - mi_switch+0x186 >> sleepq_wait+0x42 _cv_wait+0x112 rrw_enter_read+0x4b dsl_pool_hold+0x44 >> dmu_objset_hold+0x2a zfs_ioc_objset_stats+0x23 zfsdev_ioctl+0x58d >> devfs_ioctl_f+0x7b kern_ioctl+0x106 sys_ioctl+0xfd amd64_syscall+0x540 >> Xfast_syscall+0xf7 >> 2097 100710 java - mi_switch+0x186 >> thread_suspend_switch+0xcd thread_single+0x1b2 exit1+0x72 sys_sys_exit+0xe >> amd64_syscall+0x540 Xfast_syscall+0xf7 >> 2097 100792 java - mi_switch+0x186 >> sleepq_wait+0x42 __lockmgr_args+0x5cb vop_stdlock+0x39 VOP_LOCK1_APV+0x70 >> _vn_lock+0x47 zfsctl_freebsd_root_lookup+0xd5 VOP_LOOKUP_APV+0x62 >> lookup+0x437 namei+0x4ac kern_statat_vnhook+0xb3 kern_statat+0x15 >> sys_stat+0x2a amd64_syscall+0x540 Xfast_syscall+0xf7 >> 2102 100635 data-processor dm_siri_to_chiro mi_switch+0x186 >> sleepq_wait+0x42 _sx_slock_hard+0x318 _sx_slock+0x56 >> zfs_freebsd_reclaim+0x4b VOP_RECLAIM_APV+0x68 vgonel+0x134 vnlru_free+0x362 >> getnewvnode+0x27d gfs_file_create+0x4b gfs_dir_create+0x16 >> zfsctl_snapdir_lookup+0x474 VOP_LOOKUP_APV+0x62 lookup+0x437 namei+0x4ac 
>> kern_statat_vnhook+0xb3 kern_statat+0x15 sys_stat+0x2a >> 75715 100424 snapshot-manager-counts mi_switch+0x186 sleepq_wait+0x42 >> _cv_wait+0x112 rrw_enter_read+0x4b dsl_pool_hold+0x44 dmu_objset_hold+0x2a >> zfs_ioc_objset_stats+0x23 zfsdev_ioctl+0x58d devfs_ioctl_f+0x7b >> kern_ioctl+0x106 sys_ioctl+0xfd amd64_syscall+0x540 Xfast_syscall+0xf7 >> 75718 100861 snapshot-manager-counts mi_switch+0x186 sleepq_wait+0x42 >> _cv_wait+0x112 rrw_enter_read+0x4b dsl_pool_hold+0x44 dmu_objset_hold+0x2a >> zfs_ioc_objset_stats+0x23 zfsdev_ioctl+0x58d devfs_ioctl_f+0x7b >> kern_ioctl+0x106 sys_ioctl+0xfd amd64_syscall+0x540 Xfast_syscall+0xf7 >> 75719 100578 snapshot-manager-counts mi_switch+0x186 sleepq_wait+0x42 >> _cv_wait+0x112 rrw_enter_read+0x4b dsl_pool_hold+0x44 dmu_objset_hold+0x2a >> zfs_ioc_objset_stats+0x23 zfsdev_ioctl+0x58d devfs_ioctl_f+0x7b >> kern_ioctl+0x106 sys_ioctl+0xfd amd64_syscall+0x540 Xfast_syscall+0xf7 >> 76069 101690 reboot - mi_switch+0x186 >> sleepq_wait+0x42 _cv_wait+0x112 rrw_enter_read+0x4b zfs_sync+0x5e >> sys_sync+0x1e8 amd64_syscall+0x540 Xfast_syscall+0xf7 >> 76196 100847 sshd - mi_switch+0x186 >> sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _cv_wait_sig+0x12a >> seltdwait+0xf6 kern_select+0x6ef sys_select+0x5d amd64_syscall+0x540 >> Xfast_syscall+0xf7 >> 76198 101378 csh - mi_switch+0x186 >> sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _sleep+0x295 >> kern_sigsuspend+0xab sys_sigsuspend+0x34 amd64_syscall+0x540 >> Xfast_syscall+0xf7 >> 76200 100892 top - mi_switch+0x186 >> sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _cv_wait_sig+0x12a >> tty_wait+0x25 ttydisc_read+0x2dd ttydev_read+0xc4 devfs_read_f+0x90 >> dofileread+0xa1 kern_readv+0x6c sys_read+0x64 amd64_syscall+0x540 >> Xfast_syscall+0xf7 >> 76203 100447 sshd - >> 76205 101410 csh - mi_switch+0x186 >> sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _sleep+0x295 >> kern_sigsuspend+0xab sys_sigsuspend+0x34 amd64_syscall+0x540 >> Xfast_syscall+0xf7 >> 76209 101030 procstat - >> >> >> _______________________________________________ >> freebsd-fs@freebsd.org mailing list >> http://lists.freebsd.org/mailman/listinfo/freebsd-fs >> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" >> > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" >
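(For anyone who wants to try the fix Steve references: a minimal sketch of pulling changeset r258595 into a source tree. It assumes /usr/src is a Subversion checkout of the FreeBSD sources and a GENERIC kernel config; the diff here is taken against head, so it may need hand-adjustment on a 9.2 tree.

# Export the change as a plain diff and apply it to the local tree.
cd /usr/src
svn diff -c 258595 svn://svn.freebsd.org/base/head > /tmp/r258595.diff
patch -p0 < /tmp/r258595.diff

# Rebuild and install the kernel, then boot into it.
make buildkernel KERNCONF=GENERIC
make installkernel KERNCONF=GENERIC
shutdown -r now
)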
From owner-freebsd-fs@FreeBSD.ORG Sat Dec 14 00:57:13 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 6BDCE711 for ; Sat, 14 Dec 2013 00:57:12 +0000 (UTC) Received: from esa-jnhn.mail.uoguelph.ca (esa-jnhn.mail.uoguelph.ca [131.104.91.44]) by mx1.freebsd.org (Postfix) with ESMTP id 2A66C10F5 for ; Sat, 14 Dec 2013 00:57:11 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: X-IronPort-AV: E=Sophos;i="4.95,482,1384318800"; d="scan'208";a="79459242" Received: from muskoka.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.222]) by esa-jnhn.mail.uoguelph.ca with ESMTP; 13 Dec 2013 19:57:05 -0500 Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id 4C2FBB4039; Fri, 13 Dec 2013 19:57:05 -0500 (EST) Date: Fri, 13 Dec 2013 19:57:05 -0500 (EST) From: Rick Macklem To: FreeBSD Filesystems Message-ID: <1972597003.30638311.1386982625256.JavaMail.root@uoguelph.ca> Subject: Does zfs_vget() work for snapshot files? MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [172.17.91.203] X-Mailer: Zimbra 7.2.1_GA_2790 (ZimbraWebClient - FF3.0 (Win)/7.2.1_GA_2790) Cc: Steve Dickson X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 14 Dec 2013 00:57:13 -0000 Jason sent me a packet capture for an NFSv4 readdir of a .zfs/snapshot directory, and the attributes for the snapshot entries in the directory look like they may be bogus. They have a file type==VREG and the same fsid as .zfs/snapshot. (If the client does a lookup of the snapshot, its attributes come back as VDIR, fileno==4 and a different fsid than .zfs/snapshot.) There is code in zfs_vget() that makes readdir switch to using VOP_LOOKUP() when the entry in the directory is a snapshot directory, but this doesn't appear to happen for the snapshots in a snapshot directory. (I am wondering if VFS_VGET() somehow gets a bogus vnode?) Also, does anyone know if the snapshots appear to be separate file systems with v_mountedhere set? (They do appear to have different fsid values, which implies a separate file system.) If not, I may be able to do a "workaround" by comparing the fsid of the directory being read with the fsid of the entry, instead of depending on v_mountedhere being non-NULL to indicate a different file system.
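(A userland way to observe the differing fsid values described above — a sketch only, with a hypothetical pool "tank" and snapshot "snap1":

# st_dev (%d) is the fsid as seen through stat(2); a value for the
# snapshot that differs from .zfs/snapshot itself suggests the
# snapshot is behaving as a separate file system.
stat -f "%d %N" /tank/.zfs/snapshot
stat -f "%d %N" /tank/.zfs/snapshot/snap1
stat -f "%d %N" /tank/.zfs/snapshot/snap1/somefile

Whether v_mountedhere is set is not visible from userland, though; that part has to be checked in the kernel.)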
Thanks in advance for any help with this, rick From owner-freebsd-fs@FreeBSD.ORG Sat Dec 14 03:14:50 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id A5312A04 for ; Sat, 14 Dec 2013 03:14:50 +0000 (UTC) Received: from mail-wi0-x229.google.com (mail-wi0-x229.google.com [IPv6:2a00:1450:400c:c05::229]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 2B8D31AA9 for ; Sat, 14 Dec 2013 03:14:50 +0000 (UTC) Received: by mail-wi0-f169.google.com with SMTP id hn6so88334wib.0 for ; Fri, 13 Dec 2013 19:14:48 -0800 (PST) X-Received: by 10.194.60.103 with SMTP id g7mr4630130wjr.37.1386990888646; Fri, 13 Dec 2013 19:14:48 -0800 (PST) Received: from mail-wg0-x22d.google.com (mail-wg0-x22d.google.com [2a00:1450:400c:c00::22d]) by mx.google.com with ESMTPSA id mz10sm3278492wic.2.2013.12.13.19.14.47 for (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Fri, 13 Dec 2013 19:14:47 -0800 (PST) Received: by mail-wg0-f45.google.com with SMTP id y10so2674244wgg.24 for ; Fri, 13 Dec 2013 19:14:46 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=zXF8Ju6QYfv49km0qsOKsSZMvfFw/ZyLKzmpFFRK5rs=; b=jfC2yK25fh94FSChNRfa3MLh/l0ZsqxF3s1fwCd9FcXGR9A0WwvE3uKgA4kEw7tU/m IObP4KrzYYfoRk0vJjkVILjyi7tYVLCKUpuq5z5AERQqE9szq+EeCHiu4IDkK+KGYpBo jEkwgfxZH1LaEm6iqqc0Q+6wOMJcIEAgOICva4kCrMyeGrQsbz7O9YX6EeieBXzsxFf7 bDnv5YT5HYU5YSKRROtTEDZ7Pq/f437KJHLJkjsFYiaCsF3Jg/veL6FkNzNFpEr7BN+X 2gqGnagB3TwuZ7uCDOmd5gLktCacBvvzXoLJVwed3nXcmMkP/CKqJI5JA3j/Tdm+Vma+ ENgQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:mime-version:in-reply-to:references:date :message-id:subject:from:to:cc:content-type; bh=zXF8Ju6QYfv49km0qsOKsSZMvfFw/ZyLKzmpFFRK5rs=; b=DgO1mm3eJNODoC+a+z0xxfv8khrITIGVP9kQZ7O6t7eqdomUGcH5bydIyW6wuc+yeC xtfEabQuJBNlddH9n3tkibP6ZbuIPlRmi5O4ZghzUBksAWMPQxMYMn0zxy0Pa1wcv8oM JwOqbKIomIREEH6LCjYJ+TcgVmnyJh2QIYfubnbNoNpegzCA5Yt52LO8S6gwj6esU0LA ldsqpUV/Uc9PDPY9RhlUnnNCOQPVmsGIXdvOCDAmpTddNsR2IQaKiAJWUKf612UA7ZVg uBygz9+zlOm80J+XMLIOHnDwgiwQhD9HtbDmGQE+WF5ZkJZ+WDGgB74xrNZGswAdCwHm RzvQ== X-Gm-Message-State: ALoCoQnXeIKRP+ro7TYEDDG6Wy4Nf1IPBu9nAhRZTbLbis15v/AEex+a+xdBaEH1nafuZ1tokZxT MIME-Version: 1.0 X-Received: by 10.194.84.72 with SMTP id w8mr4612881wjy.55.1386990886918; Fri, 13 Dec 2013 19:14:46 -0800 (PST) Received: by 10.194.166.100 with HTTP; Fri, 13 Dec 2013 19:14:46 -0800 (PST) In-Reply-To: References: <52AA97B8.8060408@nexusalpha.com> Date: Fri, 13 Dec 2013 22:14:46 -0500 Message-ID: Subject: Re: ZFS related hang with FreeBSD 9.2 From: Rod Taylor To: Steven Hartland Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.17 Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 14 Dec 2013 03:14:50 -0000 On Fri, Dec 13, 2013 at 7:55 PM, Steven Hartland wrote: > Are you doing any snapshot sends as well as interacting with > snapshots such as listing files in them via the .zfs? 
>
I didn't take much time to debug it, as the snapshots created by zfSnap were for local backups only. Off-site backups are a simple pg_dump. Snapshots were unmounted and untouched by me, nor do they show up in zfs list by default. They did not get copied/imported to any other machines. No clones were in use.

With zfSnap periodics enabled with the following configuration, the machine spontaneously reboots about once a week. 9.0 was much worse: I could push it over with simple heavy IO, such as a query performing a sequential table scan in PostgreSQL on a 40GB table. 9.2 only seemed to go down during periodic runs; I've not been able to push it over at any other time. Anyway, /var/crash remains empty after a reboot (dumpdev="AUTO").

I *think* the problem is related to creating snapshots during high load, though the problem is significantly reduced if I disable deletes. I've been unable to manually trigger a crash on 9.2 using zfSnap commands, but crashes still occur with regularity during periodics and spontaneously during the day. The pool was ZFS v28 under 9.0/9.1, with all feature flags enabled under 9.2. Nothing is logged.

Relevant snippet from periodic.conf:

# Filesystem snapshots
daily_zfsnap_enable="YES"
daily_zfsnap_recursive_fs="tank0"
daily_zfsnap_flags="-s -S"
daily_zfsnap_ttl=2m
monthly_zfsnap_enable="YES"
monthly_zfsnap_recursive_fs="tank0"
monthly_zfsnap_flags="-s -S"
monthly_zfsnap_ttl=6m
reboot_zfsnap_enable="YES"
reboot_zfsnap_flags="-s -S"
reboot_zfsnap_recursive_fs="tank0"
weekly_zfsnap_delete_enable="YES"
weekly_zfsnap_delete_flags="-s -S"
weekly_zfsnap_recursive_fs="tank0"

> If so make sure you have the following patch applied as
> that can cause a deadlock between these two operations
> http://svnweb.freebsd.org/changeset/base/258595
>

I have not tried this patch but can try it over the holidays.

----- Original Message -----
From: "Rod Taylor"
> To: "Ryan Baldwin"
> Cc:
> Sent: Friday, December 13, 2013 11:21 PM
> Subject: Re: ZFS related hang with FreeBSD 9.2
>
>
> > Are you using snapshots?
>>
>> I've found ZFS Snapshots on 9.0, 9.1, and 9.2 regularly crash the system.
>> Delete the snapshots and don't create any new ones and suddenly it's
>> stable
>> for months.
>>
>>
>>
>> On Fri, Dec 13, 2013 at 12:14 AM, Ryan Baldwin
>> wrote:
>>
>> Hi,
>>>
>>> We have a server based on FreeBSD 9.2 which hangs at times on a daily
>>> basis. The longest uptime we have achieved is 5 days conversely it has
>>> stopped daily several days in a row.
>>>
>>> When this occurs it appears there are two proceses stuck in 'tx->tx'
>>> state. In the top output shown these are snapshot-manager processes which
>>> create and destroy snapshots generally and sometime rollback filesystems
>>> to
>>> snapshots. When the lockup occurs other processes which try to access the
>>> file system can seem to end up stuck in state 'rrl->r'. The reboot
>>> command
>>> that was issued to try and reboot the server has ended up stuck in this
>>> state as can be seen.
>>>
>>> The server is not under particularly heavy load.
>>>
>>> It has remained in this state for hours. The 'deadman handler'? does not
>>> appear to restart the system. Once this has occurred there is no further
>>> disk activity.
>>>
>>> We did not experience this problem at all previously using 9.1 although
>>> we
>>> had less snapshot-manager processes before. We have built this server
>>> against 9.1 again now but it has only been going one day so far.
>>>
>>> We can try and reproduce this problem again on 9.2 if by doing so we can
>>> gather any additional information that could help resolve this problem.
>>> Please let me know what other information would be helpful.
>>>
>>> The hardware is a Dell R420 with Perc H310 raid controller in JBOD mode
>>> with the pool mirrored on two SAS disks.
>>>
>>> Thanks
>>>
>>> top and procstat output follow: ...
>>>
>>

From owner-freebsd-fs@FreeBSD.ORG Sat Dec 14 07:13:26 2013
Return-Path:
Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id EEEDCB14 for ; Sat, 14 Dec 2013 07:13:26 +0000 (UTC)
Received: from smtp.nexusalpha.com (smtp.nexusalpha.com [213.48.13.50]) by mx1.freebsd.org (Postfix) with ESMTP id CDB401E1C for ; Sat, 14 Dec 2013 07:13:25 +0000 (UTC)
Received: from [192.168.6.154] ([192.168.1.243]) by smtp.nexusalpha.com with Microsoft SMTPSVC(6.0.3790.3959); Sat, 14 Dec 2013 07:13:18 +0000
Message-ID: <52AC0508.5060704@nexusalpha.com>
Date: Sat, 14 Dec 2013 07:13:12 +0000
From: Ryan Baldwin
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20131029 Thunderbird/17.0.9
MIME-Version: 1.0
To: Rod Taylor
Subject: Re: ZFS related hang with FreeBSD 9.2
References: <52AA97B8.8060408@nexusalpha.com>
In-Reply-To:
X-OriginalArrivalTime: 14 Dec 2013 07:13:18.0077 (UTC) FILETIME=[F692FAD0:01CEF89B]
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
X-Content-Filtered-By: Mailman/MimeDel 2.1.17
Cc: freebsd-fs@freebsd.org
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.17
Precedence: list
List-Id: Filesystems
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Sat, 14 Dec 2013 07:13:27 -0000

Yes, we are using snapshots. The snapshot-manager processes are continually creating and destroying snapshots at a rate of tens of thousands a day. We have had such a system running on FreeBSD 9.1 with just one snapshot-manager process for several months on end, and we are not aware of any problems with that system. I currently run another type of system on FreeBSD 9.1 as a storage server; it serves terabytes a day and uses snapshots and clones extensively, and I have not had any problems with it. This is the first real problem I have come across.

So far the 9.1-based version of the problem server is still going, although it is still early days.

On 12/13/13 23:21, Rod Taylor wrote:
> Are you using snapshots?
>
> I've found ZFS Snapshots on 9.0, 9.1, and 9.2 regularly crash the
> system. Delete the snapshots and don't create any new ones and
> suddenly it's stable for months.
>
>
>
> On Fri, Dec 13, 2013 at 12:14 AM, Ryan Baldwin
> > wrote:
>
> Hi,
>
> We have a server based on FreeBSD 9.2 which hangs at times on a
> daily basis. The longest uptime we have achieved is 5 days
> conversely it has stopped daily several days in a row.
>
> When this occurs it appears there are two proceses stuck in
> 'tx->tx' state. In the top output shown these are snapshot-manager
> processes which create and destroy snapshots generally and
> sometime rollback filesystems to snapshots. When the lockup occurs
> other processes which try to access the file system can seem to
> end up stuck in state 'rrl->r'. The reboot command that was issued
> to try and reboot the server has ended up stuck in this state as
> can be seen.
> > The server is not under particularly heavy load. > > It has remained in this state for hours. The 'deadman handler'? > does not appear to restart the system. Once this has occurred > there is no further disk activity. > > We did not experience this problem at all previously using 9.1 > although we had less snapshot-manager processes before. We have > built this server against 9.1 again now but it has only been going > one day so far. > > We can try and reproduce this problem again on 9.2 if by doing so > we can gather any additional information that could help resolve > this problem. Please let me know what other information would be > helpful. > > The hardware is a Dell R420 with Perc H310 raid controller in JBOD > mode with the pool mirrored on two SAS disks. > > Thanks > > top and procstat output follow: > > > last pid: 76225; load averages: 0.00, 0.00, 0.00 up 0+20:04:22 > 09:56:24 > 46 processes: 1 running, 44 sleeping, 1 stopped > CPU: % user, % nice, % system, % interrupt, % idle > Mem: 405M Active, 133M Inact, 1206M Wired, 6084M Free > ARC: 797M Total, 184M MFU, 488M MRU, 28M Anon, 32M Header, 65M Other > Swap: 8192M Total, 8192M Free > > PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU > COMMAND > 1170 root 1 20 0 16244K 5460K kqread 11 1:28 0.00% > CustomInit > 1531 root 1 20 0 39648K 6368K tx->tx 7 1:17 0.00% > snapshot-manager > 1135 root 1 20 0 16244K 5100K kqread 1 1:10 0.00% > CustomInit > 2097 root 2 20 0 2581M 472M STOP 4 0:46 0.00% > java > 1183 root 1 20 0 16244K 5504K kqread 0 0:22 0.00% > CustomInit > 1145 root 1 20 0 16244K 5464K kqread 9 0:18 0.00% > CustomInit > 1444 root 1 20 0 39648K 6352K tx->tx 5 0:13 0.00% > snapshot-manager > 1628 root 1 20 0 39648K 6348K rrl->r 10 0:12 0.00% > snapshot-manager > 1168 root 1 20 0 16244K 5388K kqread 7 0:07 0.00% > CustomInit > 1535 root 1 20 0 22388K 14296K select 4 0:05 0.00% > openvpn > 1163 root 1 20 0 16244K 5388K kqread 2 0:04 0.00% > CustomInit > 1511 root 1 20 0 18292K 10192K select 0 0:04 0.00% > openvpn > 1156 root 1 20 0 16244K 5392K kqread 10 0:04 0.00% > CustomInit > 1174 root 1 20 0 16244K 5388K kqread 1 0:04 0.00% > CustomInit > 1161 root 1 20 0 16244K 5392K kqread 4 0:03 0.00% > CustomInit > 913 root 1 20 0 12076K 1820K select 7 0:03 0.00% > syslogd > 2102 root 1 20 0 109M 13616K zfsvfs 3 0:03 0.00% > data-processor > 1445 root 1 20 0 22388K 14296K select 5 0:02 0.00% > openvpn > 1014 root 1 20 0 22256K 3360K select 4 0:02 0.00% > ntpd > 1494 root 1 20 0 18292K 10192K select 2 0:01 0.00% > openvpn > 1180 root 1 20 0 18292K 4780K select 0 0:01 0.00% > openvpn > 1505 root 1 20 0 18292K 4712K select 2 0:01 0.00% > openvpn > 1495 root 1 20 0 18292K 4712K select 5 0:00 0.00% > openvpn > 1479 root 1 20 0 18292K 4708K select 1 0:00 0.00% > openvpn > 1567 root 1 20 0 18292K 4712K select 3 0:00 0.00% > openvpn > 1545 root 1 20 0 18292K 4712K select 1 0:00 0.00% > openvpn > 1486 root 1 20 0 18292K 4708K select 0 0:00 0.00% > openvpn > 1443 root 1 20 0 18292K 4436K select 9 0:00 0.00% > openvpn > 1447 root 1 20 0 18292K 4448K select 4 0:00 0.00% > openvpn > 1496 root 1 20 0 18292K 4708K select 3 0:00 0.00% > openvpn > 1478 root 1 20 0 18292K 4704K select 5 0:00 0.00% > openvpn > 1488 root 1 20 0 18292K 10192K select 5 0:00 0.00% > openvpn > 1032 root 1 20 0 14176K 1860K nanslp 1 0:00 0.00% > cron > 76069 root 1 20 0 9948K 1680K rrl->r 7 0:00 0.00% > reboot > 782 root 1 20 0 10376K 4416K select 5 0:00 0.00% > devd > 76196 root 1 20 0 51536K 5828K select 0 0:00 0.00% > sshd > 76203 root 1 20 0 51536K 5828K select 2 
0:00 0.00% > sshd > 76205 root 1 20 0 17564K 3252K ttyin 11 0:00 0.00% csh > 76198 root 1 20 0 17564K 3252K pause 8 0:00 0.00% csh > 75718 root 1 20 0 35556K 3956K rrl->r 11 0:00 0.00% > snapshot-manager-counts > 75715 root 1 20 0 35556K 3956K rrl->r 10 0:00 0.00% > snapshot-manager-counts > 1181 root 1 52 0 12084K 1676K ttyin 1 0:00 0.00% > getty > > > 0 100000 kernel swapper mi_switch+0x186 > sleepq_timedwait+0x42 _sleep+0x1c9 scheduler+0x359 mi_startup+0x77 > btext+0x2c > 0 100032 kernel firmware taskq mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100037 kernel ffs_trim taskq mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100038 kernel acpi_task_0 mi_switch+0x186 > sleepq_wait+0x42 msleep_spin+0x194 taskqueue_thread_loop+0x67 > fork_exit+0x11f fork_trampoline+0xe > 0 100039 kernel acpi_task_1 mi_switch+0x186 > sleepq_wait+0x42 msleep_spin+0x194 taskqueue_thread_loop+0x67 > fork_exit+0x11f fork_trampoline+0xe > 0 100040 kernel acpi_task_2 mi_switch+0x186 > sleepq_wait+0x42 msleep_spin+0x194 taskqueue_thread_loop+0x67 > fork_exit+0x11f fork_trampoline+0xe > 0 100041 kernel kqueue taskq mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100044 kernel thread taskq mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100052 kernel bge0 taskq mi_switch+0x186 > sleepq_wait+0x42 msleep_spin+0x194 taskqueue_thread_loop+0x67 > fork_exit+0x11f fork_trampoline+0xe > 0 100053 kernel bge1 taskq mi_switch+0x186 > sleepq_wait+0x42 msleep_spin+0x194 taskqueue_thread_loop+0x67 > fork_exit+0x11f fork_trampoline+0xe > 0 100062 kernel mca taskq mi_switch+0x186 > sleepq_wait+0x42 msleep_spin+0x194 taskqueue_thread_loop+0x67 > fork_exit+0x11f fork_trampoline+0xe > 0 100063 kernel system_taskq_0 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100064 kernel system_taskq_1 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100065 kernel system_taskq_2 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100066 kernel system_taskq_3 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100067 kernel system_taskq_4 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100068 kernel system_taskq_5 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100069 kernel system_taskq_6 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100070 kernel system_taskq_7 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100071 kernel system_taskq_8 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100072 kernel system_taskq_9 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100073 kernel system_taskq_10 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 
100074 kernel system_taskq_11 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100247 kernel zio_null_issue mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100248 kernel zio_null_intr mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100249 kernel zio_read_issue_0 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100250 kernel zio_read_issue_1 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100251 kernel zio_read_issue_2 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100252 kernel zio_read_issue_3 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100253 kernel zio_read_issue_4 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100254 kernel zio_read_issue_5 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100255 kernel zio_read_issue_6 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100256 kernel zio_read_issue_7 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100257 kernel zio_read_intr_0 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100258 kernel zio_read_intr_1 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100259 kernel zio_read_intr_2 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100260 kernel zio_read_intr_3 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100261 kernel zio_read_intr_4 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100262 kernel zio_read_intr_5 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100263 kernel zio_read_intr_6 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100264 kernel zio_read_intr_7 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100265 kernel zio_read_intr_8 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100266 kernel zio_read_intr_9 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100267 kernel zio_read_intr_10 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100268 kernel zio_read_intr_11 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100269 kernel zio_write_issue_ mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100270 kernel 
zio_write_issue_ mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100271 kernel zio_write_issue_ mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100272 kernel zio_write_issue_ mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100273 kernel zio_write_issue_ mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100274 kernel zio_write_issue_ mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100275 kernel zio_write_issue_ mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100276 kernel zio_write_issue_ mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100277 kernel zio_write_issue_ mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100278 kernel zio_write_issue_ mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100279 kernel zio_write_issue_ mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100280 kernel zio_write_issue_ mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100281 kernel zio_write_issue_ mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100282 kernel zio_write_issue_ mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100283 kernel zio_write_issue_ mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100284 kernel zio_write_issue_ mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100285 kernel zio_write_issue_ mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100286 kernel zio_write_intr_0 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100287 kernel zio_write_intr_1 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100288 kernel zio_write_intr_2 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100289 kernel zio_write_intr_3 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100290 kernel zio_write_intr_4 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100291 kernel zio_write_intr_5 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100292 kernel zio_write_intr_6 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100293 kernel zio_write_intr_7 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100294 kernel 
zio_write_intr_h mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100295 kernel zio_write_intr_h mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100296 kernel zio_write_intr_h mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100297 kernel zio_write_intr_h mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100298 kernel zio_write_intr_h mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100299 kernel zio_free_issue_0 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100300 kernel zio_free_issue_1 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100301 kernel zio_free_issue_2 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100302 kernel zio_free_issue_3 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100303 kernel zio_free_issue_4 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100304 kernel zio_free_issue_5 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100305 kernel zio_free_issue_6 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100306 kernel zio_free_issue_7 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100307 kernel zio_free_issue_8 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100308 kernel zio_free_issue_9 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100309 kernel zio_free_issue_1 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100310 kernel zio_free_issue_1 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100311 kernel zio_free_issue_1 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100312 kernel zio_free_issue_1 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100313 kernel zio_free_issue_1 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100314 kernel zio_free_issue_1 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100315 kernel zio_free_issue_1 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100316 kernel zio_free_issue_1 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100317 kernel zio_free_issue_1 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100318 kernel 
zio_free_issue_1 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100319 kernel zio_free_issue_2 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100320 kernel zio_free_issue_2 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100321 kernel zio_free_issue_2 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100322 kernel zio_free_issue_2 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100323 kernel zio_free_issue_2 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100324 kernel zio_free_issue_2 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100325 kernel zio_free_issue_2 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100326 kernel zio_free_issue_2 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100327 kernel zio_free_issue_2 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100328 kernel zio_free_issue_2 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100329 kernel zio_free_issue_3 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100330 kernel zio_free_issue_3 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100331 kernel zio_free_issue_3 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100332 kernel zio_free_issue_3 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100333 kernel zio_free_issue_3 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100334 kernel zio_free_issue_3 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100335 kernel zio_free_issue_3 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100336 kernel zio_free_issue_3 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100337 kernel zio_free_issue_3 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100338 kernel zio_free_issue_3 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100339 kernel zio_free_issue_4 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100340 kernel zio_free_issue_4 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100341 kernel zio_free_issue_4 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100342 kernel 
zio_free_issue_4 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100343 kernel zio_free_issue_4 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100344 kernel zio_free_issue_4 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100345 kernel zio_free_issue_4 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100346 kernel zio_free_issue_4 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100347 kernel zio_free_issue_4 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100348 kernel zio_free_issue_4 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100349 kernel zio_free_issue_5 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100350 kernel zio_free_issue_5 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100351 kernel zio_free_issue_5 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100352 kernel zio_free_issue_5 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100353 kernel zio_free_issue_5 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100354 kernel zio_free_issue_5 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100355 kernel zio_free_issue_5 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100356 kernel zio_free_issue_5 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100357 kernel zio_free_issue_5 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100358 kernel zio_free_issue_5 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100359 kernel zio_free_issue_6 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100360 kernel zio_free_issue_6 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100361 kernel zio_free_issue_6 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100362 kernel zio_free_issue_6 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100363 kernel zio_free_issue_6 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100364 kernel zio_free_issue_6 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100365 kernel zio_free_issue_6 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100366 kernel 
zio_free_issue_6 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100367 kernel zio_free_issue_6 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100368 kernel zio_free_issue_6 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100369 kernel zio_free_issue_7 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100370 kernel zio_free_issue_7 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100371 kernel zio_free_issue_7 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100372 kernel zio_free_issue_7 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100373 kernel zio_free_issue_7 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100374 kernel zio_free_issue_7 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100375 kernel zio_free_issue_7 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100376 kernel zio_free_issue_7 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100377 kernel zio_free_issue_7 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100378 kernel zio_free_issue_7 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100379 kernel zio_free_issue_8 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100380 kernel zio_free_issue_8 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100381 kernel zio_free_issue_8 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100382 kernel zio_free_issue_8 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100383 kernel zio_free_issue_8 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100384 kernel zio_free_issue_8 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100385 kernel zio_free_issue_8 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100386 kernel zio_free_issue_8 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100387 kernel zio_free_issue_8 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100388 kernel zio_free_issue_8 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100389 kernel zio_free_issue_9 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100390 kernel 
zio_free_issue_9 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100391 kernel zio_free_issue_9 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100392 kernel zio_free_issue_9 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100393 kernel zio_free_issue_9 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100394 kernel zio_free_issue_9 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100395 kernel zio_free_issue_9 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100396 kernel zio_free_issue_9 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100397 kernel zio_free_issue_9 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100398 kernel zio_free_issue_9 mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100399 kernel zio_free_intr mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100400 kernel zio_claim_issue mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100401 kernel zio_claim_intr mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100402 kernel zio_ioctl_issue mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100403 kernel zio_ioctl_intr mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100405 kernel zfs_vn_rele_task mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100418 kernel zil_clean mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100441 kernel zil_clean mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100477 kernel zil_clean mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100478 kernel zil_clean mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100479 kernel zil_clean mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100480 kernel zil_clean mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100481 kernel zil_clean mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100482 kernel zil_clean mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100484 kernel zil_clean mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100486 kernel zil_clean mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 
taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100488 kernel zil_clean mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100489 kernel zil_clean mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100490 kernel zil_clean mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100491 kernel zil_clean mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100492 kernel zil_clean mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100493 kernel zil_clean mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100494 kernel zil_clean mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100495 kernel zil_clean mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100496 kernel zil_clean mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100497 kernel zil_clean mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100498 kernel zil_clean mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100499 kernel zil_clean mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100636 kernel zil_clean mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 0 100638 kernel zil_clean mi_switch+0x186 > sleepq_wait+0x42 _sleep+0x379 taskqueue_thread_loop+0xbb > fork_exit+0x11f fork_trampoline+0xe > 1 100002 init - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _sleep+0x295 > kern_wait6+0x8c3 kern_wait+0x9c sys_wait4+0x35 amd64_syscall+0x540 > Xfast_syscall+0xf7 > 2 100033 crypto - mi_switch+0x186 sleepq_wait+0x42 > _sleep+0x379 crypto_proc+0x197 fork_exit+0x11f fork_trampoline+0xe > 3 100034 crypto returns - > mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 > crypto_ret_proc+0x192 fork_exit+0x11f fork_trampoline+0xe > 4 100061 ctl_thrd - mi_switch+0x186 sleepq_wait+0x42 > _sleep+0x379 ctl_work_thread+0x2fd0 fork_exit+0x11f > fork_trampoline+0xe > 5 100075 zfskern arc_reclaim_thre mi_switch+0x186 > sleepq_timedwait+0x42 _cv_timedwait+0x135 arc_reclaim_thread+0x29d > fork_exit+0x11f fork_trampoline+0xe > 5 100076 zfskern l2arc_feed_threa mi_switch+0x186 > sleepq_timedwait+0x42 _cv_timedwait+0x135 l2arc_feed_thread+0x1a2 > fork_exit+0x11f fork_trampoline+0xe > 5 100404 zfskern trim nsgroot mi_switch+0x186 > sleepq_timedwait+0x42 _cv_timedwait+0x135 trim_thread+0x68 > fork_exit+0x11f fork_trampoline+0xe > 5 100406 zfskern txg_thread_enter mi_switch+0x186 > sleepq_wait+0x42 _cv_wait+0x112 txg_thread_wait+0x79 > txg_quiesce_thread+0xbb fork_exit+0x11f fork_trampoline+0xe > 5 100407 zfskern txg_thread_enter mi_switch+0x186 > sleepq_wait+0x42 _cv_wait+0x112 rrw_enter_write+0x35 > dsl_sync_task_sync+0xb8 dsl_pool_sync+0x47d spa_sync+0x3ba > txg_sync_thread+0x139 fork_exit+0x11f fork_trampoline+0xe > 5 100408 zfskern zvol nsgroot/swa mi_switch+0x186 > sleepq_wait+0x42 
_sleep+0x379 zvol_geom_worker+0xfc > fork_exit+0x11f fork_trampoline+0xe > 6 100077 sctp_iterator - mi_switch+0x186 sleepq_wait+0x42 > _sleep+0x379 sctp_iterator_thread+0x41 fork_exit+0x11f > fork_trampoline+0xe > 7 100078 xpt_thrd - mi_switch+0x186 sleepq_wait+0x42 > _sleep+0x379 xpt_scanner_thread+0xff fork_exit+0x11f > fork_trampoline+0xe > 8 100079 ipmi0: kcs - mi_switch+0x186 sleepq_wait+0x42 > _cv_wait+0x112 ipmi_dequeue_request+0x47 kcs_loop+0x3d > fork_exit+0x11f fork_trampoline+0xe > 9 100080 enc_daemon0 - mi_switch+0x186 sleepq_wait+0x42 > _sleep+0x379 enc_daemon+0xe4 fork_exit+0x11f fork_trampoline+0xe > 10 100001 audit - mi_switch+0x186 sleepq_wait+0x42 > _cv_wait+0x112 audit_worker+0x359 fork_exit+0x11f fork_trampoline+0xe > 11 100003 idle idle: cpu0 > 11 100004 idle idle: cpu1 > 11 100005 idle idle: cpu2 > 11 100006 idle idle: cpu3 > 11 100007 idle idle: cpu4 > 11 100008 idle idle: cpu5 > 11 100009 idle idle: cpu6 > 11 100010 idle idle: cpu7 > 11 100011 idle idle: cpu8 > 11 100012 idle idle: cpu9 > 11 100013 idle idle: cpu10 mi_switch+0x186 > critical_exit+0xa5 sched_idletd+0x118 fork_exit+0x11f > fork_trampoline+0xe > 11 100014 idle idle: cpu11 > 12 100015 intr swi4: clock mi_switch+0x186 > ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe > 12 100016 intr swi4: clock mi_switch+0x186 > ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe > 12 100017 intr swi4: clock mi_switch+0x186 > ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe > 12 100018 intr swi4: clock mi_switch+0x186 > ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe > 12 100019 intr swi4: clock mi_switch+0x186 > ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe > 12 100020 intr swi4: clock mi_switch+0x186 > ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe > 12 100021 intr swi4: clock mi_switch+0x186 > ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe > 12 100022 intr swi4: clock mi_switch+0x186 > ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe > 12 100023 intr swi4: clock mi_switch+0x186 > ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe > 12 100024 intr swi4: clock mi_switch+0x186 > ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe > 12 100025 intr swi4: clock mi_switch+0x186 > ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe > 12 100026 intr swi4: clock mi_switch+0x186 > ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe > 12 100027 intr swi3: vm > 12 100028 intr swi1: netisr 0 mi_switch+0x186 > ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe > 12 100036 intr swi6: task queue mi_switch+0x186 > ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe > 12 100042 intr swi2: cambio mi_switch+0x186 > ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe > 12 100043 intr swi5: fast taskq > 12 100045 intr swi6: Giant task mi_switch+0x186 > ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe > 12 100046 intr irq264: mfi0 mi_switch+0x186 > ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe > 12 100047 intr irq23: ehci0 mi_switch+0x186 > ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe > 12 100054 intr irq22: ehci1 mi_switch+0x186 > ithread_loop+0x21e fork_exit+0x11f fork_trampoline+0xe > 12 100059 intr irq267: ahci0 > 12 100060 intr swi0: uart uart > 13 100029 geom g_event > mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 g_run_events+0x440 > fork_exit+0x11f fork_trampoline+0xe > 13 100030 geom g_up > mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 > g_io_schedule_up+0xe6 g_up_procbody+0x5c fork_exit+0x11f > fork_trampoline+0xe > 13 100031 geom 
g_down > mi_switch+0x186 sleepq_wait+0x42 _sleep+0x379 > g_io_schedule_down+0x25f g_down_procbody+0x5c fork_exit+0x11f > fork_trampoline+0xe > 14 100035 yarrow - mi_switch+0x186 > sleepq_timedwait+0x42 _sleep+0x1c9 random_kthread+0x1ea > fork_exit+0x11f fork_trampoline+0xe > 15 100048 usb usbus0 mi_switch+0x186 > sleepq_wait+0x42 _cv_wait+0x112 usb_process+0x18b fork_exit+0x11f > fork_trampoline+0xe > 15 100049 usb usbus0 mi_switch+0x186 > sleepq_wait+0x42 _cv_wait+0x112 usb_process+0x18b fork_exit+0x11f > fork_trampoline+0xe > 15 100050 usb usbus0 mi_switch+0x186 > sleepq_wait+0x42 _cv_wait+0x112 usb_process+0x18b fork_exit+0x11f > fork_trampoline+0xe > 15 100051 usb usbus0 mi_switch+0x186 > sleepq_wait+0x42 _cv_wait+0x112 usb_process+0x18b fork_exit+0x11f > fork_trampoline+0xe > 15 100055 usb usbus1 mi_switch+0x186 > sleepq_wait+0x42 _cv_wait+0x112 usb_process+0x18b fork_exit+0x11f > fork_trampoline+0xe > 15 100056 usb usbus1 mi_switch+0x186 > sleepq_wait+0x42 _cv_wait+0x112 usb_process+0x18b fork_exit+0x11f > fork_trampoline+0xe > 15 100057 usb usbus1 mi_switch+0x186 > sleepq_wait+0x42 _cv_wait+0x112 usb_process+0x18b fork_exit+0x11f > fork_trampoline+0xe > 15 100058 usb usbus1 mi_switch+0x186 > sleepq_wait+0x42 _cv_wait+0x112 usb_process+0x18b fork_exit+0x11f > fork_trampoline+0xe > 16 100081 pagedaemon - mi_switch+0x186 > sleepq_timedwait+0x42 _sleep+0x1c9 vm_pageout+0xb34 > fork_exit+0x11f fork_trampoline+0xe > 17 100082 vmdaemon - mi_switch+0x186 sleepq_wait+0x42 > _sleep+0x379 vm_daemon+0x58 fork_exit+0x11f fork_trampoline+0xe > 18 100083 pagezero - mi_switch+0x186 > sleepq_timedwait+0x42 _sleep+0x1c9 vm_pagezero+0x83 > fork_exit+0x11f fork_trampoline+0xe > 19 100084 bufdaemon - mi_switch+0x186 > sleepq_timedwait+0x42 _sleep+0x1c9 buf_daemon+0x1e1 > fork_exit+0x11f fork_trampoline+0xe > 20 100085 syncer - mi_switch+0x186 sleepq_wait+0x42 > _cv_wait+0x112 rrw_enter_read+0x4b zfs_sync+0x5e sync_fsync+0x1a2 > VOP_FSYNC_APV+0x68 sync_vnode+0x16b sched_sync+0x1c5 > fork_exit+0x11f fork_trampoline+0xe > 21 100086 vnlru - mi_switch+0x186 sleepq_wait+0x42 > _sx_slock_hard+0x318 _sx_slock+0x56 zfs_freebsd_reclaim+0x4b > VOP_RECLAIM_APV+0x68 vgonel+0x134 vnlru_free+0x362 > vnlru_proc+0x61e fork_exit+0x11f fork_trampoline+0xe > 22 100087 softdepflush - mi_switch+0x186 > sleepq_timedwait+0x42 _sleep+0x1c9 softdep_flush+0x375 > fork_exit+0x11f fork_trampoline+0xe > 782 100432 devd - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 > _cv_timedwait_sig+0x135 seltdwait+0x8d kern_select+0x6ef > sys_select+0x5d amd64_syscall+0x540 Xfast_syscall+0xf7 > 790 100545 pfpurge - mi_switch+0x186 > sleepq_timedwait+0x42 _sleep+0x1c9 pf_purge_thread+0x31 > fork_exit+0x11f fork_trampoline+0xe > 913 100449 syslogd - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _cv_wait_sig+0x12a > seltdwait+0xf6 kern_select+0x6ef sys_select+0x5d > amd64_syscall+0x540 Xfast_syscall+0xf7 > 1014 100581 ntpd - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _cv_wait_sig+0x12a > seltdwait+0xf6 kern_select+0x6ef sys_select+0x5d > amd64_syscall+0x540 Xfast_syscall+0xf7 > 1028 100532 sshd - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _cv_wait_sig+0x12a > seltdwait+0xf6 kern_select+0x6ef sys_select+0x5d > amd64_syscall+0x540 Xfast_syscall+0xf7 > 1032 100473 cron - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 _sleep+0x2ca > kern_nanosleep+0x118 sys_nanosleep+0x6e amd64_syscall+0x540 > Xfast_syscall+0xf7 > 1135 100558 CustomInit - 
mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _sleep+0x295 > kern_kevent+0x369 sys_kevent+0x90 amd64_syscall+0x540 > Xfast_syscall+0xf7 > 1145 100509 CustomInit - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _sleep+0x295 > kern_kevent+0x369 sys_kevent+0x90 amd64_syscall+0x540 > Xfast_syscall+0xf7 > 1156 100508 CustomInit - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _sleep+0x295 > kern_kevent+0x369 sys_kevent+0x90 amd64_syscall+0x540 > Xfast_syscall+0xf7 > 1161 100507 CustomInit - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _sleep+0x295 > kern_kevent+0x369 sys_kevent+0x90 amd64_syscall+0x540 > Xfast_syscall+0xf7 > 1163 100521 CustomInit - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _sleep+0x295 > kern_kevent+0x369 sys_kevent+0x90 amd64_syscall+0x540 > Xfast_syscall+0xf7 > 1168 100574 CustomInit - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _sleep+0x295 > kern_kevent+0x369 sys_kevent+0x90 amd64_syscall+0x540 > Xfast_syscall+0xf7 > 1170 100561 CustomInit - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _sleep+0x295 > kern_kevent+0x369 sys_kevent+0x90 amd64_syscall+0x540 > Xfast_syscall+0xf7 > 1174 100560 CustomInit - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _sleep+0x295 > kern_kevent+0x369 sys_kevent+0x90 amd64_syscall+0x540 > Xfast_syscall+0xf7 > 1180 100420 openvpn - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 > _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 > amd64_syscall+0x540 Xfast_syscall+0xf7 > 1181 100463 getty - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _cv_wait_sig+0x12a > tty_wait+0x25 ttydisc_read+0x2dd ttydev_read+0xc4 > devfs_read_f+0x90 dofileread+0xa1 kern_readv+0x6c sys_read+0x64 > amd64_syscall+0x540 Xfast_syscall+0xf7 > 1182 100430 getty - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _cv_wait_sig+0x12a > tty_wait+0x25 ttydisc_read+0x2dd ttydev_read+0xc4 > devfs_read_f+0x90 dofileread+0xa1 kern_readv+0x6c sys_read+0x64 > amd64_syscall+0x540 Xfast_syscall+0xf7 > 1183 100421 CustomInit - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _sleep+0x295 > kern_kevent+0x369 sys_kevent+0x90 amd64_syscall+0x540 > Xfast_syscall+0xf7 > 1443 100599 openvpn - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 > _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 > amd64_syscall+0x540 Xfast_syscall+0xf7 > 1444 100527 snapshot-manager - mi_switch+0x186 > sleepq_wait+0x42 _cv_wait+0x112 txg_wait_synced+0x85 > dsl_sync_task+0x139 dsl_destroy_snapshots_nvl+0x71 > dsl_destroy_snapshot+0x4a zfs_ioc_destroy+0x3e zfsdev_ioctl+0x58d > devfs_ioctl_f+0x7b kern_ioctl+0x106 sys_ioctl+0xfd > amd64_syscall+0x540 Xfast_syscall+0xf7 > 1445 100514 openvpn - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 > _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 > amd64_syscall+0x540 Xfast_syscall+0xf7 > 1447 100557 openvpn - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 > _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 > amd64_syscall+0x540 Xfast_syscall+0xf7 > 1478 100555 openvpn - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 > _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 > amd64_syscall+0x540 Xfast_syscall+0xf7 > 1479 100551 openvpn - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 > _cv_timedwait_sig+0x135 
seltdwait+0x8d sys_poll+0x478 > amd64_syscall+0x540 Xfast_syscall+0xf7 > 1486 100576 openvpn - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 > _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 > amd64_syscall+0x540 Xfast_syscall+0xf7 > 1488 100563 openvpn - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 > _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 > amd64_syscall+0x540 Xfast_syscall+0xf7 > 1494 100575 openvpn - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 > _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 > amd64_syscall+0x540 Xfast_syscall+0xf7 > 1495 100442 openvpn - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 > _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 > amd64_syscall+0x540 Xfast_syscall+0xf7 > 1496 100556 openvpn - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 > _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 > amd64_syscall+0x540 Xfast_syscall+0xf7 > 1505 100431 openvpn - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 > _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 > amd64_syscall+0x540 Xfast_syscall+0xf7 > 1511 100475 openvpn - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 > _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 > amd64_syscall+0x540 Xfast_syscall+0xf7 > 1531 100429 snapshot-manager - mi_switch+0x186 > sleepq_wait+0x42 _cv_wait+0x112 txg_wait_synced+0x85 > dsl_sync_task+0x139 zfs_ioc_rollback+0xb8 zfsdev_ioctl+0x58d > devfs_ioctl_f+0x7b kern_ioctl+0x106 sys_ioctl+0xfd > amd64_syscall+0x540 Xfast_syscall+0xf7 > 1535 100434 openvpn - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 > _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 > amd64_syscall+0x540 Xfast_syscall+0xf7 > 1545 100459 openvpn - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 > _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 > amd64_syscall+0x540 Xfast_syscall+0xf7 > 1567 100539 openvpn - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_timedwait_sig+0x19 > _cv_timedwait_sig+0x135 seltdwait+0x8d sys_poll+0x478 > amd64_syscall+0x540 Xfast_syscall+0xf7 > 1628 100454 snapshot-manager - mi_switch+0x186 > sleepq_wait+0x42 _cv_wait+0x112 rrw_enter_read+0x4b > dsl_pool_hold+0x44 dmu_objset_hold+0x2a zfs_ioc_objset_stats+0x23 > zfsdev_ioctl+0x58d devfs_ioctl_f+0x7b kern_ioctl+0x106 > sys_ioctl+0xfd amd64_syscall+0x540 Xfast_syscall+0xf7 > 2097 100710 java - > mi_switch+0x186 thread_suspend_switch+0xcd thread_single+0x1b2 > exit1+0x72 sys_sys_exit+0xe amd64_syscall+0x540 Xfast_syscall+0xf7 > 2097 100792 java - > mi_switch+0x186 sleepq_wait+0x42 __lockmgr_args+0x5cb > vop_stdlock+0x39 VOP_LOCK1_APV+0x70 _vn_lock+0x47 > zfsctl_freebsd_root_lookup+0xd5 VOP_LOOKUP_APV+0x62 lookup+0x437 > namei+0x4ac kern_statat_vnhook+0xb3 kern_statat+0x15 sys_stat+0x2a > amd64_syscall+0x540 Xfast_syscall+0xf7 > 2102 100635 data-processor dm_siri_to_chiro > mi_switch+0x186 sleepq_wait+0x42 _sx_slock_hard+0x318 > _sx_slock+0x56 zfs_freebsd_reclaim+0x4b VOP_RECLAIM_APV+0x68 > vgonel+0x134 vnlru_free+0x362 getnewvnode+0x27d > gfs_file_create+0x4b gfs_dir_create+0x16 > zfsctl_snapdir_lookup+0x474 VOP_LOOKUP_APV+0x62 lookup+0x437 > namei+0x4ac kern_statat_vnhook+0xb3 kern_statat+0x15 sys_stat+0x2a > 75715 100424 snapshot-manager-counts mi_switch+0x186 > sleepq_wait+0x42 _cv_wait+0x112 rrw_enter_read+0x4b > dsl_pool_hold+0x44 dmu_objset_hold+0x2a 
zfs_ioc_objset_stats+0x23 > zfsdev_ioctl+0x58d devfs_ioctl_f+0x7b kern_ioctl+0x106 > sys_ioctl+0xfd amd64_syscall+0x540 Xfast_syscall+0xf7 > 75718 100861 snapshot-manager-counts mi_switch+0x186 > sleepq_wait+0x42 _cv_wait+0x112 rrw_enter_read+0x4b > dsl_pool_hold+0x44 dmu_objset_hold+0x2a zfs_ioc_objset_stats+0x23 > zfsdev_ioctl+0x58d devfs_ioctl_f+0x7b kern_ioctl+0x106 > sys_ioctl+0xfd amd64_syscall+0x540 Xfast_syscall+0xf7 > 75719 100578 snapshot-manager-counts mi_switch+0x186 > sleepq_wait+0x42 _cv_wait+0x112 rrw_enter_read+0x4b > dsl_pool_hold+0x44 dmu_objset_hold+0x2a zfs_ioc_objset_stats+0x23 > zfsdev_ioctl+0x58d devfs_ioctl_f+0x7b kern_ioctl+0x106 > sys_ioctl+0xfd amd64_syscall+0x540 Xfast_syscall+0xf7 > 76069 101690 reboot - mi_switch+0x186 sleepq_wait+0x42 > _cv_wait+0x112 rrw_enter_read+0x4b zfs_sync+0x5e sys_sync+0x1e8 > amd64_syscall+0x540 Xfast_syscall+0xf7 > 76196 100847 sshd - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _cv_wait_sig+0x12a > seltdwait+0xf6 kern_select+0x6ef sys_select+0x5d > amd64_syscall+0x540 Xfast_syscall+0xf7 > 76198 101378 csh - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _sleep+0x295 > kern_sigsuspend+0xab sys_sigsuspend+0x34 amd64_syscall+0x540 > Xfast_syscall+0xf7 > 76200 100892 top - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _cv_wait_sig+0x12a > tty_wait+0x25 ttydisc_read+0x2dd ttydev_read+0xc4 > devfs_read_f+0x90 dofileread+0xa1 kern_readv+0x6c sys_read+0x64 > amd64_syscall+0x540 Xfast_syscall+0xf7 > 76203 100447 sshd - > 76205 101410 csh - mi_switch+0x186 > sleepq_catch_signals+0x2e1 sleepq_wait_sig+0x16 _sleep+0x295 > kern_sigsuspend+0xab sys_sigsuspend+0x34 amd64_syscall+0x540 > Xfast_syscall+0xf7 > 76209 101030 procstat - > > > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to > "freebsd-fs-unsubscribe@freebsd.org > " > >