From owner-freebsd-fs@FreeBSD.ORG Sun Mar 31 10:03:16 2013
Date: Sun, 31 Mar 2013 14:03:24 +0400
From: Scott Kitchin <scott@kitchin.com>
To: freebsd-fs@FreeBSD.org
Subject: FreeBSD deadlock
Message-ID: <515809EC.2060505@kitchin.com>

I want to report that FreeBSD 9.1 (release/releng/stable) has a problem with
ZFS and iRedMail. I was able to reproduce it on three machines, including a
VirtualBox VM, but I don't have the debugging skills to investigate it myself.

1) FreeBSD 9.1 (release/releng/stable)
2) ZFS in a RAID 1 (mirror) configuration. I also tried creating the swap
   partition outside of ZFS; no change.
3) Installed the iRedMail mail server from http://www.iredmail.org/

Before installing iRedMail, FreeBSD and ZFS worked normally. After installing
iRedMail, I rebooted the server to load the iRedMail configuration. When I
rebooted the server again, it hung after "All buffers synced".

I did the same with FreeBSD 9.0 and it worked normally, rebooting and shutting
down without hanging. The problem is easy to reproduce: I tried different
configurations and disabled many kernel modules, including USB, with no
effect. Without ZFS, FreeBSD 9.1 with iRedMail reboots and shuts down
normally.
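For clarity, the reproduction amounts to roughly the following sequence (a
minimal sketch, not my exact commands; the pool name and disk names are
placeholders):

  # Hypothetical repro sketch: two-disk ZFS mirror ("RAID 1"), then iRedMail.
  # Pool/disk names are illustrative; any 9.1 install on a mirror should do.
  zpool create zroot mirror /dev/ada0 /dev/ada1   # mirrored pool, as in step 2
  # (install FreeBSD 9.1 onto the pool; optionally put swap outside ZFS)

  # Step 3: fetch the iRedMail installer from http://www.iredmail.org/ and
  # run it, then reboot once so the iRedMail configuration takes effect:
  shutdown -r now

  # The second reboot is where the hang appears, after "All buffers synced":
  shutdown -r now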
Scott

From owner-freebsd-fs@FreeBSD.ORG Mon Apr 1 11:06:42 2013
Date: Mon, 1 Apr 2013 11:06:42 GMT
From: FreeBSD bugmaster <bugmaster@FreeBSD.org>
To: freebsd-fs@FreeBSD.org
Subject: Current problem reports assigned to freebsd-fs@FreeBSD.org
Message-Id: <201304011106.r31B6gwS033632@freefall.freebsd.org>

Note: to view an individual PR, use:
  http://www.freebsd.org/cgi/query-pr.cgi?pr=(number).

The following is a listing of current problems submitted by FreeBSD users.
These represent problem reports covering all versions including experimental
development code and obsolete releases.

S Tracker      Resp. Description
--------------------------------------------------------------------------------
o kern/177445  fs  [hast] HAST panic
o kern/177240  fs  [zfs] zpool import failed with state UNAVAIL but all d
o kern/176978  fs  [zfs] [panic] zfs send -D causes "panic: System call i
o kern/176857  fs  [softupdates] [panic] 9.1-RELEASE/amd64/GENERIC panic
o bin/176253   fs  zpool(8): zfs pool indentation is misleading/wrong
o kern/176141  fs  [zfs] sharesmb=on makes errors for sharenfs, and still
o kern/175950  fs  [zfs] Possible deadlock in zfs after long uptime
o kern/175897  fs  [zfs] operations on readonly zpool hang
o kern/175179  fs  [zfs] ZFS may attach wrong device on move
o kern/175071  fs  [ufs] [panic] softdep_deallocate_dependencies: unrecov
o kern/174372  fs  [zfs] Pagefault appears to be related to ZFS
o kern/174315  fs  [zfs] chflags uchg not supported
o kern/174310  fs  [zfs] root point mounting broken on CURRENT with multi
o kern/174279  fs  [ufs] UFS2-SU+J journal and filesystem corruption
o kern/174060  fs  [ext2fs] Ext2FS system crashes (buffer overflow?)
o kern/173830  fs  [zfs] Brain-dead simple change to ZFS error descriptio
o kern/173718  fs  [zfs] phantom directory in zraid2 pool
f kern/173657  fs  [nfs] strange UID map with nfsuserd
o kern/173363  fs  [zfs] [panic] Panic on 'zpool replace' on readonly poo
o kern/173136  fs  [unionfs] mounting above the NFS read-only share panic
o kern/172348  fs  [unionfs] umount -f of filesystem in use with readonly
o kern/172334  fs  [unionfs] unionfs permits recursive union mounts; caus
o kern/171626  fs  [tmpfs] tmpfs should be noisier when the requested siz
o kern/171415  fs  [zfs] zfs recv fails with "cannot receive incremental
o kern/170945  fs  [gpt] disk layout not portable between direct connect
o bin/170778   fs  [zfs] [panic] FreeBSD panics randomly
o kern/170680  fs  [nfs] Multiple NFS Client bug in the FreeBSD 7.4-RELEA
o kern/170497  fs  [xfs][panic] kernel will panic whenever I ls a mounted
o kern/169945  fs  [zfs] [panic] Kernel panic while importing zpool (afte
o kern/169480  fs  [zfs] ZFS stalls on heavy I/O
o kern/169398  fs  [zfs] Can't remove file with permanent error
o kern/169339  fs  panic while " : > /etc/123"
o kern/169319  fs  [zfs] zfs resilver can't complete
o kern/168947  fs  [nfs] [zfs] .zfs/snapshot directory is messed up when
o kern/168942  fs  [nfs] [hang] nfsd hangs after being restarted (not -HU
o kern/168158  fs  [zfs] incorrect parsing of sharenfs options in zfs (fs
o kern/167979  fs  [ufs] DIOCGDINFO ioctl does not work on 8.2 file syste
o kern/167977  fs  [smbfs] mount_smbfs results are differ when utf-8 or U
o kern/167688  fs  [fusefs] Incorrect signal handling with direct_io
o kern/167685  fs  [zfs] ZFS on USB drive prevents shutdown / reboot
o kern/167612  fs  [portalfs] The portal file system gets stuck inside po
o kern/167272  fs  [zfs] ZFS Disks reordering causes ZFS to pick the wron
o kern/167260  fs  [msdosfs] msdosfs disk was mounted the second time whe
o kern/167109  fs  [zfs] [panic] zfs diff kernel panic Fatal trap 9: gene
o kern/167105  fs  [nfs] mount_nfs can not handle source exports wiht mor
o kern/167067  fs  [zfs] [panic] ZFS panics the server
o kern/167065  fs  [zfs] boot fails when a spare is the boot disk
o kern/167048  fs  [nfs] [patch] RELEASE-9 crash when using ZFS+NULLFS+NF
o kern/166912  fs  [ufs] [panic] Panic after converting Softupdates to jo
o kern/166851  fs  [zfs] [hang] Copying directory from the mounted UFS di
o kern/166477  fs  [nfs] NFS data corruption.
o kern/165950  fs  [ffs] SU+J and fsck problem
o kern/165521  fs  [zfs] [hang] livelock on 1 Gig of RAM with zfs when 31
o kern/165392  fs  Multiple mkdir/rmdir fails with errno 31
o kern/165087  fs  [unionfs] lock violation in unionfs
o kern/164472  fs  [ufs] fsck -B panics on particular data inconsistency
o kern/164370  fs  [zfs] zfs destroy for snapshot fails on i386 and sparc
o kern/164261  fs  [nullfs] [patch] fix panic with NFS served from NULLFS
o kern/164256  fs  [zfs] device entry for volume is not created after zfs
o kern/164184  fs  [ufs] [panic] Kernel panic with ufs_makeinode
o kern/163801  fs  [md] [request] allow mfsBSD legacy installed in 'swap'
o kern/163770  fs  [zfs] [hang] LOR between zfs&syncer + vnlru leading to
o kern/163501  fs  [nfs] NFS exporting a dir and a subdir in that dir to
o kern/162944  fs  [coda] Coda file system module looks broken in 9.0
o kern/162860  fs  [zfs] Cannot share ZFS filesystem to hosts with a hyph
o kern/162751  fs  [zfs] [panic] kernel panics during file operations
o kern/162591  fs  [nullfs] cross-filesystem nullfs does not work as expe
o kern/162519  fs  [zfs] "zpool import" relies on buggy realpath() behavi
o kern/161968  fs  [zfs] [hang] renaming snapshot with -r including a zvo
o kern/161864  fs  [ufs] removing journaling from UFS partition fails on
o bin/161807   fs  [patch] add option for explicitly specifying metadata
o kern/161579  fs  [smbfs] FreeBSD sometimes panics when an smb share is
o kern/161533  fs  [zfs] [panic] zfs receive panic: system ioctl returnin
o kern/161438  fs  [zfs] [panic] recursed on non-recursive spa_namespace_
o kern/161424  fs  [nullfs] __getcwd() calls fail when used on nullfs mou
o kern/161280  fs  [zfs] Stack overflow in gptzfsboot
o kern/161205  fs  [nfs] [pfsync] [regression] [build] Bug report freebsd
o kern/161169  fs  [zfs] [panic] ZFS causes kernel panic in dbuf_dirty
o kern/161112  fs  [ufs] [lor] filesystem LOR in FreeBSD 9.0-BETA3
o kern/160893  fs  [zfs] [panic] 9.0-BETA2 kernel panic
o kern/160860  fs  [ufs] Random UFS root filesystem corruption with SU+J
o kern/160801  fs  [zfs] zfsboot on 8.2-RELEASE fails to boot from root-o
o kern/160790  fs  [fusefs] [panic] VPUTX: negative ref count with FUSE
o kern/160777  fs  [zfs] [hang] RAID-Z3 causes fatal hang upon scrub/impo
o kern/160706  fs  [zfs] zfs bootloader fails when a non-root vdev exists
o kern/160591  fs  [zfs] Fail to boot on zfs root with degraded raidz2 [r
o kern/160410  fs  [smbfs] [hang] smbfs hangs when transferring large fil
o kern/160283  fs  [zfs] [patch] 'zfs list' does abort in make_dataset_ha
o kern/159930  fs  [ufs] [panic] kernel core
o kern/159402  fs  [zfs][loader] symlinks cause I/O errors
o kern/159357  fs  [zfs] ZFS MAXNAMELEN macro has confusing name (off-by-
o kern/159356  fs  [zfs] [patch] ZFS NAME_ERR_DISKLIKE check is Solaris-s
o kern/159351  fs  [nfs] [patch] - divide by zero in mountnfs()
o kern/159251  fs  [zfs] [request]: add FLETCHER4 as DEDUP hash option
o kern/159077  fs  [zfs] Can't cd .. with latest zfs version
o kern/159048  fs  [smbfs] smb mount corrupts large files
o kern/159045  fs  [zfs] [hang] ZFS scrub freezes system
o kern/158839  fs  [zfs] ZFS Bootloader Fails if there is a Dead Disk
o kern/158802  fs  amd(8) ICMP storm and unkillable process.
o kern/158231  fs  [nullfs] panic on unmounting nullfs mounted over ufs o
f kern/157929  fs  [nfs] NFS slow read
o kern/157399  fs  [zfs] trouble with: mdconfig force delete && zfs strip
o kern/157179  fs  [zfs] zfs/dbuf.c: panic: solaris assert: arc_buf_remov
o kern/156797  fs  [zfs] [panic] Double panic with FreeBSD 9-CURRENT and
o kern/156781  fs  [zfs] zfs is losing the snapshot directory,
p kern/156545  fs  [ufs] mv could break UFS on SMP systems
o kern/156193  fs  [ufs] [hang] UFS snapshot hangs && deadlocks processes
o kern/156039  fs  [nullfs] [unionfs] nullfs + unionfs do not compose, re
o kern/155615  fs  [zfs] zfs v28 broken on sparc64 -current
o kern/155587  fs  [zfs] [panic] kernel panic with zfs
p kern/155411  fs  [regression] [8.2-release] [tmpfs]: mount: tmpfs : No
o kern/155199  fs  [ext2fs] ext3fs mounted as ext2fs gives I/O errors
o bin/155104   fs  [zfs][patch] use /dev prefix by default when importing
o kern/154930  fs  [zfs] cannot delete/unlink file from full volume -> EN
o kern/154828  fs  [msdosfs] Unable to create directories on external USB
o kern/154491  fs  [smbfs] smb_co_lock: recursive lock for object 1
p kern/154228  fs  [md] md getting stuck in wdrain state
o kern/153996  fs  [zfs] zfs root mount error while kernel is not located
o kern/153753  fs  [zfs] ZFS v15 - grammatical error when attempting to u
o kern/153716  fs  [zfs] zpool scrub time remaining is incorrect
o kern/153695  fs  [patch] [zfs] Booting from zpool created on 4k-sector
o kern/153680  fs  [xfs] 8.1 failing to mount XFS partitions
o kern/153418  fs  [zfs] [panic] Kernel Panic occurred writing to zfs vol
o kern/153351  fs  [zfs] locking directories/files in ZFS
o bin/153258   fs  [patch][zfs] creating ZVOLs requires `refreservation'
s kern/153173  fs  [zfs] booting from a gzip-compressed dataset doesn't w
o bin/153142   fs  [zfs] ls -l outputs `ls: ./.zfs: Operation not support
o kern/153126  fs  [zfs] vdev failure, zpool=peegel type=vdev.too_small
o kern/152022  fs  [nfs] nfs service hangs with linux client [regression]
o kern/151942  fs  [zfs] panic during ls(1) zfs snapshot directory
o kern/151905  fs  [zfs] page fault under load in /sbin/zfs
o bin/151713   fs  [patch] Bug in growfs(8) with respect to 32-bit overfl
o kern/151648  fs  [zfs] disk wait bug
o kern/151629  fs  [fs] [patch] Skip empty directory entries during name
o kern/151330  fs  [zfs] will unshare all zfs filesystem after execute a
o kern/151326  fs  [nfs] nfs exports fail if netgroups contain duplicate
o kern/151251  fs  [ufs] Can not create files on filesystem with heavy us
o kern/151226  fs  [zfs] can't delete zfs snapshot
o kern/150503  fs  [zfs] ZFS disks are UNAVAIL and corrupted after reboot
o kern/150501  fs  [zfs] ZFS vdev failure vdev.bad_label on amd64
o kern/150390  fs  [zfs] zfs deadlock when arcmsr reports drive faulted
o kern/150336  fs  [nfs] mountd/nfsd became confused; refused to reload n
o kern/149208  fs  mksnap_ffs(8) hang/deadlock
o kern/149173  fs  [patch] [zfs] make OpenSolaris installa
o kern/149015  fs  [zfs] [patch] misc fixes for ZFS code to build on Glib
o kern/149014  fs  [zfs] [patch] declarations in ZFS libraries/utilities
o kern/149013  fs  [zfs] [patch] make ZFS makefiles use the libraries fro
o kern/148504  fs  [zfs] ZFS' zpool does not allow replacing drives to be
o kern/148490  fs  [zfs]: zpool attach - resilver bidirectionally, and re
o kern/148368  fs  [zfs] ZFS hanging forever on 8.1-PRERELEASE
o kern/148138  fs  [zfs] zfs raidz pool commands freeze
o kern/147903  fs  [zfs] [panic] Kernel panics on faulty zfs device
o kern/147881  fs  [zfs] [patch] ZFS "sharenfs" doesn't allow different "
o kern/147420  fs  [ufs] [panic] ufs_dirbad, nullfs, jail panic (corrupt
o kern/146941  fs  [zfs] [panic] Kernel Double Fault - Happens constantly
o kern/146786  fs  [zfs] zpool import hangs with checksum errors
o kern/146708  fs  [ufs] [panic] Kernel panic in softdep_disk_write_compl
o kern/146528  fs  [zfs] Severe memory leak in ZFS on i386
o kern/146502  fs  [nfs] FreeBSD 8 NFS Client Connection to Server
s kern/145712  fs  [zfs] cannot offline two drives in a raidz2 configurat
o kern/145411  fs  [xfs] [panic] Kernel panics shortly after mounting an
f bin/145309   fs  bsdlabel: Editing disk label invalidates the whole dev
o kern/145272  fs  [zfs] [panic] Panic during boot when accessing zfs on
o kern/145246  fs  [ufs] dirhash in 7.3 gratuitously frees hashes when it
o kern/145238  fs  [zfs] [panic] kernel panic on zpool clear tank
o kern/145229  fs  [zfs] Vast differences in ZFS ARC behavior between 8.0
o kern/145189  fs  [nfs] nfsd performs abysmally under load
o kern/144929  fs  [ufs] [lor] vfs_bio.c + ufs_dirhash.c
p kern/144447  fs  [zfs] sharenfs fsunshare() & fsshare_main() non functi
o kern/144416  fs  [panic] Kernel panic on online filesystem optimization
s kern/144415  fs  [zfs] [panic] kernel panics on boot after zfs crash
o kern/144234  fs  [zfs] Cannot boot machine with recent gptzfsboot code
o kern/143825  fs  [nfs] [panic] Kernel panic on NFS client
o bin/143572   fs  [zfs] zpool(1): [patch] The verbose output from iostat
o kern/143212  fs  [nfs] NFSv4 client strange work ...
o kern/143184  fs  [zfs] [lor] zfs/bufwait LOR
o kern/142878  fs  [zfs] [vfs] lock order reversal
o kern/142597  fs  [ext2fs] ext2fs does not work on filesystems with real
o kern/142489  fs  [zfs] [lor] allproc/zfs LOR
o kern/142466  fs  Update 7.2 -> 8.0 on Raid 1 ends with screwed raid [re
o kern/142306  fs  [zfs] [panic] ZFS drive (from OSX Leopard) causes two
o kern/142068  fs  [ufs] BSD labels are got deleted spontaneously
o kern/141897  fs  [msdosfs] [panic] Kernel panic. msdofs: file name leng
o kern/141463  fs  [nfs] [panic] Frequent kernel panics after upgrade fro
o kern/141305  fs  [zfs] FreeBSD ZFS+sendfile severe performance issues (
o kern/141091  fs  [patch] [nullfs] fix panics with DIAGNOSTIC enabled
o kern/141086  fs  [nfs] [panic] panic("nfs: bioread, not dir") on FreeBS
o kern/141010  fs  [zfs] "zfs scrub" fails when backed by files in UFS2
o kern/140888  fs  [zfs] boot fail from zfs root while the pool resilveri
o kern/140661  fs  [zfs] [patch] /boot/loader fails to work on a GPT/ZFS-
o kern/140640  fs  [zfs] snapshot crash
o kern/140068  fs  [smbfs] [patch] smbfs does not allow semicolon in file
o kern/139725  fs  [zfs] zdb(1) dumps core on i386 when examining zpool c
o kern/139715  fs  [zfs] vfs.numvnodes leak on busy zfs
p bin/139651   fs  [nfs] mount(8): read-only remount of NFS volume does n
o kern/139407  fs  [smbfs] [panic] smb mount causes system crash if remot
o kern/138662  fs  [panic] ffs_blkfree: freeing free block
o kern/138421  fs  [ufs] [patch] remove UFS label limitations
o kern/138202  fs  mount_msdosfs(1) see only 2Gb
o kern/136968  fs  [ufs] [lor] ufs/bufwait/ufs (open)
o kern/136945  fs  [ufs] [lor] filedesc structure/ufs (poll)
o kern/136944  fs  [ffs] [lor] bufwait/snaplk (fsync)
o kern/136873  fs  [ntfs] Missing directories/files on NTFS volume
o kern/136865  fs  [nfs] [patch] NFS exports atomic and on-the-fly atomic
p kern/136470  fs  [nfs] Cannot mount / in read-only, over NFS
o kern/135546  fs  [zfs] zfs.ko module doesn't ignore zpool.cache filenam
o kern/135469  fs  [ufs] [panic] kernel crash on md operation in ufs_dirb
o kern/135050  fs  [zfs] ZFS clears/hides disk errors on reboot
o kern/134491  fs  [zfs] Hot spares are rather cold...
o kern/133676  fs  [smbfs] [panic] umount -f'ing a vnode-based memory dis
p kern/133174  fs  [msdosfs] [patch] msdosfs must support multibyte inter
o kern/132960  fs  [ufs] [panic] panic:ffs_blkfree: freeing free frag
o kern/132397  fs  reboot causes filesystem corruption (failure to sync b
o kern/132331  fs  [ufs] [lor] LOR ufs and syncer
o kern/132237  fs  [msdosfs] msdosfs has problems to read MSDOS Floppy
o kern/132145  fs  [panic] File System Hard Crashes
o kern/131441  fs  [unionfs] [nullfs] unionfs and/or nullfs not combineab
o kern/131360  fs  [nfs] poor scaling behavior of the NFS server under lo
o kern/131342  fs  [nfs] mounting/unmounting of disks causes NFS to fail
o bin/131341   fs  makefs: error "Bad file descriptor" on the mount poin
o kern/130920  fs  [msdosfs] cp(1) takes 100% CPU time while copying file
o kern/130210  fs  [nullfs] Error by check nullfs
o kern/129760  fs  [nfs] after 'umount -f' of a stale NFS share FreeBSD l
o kern/129488  fs  [smbfs] Kernel "bug" when using smbfs in smbfs_smb.c:
o kern/129231  fs  [ufs] [patch] New UFS mount (norandom) option - mostly
o kern/129152  fs  [panic] non-userfriendly panic when trying to mount(8)
o kern/127787  fs  [lor] [ufs] Three LORs: vfslock/devfs/vfslock, ufs/vfs
o bin/127270   fs  fsck_msdosfs(8) may crash if BytesPerSec is zero
o kern/127029  fs  [panic] mount(8): trying to mount a write protected zi
o kern/126287  fs  [ufs] [panic] Kernel panics while mounting an UFS file
o kern/125895  fs  [ffs] [panic] kernel: panic: ffs_blkfree: freeing free
s kern/125738  fs  [zfs] [request] SHA256 acceleration in ZFS
o kern/123939  fs  [msdosfs] corrupts new files
o kern/122380  fs  [ffs] ffs_valloc:dup alloc (Soekris 4801/7.0/USB Flash
o bin/122172   fs  [fs]: amd(8) automount daemon dies on 6.3-STABLE i386,
o bin/121898   fs  [nullfs] pwd(1)/getcwd(2) fails with Permission denied
o bin/121072   fs  [smbfs] mount_smbfs(8) cannot normally convert the cha
o kern/120483  fs  [ntfs] [patch] NTFS filesystem locking changes
o kern/120482  fs  [ntfs] [patch] Sync style changes between NetBSD and F
o kern/118912  fs  [2tb] disk sizing/geometry problem with large array
o kern/118713  fs  [minidump] [patch] Display media size required for a k
o kern/118318  fs  [nfs] NFS server hangs under special circumstances
o bin/118249   fs  [ufs] mv(1): moving a directory changes its mtime
o kern/118126  fs  [nfs] [patch] Poor NFS server write performance
o kern/118107  fs  [ntfs] [panic] Kernel panic when accessing a file at N
o kern/117954  fs  [ufs] dirhash on very large directories blocks the mac
o bin/117315   fs  [smbfs] mount_smbfs(8) and related options can't mount
o kern/117158  fs  [zfs] zpool scrub causes panic if geli vdevs detach on
o bin/116980   fs  [msdosfs] [patch] mount_msdosfs(8) resets some flags f
o conf/116931  fs  lack of fsck_cd9660 prevents mounting iso images with
o kern/116583  fs  [ffs] [hang] System freezes for short time when using
o bin/115361   fs  [zfs] mount(8) gets into a state where it won't set/un
o kern/114955  fs  [cd9660] [patch] [request] support for mask,dirmask,ui
o kern/114847  fs  [ntfs] [patch] [request] dirmask support for NTFS ala
o kern/114676  fs  [ufs] snapshot creation panics: snapacct_ufs2: bad blo
o bin/114468   fs  [patch] [request] add -d option to umount(8) to detach
o kern/113852  fs  [smbfs] smbfs does not properly implement DFS referral
o bin/113838   fs  [patch] [request] mount(8): add support for relative p
o bin/113049   fs  [patch] [request] make quot(8) use getopt(3) and show
o kern/112658  fs  [smbfs] [patch] smbfs and caching problems (resolves b
o kern/111843  fs  [msdosfs] Long Names of files are incorrectly created
o kern/111782  fs  [ufs] dump(8) fails horribly for large filesystems
s bin/111146   fs  [2tb] fsck(8) fails on 6T filesystem
o bin/107829   fs  [2TB] fdisk(8): invalid boundary checking in fdisk / w
o kern/106107  fs  [ufs] left-over fsck_snapshot after unfinished backgro
o kern/104406  fs  [ufs] Processes get stuck in "ufs" state under persist
o kern/104133  fs  [ext2fs] EXT2FS module corrupts EXT2/3 filesystems
o kern/103035  fs  [ntfs] Directories in NTFS mounted disc images appear
o kern/101324  fs  [smbfs] smbfs sometimes not case sensitive when it's s
o kern/99290   fs  [ntfs] mount_ntfs ignorant of cluster sizes
s bin/97498    fs  [request] newfs(8) has no option to clear the first 12
o kern/97377   fs  [ntfs] [patch] syntax cleanup for ntfs_ihash.c
o kern/95222   fs  [cd9660] File sections on ISO9660 level 3 CDs ignored
o kern/94849   fs  [ufs] rename on UFS filesystem is not atomic
o bin/94810    fs  fsck(8) incorrectly reports 'file system marked clean'
o kern/94769   fs  [ufs] Multiple file deletions on multi-snapshotted fil
o kern/94733   fs  [smbfs] smbfs may cause double unlock
o kern/93942   fs  [vfs] [patch] panic: ufs_dirbad: bad dir (patch from D
o kern/92272   fs  [ffs] [hang] Filling a filesystem while creating a sna
o kern/91134   fs  [smbfs] [patch] Preserve access and modification time
a kern/90815   fs  [smbfs] [patch] SMBFS with character conversions somet
o kern/88657   fs  [smbfs] windows client hang when browsing a samba shar
o kern/88555   fs  [panic] ffs_blkfree: freeing free frag on AMD 64
o bin/87966    fs  [patch] newfs(8): introduce -A flag for newfs to enabl
o kern/87859   fs  [smbfs] System reboot while umount smbfs.
o kern/86587   fs  [msdosfs] rm -r /PATH fails with lots of small files
o bin/85494    fs  fsck_ffs: unchecked use of cg_inosused macro etc.
o kern/80088   fs  [smbfs] Incorrect file time setting on NTFS mounted vi
o bin/74779    fs  Background-fsck checks one filesystem twice and omits
o kern/73484   fs  [ntfs] Kernel panic when doing `ls` from the client si
o bin/73019    fs  [ufs] fsck_ufs(8) cannot alloc 607016868 bytes for ino
o kern/71774   fs  [ntfs] NTFS cannot "see" files on a WinXP filesystem
o bin/70600    fs  fsck(8) throws files away when it can't grow lost+foun
o kern/68978   fs  [panic] [ufs] crashes with failing hard disk, loose po
o kern/65920   fs  [nwfs] Mounted Netware filesystem behaves strange
o kern/65901   fs  [smbfs] [patch] smbfs fails fsx write/truncate-down/tr
o kern/61503   fs  [smbfs] mount_smbfs does not work as non-root
o kern/55617   fs  [smbfs] Accessing an nsmb-mounted drive via a smb expo
o kern/51685   fs  [hang] Unbounded inode allocation causes kernel to loc
o kern/36566   fs  [smbfs] System reboot with dead smb mount and umount
o bin/27687    fs  fsck(8) wrapper is not properly passing options to fsc
o kern/18874   fs  [2TB] 32bit NFS servers export wrong negative values t

302 problems total.

From owner-freebsd-fs@FreeBSD.ORG Mon Apr 1 18:06:03 2013
Date: Mon, 1 Apr 2013 13:05:49 -0500 (CDT)
From: Larry Rosenman <ler@lerctr.org>
To: freebsd-current@freebsd.org, freebsd-fs@freebsd.org
Subject: [CRASH] ZFS recv (fwd)/CURRENT

Re-Sending. Any ideas, guys/gals? This really gets in my way.

--
Larry Rosenman                  http://www.lerctr.org/~ler
Phone: +1 512-248-2683          E-Mail: ler@lerctr.org
US Mail: 430 Valona Loop, Round Rock, TX 78681-3893

---------- Forwarded message ----------
Date: Mon, 25 Mar 2013 09:01:30 -0500 (CDT)
From: Larry Rosenman <ler@lerctr.org>
To: freebsd-current@freebsd.org
Subject: [CRASH] ZFS recv

Greetings,
I'm getting a zfs recv crash on 10.0-CURRENT....
I was ssh'd to my 8.4 box and did:

  zfs send -R -D vault@2013-03-24 | ssh home.lerctr.org zfs recv -F -u -v -d zroot/backups/TBH

On the 2nd (or so) filesystem, I got the following panic. I have the vmcore
as well as the sources, and can give ssh access. I can also reproduce this at
will. Ideas?

borg.lerctr.org dumped core - see /var/crash/vmcore.4

Mon Mar 25 08:52:46 CDT 2013

FreeBSD borg.lerctr.org 10.0-CURRENT FreeBSD 10.0-CURRENT #129 r248695: Mon Mar 25 05:03:32 CDT 2013 root@borg.lerctr.org:/usr/obj/usr/src/sys/BORG-DTRACE amd64

panic: page fault

GNU gdb 6.1.1 [FreeBSD]
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB. Type "show warranty" for details.
This GDB was configured as "amd64-marcel-freebsd"...

Unread portion of the kernel message buffer:

Fatal trap 12: page fault while in kernel mode
cpuid = 0; apic id = 00
fault virtual address   = 0x378
fault code              = supervisor read data, page not present
instruction pointer     = 0x20:0xffffffff80531426
stack pointer           = 0x28:0xffffff91579193d0
frame pointer           = 0x28:0xffffff9157919470
code segment            = base 0x0, limit 0xfffff, type 0x1b
                        = DPL 0, pres 1, long 1, def32 0, gran 1
processor eflags        = interrupt enabled, resume, IOPL = 0
current process         = 1044 (zfs)
trap number             = 12
panic: page fault
cpuid = 0
Uptime: 2m10s
Dumping 4913 out of 64747 MB:..1%..11%..21%..31%..41%..51%..61%..71%..81%..91%

Reading symbols from /boot/kernel/zfs.ko...Reading symbols from /boot/kernel/zfs.ko.symbols...done. done.
Loaded symbols for /boot/kernel/zfs.ko
Reading symbols from /boot/kernel/acl_nfs4.ko...Reading symbols from /boot/kernel/acl_nfs4.ko.symbols...done. done.
Loaded symbols for /boot/kernel/acl_nfs4.ko
Reading symbols from /boot/kernel/opensolaris.ko...Reading symbols from /boot/kernel/opensolaris.ko.symbols...done. done.
Loaded symbols for /boot/kernel/opensolaris.ko
Reading symbols from /boot/kernel/linux.ko...Reading symbols from /boot/kernel/linux.ko.symbols...done. done.
Loaded symbols for /boot/kernel/linux.ko
Reading symbols from /boot/kernel/coretemp.ko...Reading symbols from /boot/kernel/coretemp.ko.symbols...done. done.
Loaded symbols for /boot/kernel/coretemp.ko
Reading symbols from /boot/kernel/ichsmb.ko...Reading symbols from /boot/kernel/ichsmb.ko.symbols...done. done.
Loaded symbols for /boot/kernel/ichsmb.ko
Reading symbols from /boot/kernel/smbus.ko...Reading symbols from /boot/kernel/smbus.ko.symbols...done. done.
Loaded symbols for /boot/kernel/smbus.ko
Reading symbols from /boot/kernel/ichwd.ko...Reading symbols from /boot/kernel/ichwd.ko.symbols...done. done.
Loaded symbols for /boot/kernel/ichwd.ko
Reading symbols from /boot/kernel/cpuctl.ko...Reading symbols from /boot/kernel/cpuctl.ko.symbols...done. done.
Loaded symbols for /boot/kernel/cpuctl.ko
Reading symbols from /boot/kernel/crypto.ko...Reading symbols from /boot/kernel/crypto.ko.symbols...done. done.
Loaded symbols for /boot/kernel/crypto.ko
Reading symbols from /boot/kernel/cryptodev.ko...Reading symbols from /boot/kernel/cryptodev.ko.symbols...done. done.
Loaded symbols for /boot/kernel/cryptodev.ko
Reading symbols from /boot/kernel/dtraceall.ko...Reading symbols from /boot/kernel/dtraceall.ko.symbols...done. done.
Loaded symbols for /boot/kernel/dtraceall.ko
Reading symbols from /boot/kernel/profile.ko...Reading symbols from /boot/kernel/profile.ko.symbols...done. done.
Loaded symbols for /boot/kernel/profile.ko
Reading symbols from /boot/kernel/cyclic.ko...Reading symbols from /boot/kernel/cyclic.ko.symbols...done. done.
Loaded symbols for /boot/kernel/cyclic.ko
Reading symbols from /boot/kernel/dtrace.ko...Reading symbols from /boot/kernel/dtrace.ko.symbols...done. done.
Loaded symbols for /boot/kernel/dtrace.ko
Reading symbols from /boot/kernel/systrace_freebsd32.ko...Reading symbols from /boot/kernel/systrace_freebsd32.ko.symbols...done. done.
Loaded symbols for /boot/kernel/systrace_freebsd32.ko
Reading symbols from /boot/kernel/systrace.ko...Reading symbols from /boot/kernel/systrace.ko.symbols...done. done.
Loaded symbols for /boot/kernel/systrace.ko
Reading symbols from /boot/kernel/sdt.ko...Reading symbols from /boot/kernel/sdt.ko.symbols...done. done.
Loaded symbols for /boot/kernel/sdt.ko
Reading symbols from /boot/kernel/lockstat.ko...Reading symbols from /boot/kernel/lockstat.ko.symbols...done. done.
Loaded symbols for /boot/kernel/lockstat.ko
Reading symbols from /boot/kernel/fasttrap.ko...Reading symbols from /boot/kernel/fasttrap.ko.symbols...done. done.
Loaded symbols for /boot/kernel/fasttrap.ko
Reading symbols from /boot/kernel/fbt.ko...Reading symbols from /boot/kernel/fbt.ko.symbols...done. done.
Loaded symbols for /boot/kernel/fbt.ko
Reading symbols from /boot/kernel/dtnfscl.ko...Reading symbols from /boot/kernel/dtnfscl.ko.symbols...done. done.
Loaded symbols for /boot/kernel/dtnfscl.ko
Reading symbols from /boot/kernel/dtmalloc.ko...Reading symbols from /boot/kernel/dtmalloc.ko.symbols...done. done.
Loaded symbols for /boot/kernel/dtmalloc.ko
Reading symbols from /boot/kernel/dtio.ko...Reading symbols from /boot/kernel/dtio.ko.symbols...done. done.
Loaded symbols for /boot/kernel/dtio.ko
Reading symbols from /boot/modules/vboxdrv.ko...done.
Loaded symbols for /boot/modules/vboxdrv.ko
Reading symbols from /boot/kernel/fdescfs.ko...Reading symbols from /boot/kernel/fdescfs.ko.symbols...done. done.
Loaded symbols for /boot/kernel/fdescfs.ko
Reading symbols from /boot/kernel/uhid.ko...Reading symbols from /boot/kernel/uhid.ko.symbols...done. done.
Loaded symbols for /boot/kernel/uhid.ko
Reading symbols from /boot/modules/vboxnetflt.ko...done.
Loaded symbols for /boot/modules/vboxnetflt.ko
Reading symbols from /boot/kernel/netgraph.ko...Reading symbols from /boot/kernel/netgraph.ko.symbols...done. done.
Loaded symbols for /boot/kernel/netgraph.ko
Reading symbols from /boot/kernel/ng_ether.ko...Reading symbols from /boot/kernel/ng_ether.ko.symbols...done. done.
Loaded symbols for /boot/kernel/ng_ether.ko
Reading symbols from /boot/modules/vboxnetadp.ko...done.
Loaded symbols for /boot/modules/vboxnetadp.ko
#0  doadump (textdump=<value optimized out>) at pcpu.h:229
229     pcpu.h: No such file or directory.
        in pcpu.h
(kgdb) #0  doadump (textdump=<value optimized out>) at pcpu.h:229
#1  0xffffffff80529060 in kern_reboot (howto=260) at /usr/src/sys/kern/kern_shutdown.c:447
#2  0xffffffff805293e7 in panic (fmt=<value optimized out>) at /usr/src/sys/kern/kern_shutdown.c:754
#3  0xffffffff80732cd5 in trap_fatal (frame=<value optimized out>, eva=<value optimized out>) at /usr/src/sys/amd64/amd64/trap.c:872
#4  0xffffffff807330d9 in trap_pfault (frame=0x0, usermode=0) at /usr/src/sys/amd64/amd64/trap.c:730
#5  0xffffffff807326bc in trap (frame=0xffffff9157919320) at /usr/src/sys/amd64/amd64/trap.c:463
#6  0xffffffff8071cc62 in calltrap () at exception.S:228
#7  0xffffffff80531426 in _sx_xlock_hard (sx=0xfffffe01d5b64db8, tid=18446741877920023696, opts=<value optimized out>, file=0x0, line=250541281) at /usr/src/sys/kern/kern_sx.c:556
#8  0xffffffff80530f8c in _sx_xlock (sx=0x0, opts=0, file=0x0, line=0) at sx.h:152
#9  0xffffffff80e67251 in dmu_objset_from_ds (ds=0xfffffe01d5b64c00, osp=0xffffff91579195b0) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_objset.c:434
#10 0xffffffff80e649ef in dmu_recv_stream (drc=0xffffff9157919750, fp=<value optimized out>, voffp=0xffffff9157919740, cleanup_fd=8, action_handlep=<value optimized out>) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_send.c:1318
#11 0xffffffff80ed9226 in zfs_ioc_recv (zc=0xffffff8012cc8000) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ioctl.c:4084
#12 0xffffffff80ed44a1 in zfsdev_ioctl (dev=<value optimized out>, zcmd=<value optimized out>, arg=0xffffff8012cc8000 "zroot/backups/TBH/home", flag=<value optimized out>, td=<value optimized out>) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ioctl.c:5902
#13 0xffffffff80427e0f in devfs_ioctl_f (fp=0xfffffe00c0b360a0, com=3517471259, data=0xffffff8012cc8000, cred=<value optimized out>, td=0xfffffe00c0bec490) at /usr/src/sys/fs/devfs/devfs_vnops.c:757
#14 0xffffffff8057359b in kern_ioctl (td=0xfffffe00c0bec490, fd=<value optimized out>, com=18446741877920023696) at file.h:306
#15 0xffffffff8057331f in sys_ioctl (td=0xfffffe00c0bec490, uap=0xffffff9157919b80) at /usr/src/sys/kern/sys_generic.c:693
#16 0xffffffff807335d7 in amd64_syscall (td=0xfffffe00c0bec490, traced=0) at subr_syscall.c:134
#17 0xffffffff8071cf4b in Xfast_syscall () at exception.S:387
#18 0x00000008019dacea in ?? ()
Previous frame inner to this frame (corrupt stack?)
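For anyone who wants to poke at the dump directly, a backtrace like the one
above comes from a kgdb session along these lines (a sketch; the BORG-DTRACE
kernel build path matches this report, and the dump index will differ per
crash):

  # Open the saved crash dump against the matching debug kernel.
  kgdb /usr/obj/usr/src/sys/BORG-DTRACE/kernel.debug /var/crash/vmcore.4
  # then at the (kgdb) prompt:
  #   bt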
Current language:  auto; currently minimal
(kgdb)
------------------------------------------------------------------------
ps -axl
UID PID PPID CPU PRI NI VSZ RSS MWCHAN STAT TT TIME COMMAND
0 0 0 0 -8 0 0 0 - DLs - 0:02.51 [kernel]
0 1 0 0 20 0 9428 0 wait DLs - 0:00.10 [init]
0 2 0 0 -16 0 0 0 crypto_w DL - 0:00.00 [crypto]
0 3 0 0 -16 0 0 0 crypto_r DL - 0:00.00 [crypto returns
0 4 0 0 -16 0 0 0 - DL - 0:00.00 [fdc0]
0 5 0 0 -8 0 0 0 tx->tx_s DL - 0:00.08 [zfskern]
0 6 0 0 -16 0 0 0 waiting_ DL - 0:00.00 [sctp_iterator]
0 7 0 0 -16 0 0 0 ccb_scan DL - 0:00.00 [xpt_thrd]
0 8 0 0 -16 0 0 0 - RL - 0:00.00 [pagedaemon]
0 9 0 0 -16 0 0 0 psleep DL - 0:00.00 [vmdaemon]
0 10 0 0 -16 0 0 0 audit_wo DL - 0:00.00 [audit]
0 11 0 0 155 0 0 0 - RL - 6:41.28 [idle]
0 12 0 0 -84 0 0 0 - WL - 0:00.26 [intr]
0 13 0 0 -8 0 0 0 - DL - 0:00.33 [geom]
0 14 0 0 -16 0 0 0 - DL - 0:00.01 [yarrow]
0 15 0 0 -68 0 0 0 - DL - 0:00.01 [usb]
0 16 0 0 -20 0 0 0 VBoxIS DL - 0:00.00 [TIMER]
0 17 0 0 155 0 0 0 pgzero DL - 0:00.00 [pagezero]
0 18 0 0 -16 0 0 0 psleep DL - 0:00.00 [bufdaemon]
0 19 0 0 16 0 0 0 - RL - 0:00.00 [syncer]
0 20 0 0 -16 0 0 0 vlruwt DL - 0:00.00 [vnlru]
0 632 1 0 20 0 13196 0 select Ds - 0:00.00 [devd]
0 757 1 0 20 0 14376 0 select Ds - 0:00.04 [syslogd]
0 760 1 0 -52 0 6180 2088 - Rs - 0:00.00 [watchdogd]
0 774 1 0 20 0 16456 0 select Ds - 0:00.01 [rpcbind]
0 809 1 0 52 0 38948 0 select Ds - 0:00.02 [mountd]
0 815 1 0 52 0 36792 0 select Ds - 0:00.04 [nfsd]
0 817 815 0 52 0 12216 0 rpcsvc D - 0:00.00 [nfsd]
0 853 1 0 20 0 25196 0 select Ds - 0:00.01 [ntpd]
0 874 0 0 -16 0 0 0 sleep DL - 0:00.00 [ng_queue]
0 886 1 0 20 0 34708 0 nanslp Ds - 0:00.01 [perl]
0 890 1 0 52 0 30780 0 nanslp D - 0:00.43 [smartd]
70 899 1 0 20 0 84416 0 select Ds - 0:00.22 [postgres]
70 903 899 0 20 0 84416 0 select Ds - 0:00.00 [postgres]
70 904 899 0 20 0 84416 0 select Ds - 0:00.00 [postgres]
70 905 899 0 20 0 84416 0 select Ds - 0:00.00 [postgres]
70 906 899 0 20 0 44120 0 select Ds - 0:00.00 [postgres]
26 928 1 0 52 0 48644 0 select Ds - 0:00.00 [exim-4.80.1-2]
0 938 1 0 20 0 103216 0 - Rs - 0:00.02 [cupsd]
1028 944 1 0 155 0 40308 0 select Ds - 0:00.05 [boinc_client]
910 948 1 0 20 0 59412 0 uwait Ds - 0:00.00 [bacula-sd]
0 951 1 0 20 0 57068 0 uwait Ds - 0:00.00 [bacula-fd]
910 954 1 0 20 0 72404 0 - Rs - 0:00.00 [bacula-dir]
0 959 1 0 20 0 56264 0 select Ds - 0:00.00 [sshd]
0 969 1 0 20 0 16460 0 nanslp Ds - 0:00.00 [cron]
0 990 1 0 52 0 18576 0 select Ds - 0:00.00 [inetd]
0 1010 1 0 21 0 47584 0 wait Ds - 0:00.00 [login]
0 1011 1 0 52 0 14364 0 ttyin Ds+ - 0:00.00 [getty]
0 1012 1 0 52 0 14364 0 ttyin Ds+ - 0:00.00 [getty]
0 1013 1 0 52 0 14364 0 ttyin Ds+ - 0:00.00 [getty]
0 1014 1 0 52 0 14364 0 ttyin Ds+ - 0:00.00 [getty]
0 1015 1 0 52 0 14364 0 ttyin Ds+ - 0:00.00 [getty]
0 1016 1 0 52 0 14364 0 ttyin Ds+ - 0:00.00 [getty]
0 1017 1 0 52 0 14364 0 ttyin Ds+ - 0:00.00 [getty]
0 1020 1010 0 20 0 16928 0 wait D - 0:00.00 [sh]
0 1022 1020 0 20 0 47640 0 select D+ - 0:00.00 [ssh]
1028 1024 944 0 155 19 54272 0 - RN - 0:17.51 [wcg_hpf2_roset
1028 1025 944 0 155 19 54272 0 - RN - 0:18.11 [wcg_hpf2_roset
1028 1026 944 0 155 19 129292 0 - RN - 0:03.57 [wcgrid_faah_7.
1028 1027 944 0 155 19 1892 0 - RN - 0:00.03 [wcgrid_cep2_6.
1028 1028 944 0 155 19 74216 0 m DN - 0:17.23 [setiathome-6.1
1028 1029 944 0 155 19 74216 0 - RN - 0:19.34 [setiathome-6.1
1028 1030 944 0 155 19 74216 0 m DN - 0:16.21 [setiathome-6.1
1028 1031 944 0 155 19 129292 0 i DN - 0:00.00 [wcgrid_faah_7.
1028 1032 944 0 155 19 74216 0 m DN - 0:17.25 [setiathome-6.1
1028 1033 1027 0 155 19 1892 0 i DN - 0:00.00 [wcgrid_cep2_6.
1028 1034 1033 0 155 19 1892 0 i DN - 0:00.00 [wcgrid_cep2_6.
1028 1035 1024 0 155 19 54272 0 6 DN - 0:00.00 [wcg_hpf2_roset
1028 1036 1035 0 155 19 54272 0 - RN - 0:00.00 [wcg_hpf2_roset
1028 1037 1025 0 155 19 54272 0 6 DN - 0:00.00 [wcg_hpf2_roset
1028 1038 1037 0 155 19 54272 0 6 DN - 0:00.00 [wcg_hpf2_roset
0 1040 959 0 20 0 81532 0 select Ds - 0:00.00 [sshd]
0 1042 1040 0 27 0 20304 0 pause Ds - 0:00.00 [csh]
0 1044 1042 0 21 0 39968 0 - R - 0:00.00 [zfs]
------------------------------------------------------------------------
vmstat -s
339379 cpu context switches
36990 device interrupts
6821 software interrupts
362408 traps
34071033 system calls
21 kernel threads created
834 fork() calls
182 vfork() calls
7 rfork() calls
0 swap pager pageins
0 swap pager pages paged in
0 swap pager pageouts
0 swap pager pages paged out
1567 vnode pager pageins
11054 vnode pager pages paged in
8 vnode pager pageouts
16 vnode pager pages paged out
0 page daemon wakeups
0 pages examined by the page daemon
11 pages reactivated
33160 copy-on-write faults
209 copy-on-write optimized faults
284971 zero fill pages zeroed
0 zero fill pages prezeroed
107 intransit blocking page faults
405648 total VM faults taken
3427 page faults requiring I/O
0 pages affected by kernel thread creation
32298 pages affected by fork()
6408 pages affected by vfork()
22897 pages affected by rfork()
0 pages cached
407800 pages freed
0 pages freed by daemon
0 pages freed by exiting processes
67444 pages active
20205 pages inactive
8 pages in VM cache
860785 pages wired down
15199607 pages free
4096 bytes per page
247996 total name lookups
cache hits (93% pos + 1% neg) system 0% per-directory
deletions 0%, falsehits 0%, toolong 0%
------------------------------------------------------------------------
vmstat -m
Type InUse MemUse HighUse Requests Size(s)
DEVFS3 207 52K - 226 256
filedesc 78 251K - 1058 16,32,64,128,2048,4096
filedesc_to_leader 11 1K - 11 64
sigio 1 1K - 1 64
kdtrace 500 112K - 1633 64,256
kenv 84 11K - 112 16,32,64,128
kqueue 2 3K - 52 256,2048
proc-args 46 4K - 373 16,32,64,128,256
DEVFS1 181 91K - 199 512
hhook 2 1K - 2 128
ithread 119 21K - 119 32,128,256
KTRACE 100 13K - 100 128
DEVFS 40 1K - 41 16,128
linker 351 171K - 477 16,32,64,128,256,512,1024,2048,4096
DEVFSP 1 1K - 11 64
lockf 50 6K - 86 64,128
loginclass 2 1K - 6 64
cache 1 1K - 1 32
devbuf 17037 34759K - 17243 16,32,64,128,256,512,1024,2048,4096
temp 56 35K - 10350 16,32,64,128,256,512,1024,2048,4096
ip6ndp 5 1K - 6 64,128
module 293 37K - 293 128
mtx_pool 2 16K - 2
osd 8 1K - 20 16,32,64,128
pmchooks 1 1K - 1 128
pgrp 36 5K - 51 128
session 34 5K - 45 128
proc 2 256K - 2
subproc 231 350K - 1202 512,4096
cred 108 17K - 8456 64,256
plimit 19 5K - 202 256
uidinfo 6 33K - 9 128
NFS fh 1 1K - 7 16
sysctl 0 0K - 380 16,32,64
sysctloid 6406 317K - 6553 16,32,64,128
sysctltmp 0 0K - 338 16,32,64,128,256,4096
tidhash 1 256K - 1
callout 9 3208K - 9
umtx 1062 133K - 1062 128
p1003.1b 1 1K - 1 16
SWAP 12 19681K - 12 64
bus 900 86K - 4751 16,32,64,128,256,512,1024
bus-sc 135 290K - 1989 16,32,64,128,256,512,1024,2048,4096
devstat 24 49K - 24 32,4096
eventhandler 86 7K - 86 64,128
kobj 169 676K - 681 4096
Per-cpu 1 1K - 1 32
rman 292 33K - 686 16,32,128
sbuf 0 0K - 2785 16,32,64,128,256,512,1024,2048,4096
stack 0 0K - 2 256
taskqueue 105 16K - 135 16,32,64,256,1024
Unitno 19 2K - 227 32,64
ioctlops 1 8K - 13701 16,32,64,128,256,512,1024,2048
select 34 5K - 34 128 iov 0 0K - 1301 16,64,128,256,512 msg 4 30K - 4 2048,4096 sem 4 106K - 4 2048,4096 shm 11 40K - 22 2048 tty 19 19K - 21 1024,2048 mbuf_tag 0 0K - 58 32,128 shmfd 1 8K - 1 soname 7 1K - 697 16,32,128 pcb 38 8341K - 95 16,32,128,1024,2048 vfscache 1 16384K - 1 vfs_hash 1 8192K - 1 vnodes 1 1K - 1 256 mount 397 20K - 1897 16,32,64,128,256,512 vnodemarker 0 0K - 202 512 BPF 3 1K - 3 128 ifnet 4 7K - 4 128,2048 ifaddr 48 15K - 48 32,64,128,256,512,2048,4096 ether_multi 40 3K - 46 16,32,64 clone 6 1K - 6 128 arpcom 2 1K - 2 16 lltable 12 5K - 12 256,512 newnfsmnt 1 1K - 1 1024 pfs_nodes 21 6K - 21 256 pfs_vncache 76 5K - 81 64 GEOM 344 60K - 2226 16,32,64,128,256,512,1024,2048 routetbl 35 5K - 240 32,64,128,256,512 igmp 3 1K - 3 256 in_multi 2 1K - 2 256 ppbusdev 2 1K - 2 256 entropy 1024 64K - 1024 64 sctp_a_it 0 0K - 3 16 sctp_vrf 1 1K - 1 64 sctp_ifa 5 1K - 5 128 sctp_ifn 2 1K - 2 128 sctp_iter 0 0K - 3 256 hostcache 1 28K - 1 syncache 1 100K - 1 in6_mfilter 1 1K - 1 1024 in6_multi 22 3K - 22 32,256 ip6_moptions 2 1K - 2 32,256 CAM DEV 15 30K - 24 2048 UART 6 5K - 6 16,1024 mld 3 1K - 3 128 ata_pci 1 1K - 1 64 CAM CCB 13 26K - 101 2048 rpc 22 6K - 28 32,64,128,256,512,1024 audit_evclass 188 6K - 227 32 vm_pgdata 7 8193K - 7 128 UMAHash 1 1K - 1 512 raid_data 0 0K - 336 32,128,256 acpiintr 1 1K - 1 64 acpica 1830 187K - 57239 16,32,64,128,256,512,1024,2048 CAM path 22 1K - 68 32 CAM periph 16 4K - 41 16,32,64,128,256 acpitask 1 8K - 1 acpisem 21 3K - 21 128 CAM queue 39 2K - 133 16,32,64 memdesc 1 4K - 1 4096 acpidev 36 3K - 36 64 atkbddev 2 1K - 2 64 CAM dev queue 8 1K - 8 128 USB 38 41K - 40 16,32,64,128,256,512,1024,4096 md_nvidia_data 0 0K - 54 512 USBdev 30 6K - 40 64,128,512 md_sii_data 0 0K - 54 512 CAM SIM 8 2K - 8 256 CAM XPT 58 4K - 204 16,32,64,128,1024 kbdmux 7 18K - 7 16,512,1024,2048 apmdev 1 1K - 1 128 madt_table 0 0K - 1 4096 LED 4 1K - 4 16,128 isadev 5 1K - 5 128 io_apic 2 4K - 2 2048 MCA 8 1K - 8 128 pci_link 16 2K - 16 32,64,128 msi 2 1K - 2 128 nexusdev 4 1K - 4 16 acpi_perf 8 1K - 8 64 scsi_cd 0 0K - 11 16 cdev 8 2K - 8 256 solaris 62374 344297K - 749641 16,32,64,128,256,512,1024,2048,4096 linux 36 2K - 36 32,64 cpuctl 1 1K - 9 64,4096 crypto 1 1K - 1 512 xform 0 0K - 206 16,32 cyclic 32 3K - 32 16,64,128 kstat_data 5 1K - 5 64 fbt 1 256K - 1 iprtheap 23 56K - 23 32,64,128,256,2048 fdesc_mount 1 1K - 1 16 netgraph_node 4 1K - 4 128,256 netgraph 2 1K - 2 64 ------------------------------------------------------------------------ vmstat -z ITEM SIZE LIMIT USED FREE REQ FAIL SLEEP UMA Kegs: 208, 0, 254, 1, 254, 0, 0 UMA Zones: 1408, 0, 254, 0, 254, 0, 0 UMA Slabs: 568, 0, 16533, 120, 31690, 0, 0 UMA RCntSlabs: 568, 0, 1217, 1, 1217, 0, 0 UMA Hash: 256, 0, 82, 8, 83, 0, 0 16 Bucket: 152, 0, 391, 9, 391, 0, 0 32 Bucket: 280, 0, 317, 5, 317, 15, 0 64 Bucket: 536, 0, 358, 6, 358, 84, 0 128 Bucket: 1048, 0, 1365, 0, 1365,4185, 0 VM OBJECT: 240, 0, 3359, 257, 14724, 0, 0 RADIX NODE: 144, 16148184, 20410,16127826, 76441, 0, 0 MAP: 232, 0, 8, 24, 8, 0, 0 KMAP ENTRY: 120, 4230725, 255, 1233, 48042, 0, 0 MAP ENTRY: 120, 0, 1670, 407, 34695, 0, 0 fakepg: 104, 0, 0, 0, 0, 0, 0 mt_zone: 4112, 0, 283, 22, 283, 0, 0 16: 16, 0, 6, 1170, 8417, 0, 0 16: 16, 0, 3274, 758, 56874, 0, 0 16: 16, 0, 2, 166, 2, 0, 0 16: 16, 0, 87, 1089, 23969, 0, 0 16: 16, 0, 2339, 1021, 2911, 0, 0 16: 16, 0, 550, 962, 3590, 0, 0 16: 16, 0, 8, 832, 41, 0, 0 16: 16, 0, 62, 946, 270, 0, 0 32: 32, 0, 31, 878, 4358, 0, 0 32: 32, 0, 2152, 1989, 31736, 0, 0 32: 32, 0, 29, 577, 29, 0, 0 32: 32, 0, 
122, 787, 4245, 0, 0 32: 32, 0, 2388, 945, 2665, 0, 0 32: 32, 0, 241, 971, 2379, 0, 0 32: 32, 0, 71, 838, 280, 0, 0 32: 32, 0, 200, 406, 329, 0, 0 64: 64, 0, 84, 476, 810, 0, 0 64: 64, 0, 24198, 3130, 222936, 0, 0 64: 64, 0, 1107, 517, 2080, 0, 0 64: 64, 0, 729, 559, 25092, 0, 0 64: 64, 0, 324, 460, 358, 0, 0 64: 64, 0, 8983, 873, 16224, 0, 0 64: 64, 0, 33, 135, 33, 0, 0 64: 64, 0, 65, 495, 237, 0, 0 128: 128, 0, 34, 198, 80, 0, 0 128: 128, 0, 9778, 1300, 81357, 0, 0 128: 128, 0, 85, 234, 100, 0, 0 128: 128, 0, 1220, 404, 2193, 0, 0 128: 128, 0, 2953, 585, 3025, 0, 0 128: 128, 0, 567, 361, 1209, 0, 0 128: 128, 0, 16, 100, 16, 0, 0 128: 128, 0, 80, 239, 427, 0, 0 256: 256, 0, 10, 20, 12, 0, 0 256: 256, 0, 2880, 3525, 105923, 0, 0 256: 256, 0, 490, 80, 783, 0, 0 256: 256, 0, 240, 75, 1045, 0, 0 256: 256, 0, 5, 70, 7, 0, 0 256: 256, 0, 302, 208, 4700, 0, 0 256: 256, 0, 22, 23, 22, 0, 0 256: 256, 0, 129, 216, 1507, 0, 0 512: 512, 0, 169, 41, 210, 0, 0 512: 512, 0, 10146, 347, 128131, 0, 0 512: 512, 0, 4, 59, 76, 0, 0 512: 512, 0, 217, 77, 1180, 0, 0 512: 512, 0, 12, 44, 120, 0, 0 512: 512, 0, 137, 136, 357, 0, 0 512: 512, 0, 0, 0, 0, 0, 0 512: 512, 0, 0, 252, 877, 0, 0 1024: 1024, 0, 2, 66, 84, 0, 0 1024: 1024, 0, 325, 187, 2938, 0, 0 1024: 1024, 0, 4, 4, 4, 0, 0 1024: 1024, 0, 6, 14, 1917, 0, 0 1024: 1024, 0, 19, 9, 19, 0, 0 1024: 1024, 0, 15, 321, 2674, 0, 0 1024: 1024, 0, 1, 7, 1, 0, 0 1024: 1024, 0, 13, 15, 164, 0, 0 2048: 2048, 0, 38, 96, 326, 0, 0 2048: 2048, 0, 411, 237, 4477, 0, 0 2048: 2048, 0, 0, 0, 0, 0, 0 2048: 2048, 0, 81, 133, 1080, 0, 0 2048: 2048, 0, 1, 7, 3, 0, 0 2048: 2048, 0, 23, 95, 170, 0, 0 2048: 2048, 0, 0, 0, 0, 0, 0 2048: 2048, 0, 0, 8, 65, 0, 0 4096: 4096, 0, 67, 100, 1038, 0, 0 4096: 4096, 0, 6394, 4781, 99018, 0, 0 4096: 4096, 0, 169, 100, 681, 0, 0 4096: 4096, 0, 24, 41, 28, 0, 0 4096: 4096, 0, 10, 44, 12, 0, 0 4096: 4096, 0, 27, 63, 7787, 0, 0 4096: 4096, 0, 0, 0, 0, 0, 0 4096: 4096, 0, 0, 0, 0, 0, 0 Files: 80, 0, 192, 393, 17327, 0, 0 TURNSTILE: 136, 0, 532, 108, 532, 0, 0 rl_entry: 40, 0, 75, 681, 75, 0, 0 umtx pi: 96, 0, 0, 0, 0, 0, 0 MAC labels: 40, 0, 0, 0, 0, 0, 0 PROC: 1208, 0, 73, 89, 1044, 0, 0 THREAD: 1168, 0, 425, 106, 587, 0, 0 SLEEPQUEUE: 80, 0, 532, 164, 532, 0, 0 VMSPACE: 392, 0, 46, 134, 1022, 0, 0 cpuset: 72, 0, 274, 326, 425, 0, 0 cyclic_id_cache: 64, 0, 0, 0, 0, 0, 0 audit_record: 1240, 0, 0, 0, 0, 0, 0 mbuf_packet: 256, 26520435, 1023, 1163, 1760, 0, 0 mbuf: 256, 26520435, 7, 1027, 1874, 0, 0 mbuf_cluster: 2048, 4143818, 2176, 230, 2176, 0, 0 mbuf_jumbo_page: 4096, 2071909, 0, 14, 7, 0, 0 mbuf_jumbo_9k: 9216, 1841694, 0, 0, 0, 0, 0 mbuf_jumbo_16k: 16384, 1381272, 0, 0, 0, 0, 0 mbuf_ext_refcnt: 4, 0, 0, 0, 0, 0, 0 dtrace_state_cache: 4096, 0, 0, 0, 0, 0, 0 g_bio: 248, 0, 2, 538, 109938, 0, 0 ttyinq: 160, 0, 120, 144, 255, 0, 0 ttyoutq: 256, 0, 64, 101, 136, 0, 0 ata_request: 336, 0, 0, 143, 77, 0, 0 ata_composite: 336, 0, 0, 0, 0, 0, 0 vtnet_tx_hdr: 24, 0, 0, 0, 0, 0, 0 cryptop: 88, 0, 0, 0, 0, 0, 0 cryptodesc: 72, 0, 0, 0, 0, 0, 0 FPU_save_area: 512, 0, 0, 0, 0, 0, 0 taskq_zone: 48, 0, 0, 504, 92, 0, 0 VNODE: 472, 0, 9393, 279, 9518, 0, 0 VNODEPOLL: 112, 0, 0, 0, 0, 0, 0 NAMEI: 1024, 0, 0, 128, 53101, 0, 0 S VFS Cache: 108, 0, 9441, 360, 11402, 0, 0 STS VFS Cache: 148, 0, 0, 0, 0, 0, 0 L VFS Cache: 328, 0, 84, 132, 112, 0, 0 LTS VFS Cache: 368, 0, 0, 0, 0, 0, 0 NCLNODE: 528, 0, 1, 13, 1, 0, 0 pipe: 744, 0, 12, 93, 553, 0, 0 space_seg_cache: 64, 0, 9593, 80007, 278530, 0, 0 zio_cache: 944, 0, 5, 7883, 318981, 0, 0 zio_link_cache: 48, 0, 3, 8709, 
303395, 0, 0 zio_buf_512: 512, 0, 0, 0, 0, 0, 0 zio_data_buf_512: 512, 0, 0, 0, 0, 0, 0 zio_buf_1024: 1024, 0, 0, 0, 0, 0, 0 zio_data_buf_1024: 1024, 0, 0, 0, 0, 0, 0 zio_buf_1536: 1536, 0, 0, 0, 0, 0, 0 zio_data_buf_1536: 1536, 0, 0, 0, 0, 0, 0 zio_buf_2048: 2048, 0, 0, 0, 0, 0, 0 zio_data_buf_2048: 2048, 0, 0, 0, 0, 0, 0 zio_buf_2560: 2560, 0, 0, 0, 0, 0, 0 zio_data_buf_2560: 2560, 0, 0, 0, 0, 0, 0 zio_buf_3072: 3072, 0, 0, 0, 0, 0, 0 zio_data_buf_3072: 3072, 0, 0, 0, 0, 0, 0 zio_buf_3584: 3584, 0, 0, 0, 0, 0, 0 zio_data_buf_3584: 3584, 0, 0, 0, 0, 0, 0 zio_buf_4096: 4096, 0, 0, 0, 0, 0, 0 zio_data_buf_4096: 4096, 0, 0, 0, 0, 0, 0 zio_buf_5120: 5120, 0, 0, 0, 0, 0, 0 zio_data_buf_5120: 5120, 0, 0, 0, 0, 0, 0 zio_buf_6144: 6144, 0, 0, 0, 0, 0, 0 zio_data_buf_6144: 6144, 0, 0, 0, 0, 0, 0 zio_buf_7168: 7168, 0, 0, 0, 0, 0, 0 zio_data_buf_7168: 7168, 0, 0, 0, 0, 0, 0 zio_buf_8192: 8192, 0, 0, 0, 0, 0, 0 zio_data_buf_8192: 8192, 0, 0, 0, 0, 0, 0 zio_buf_10240: 10240, 0, 0, 0, 0, 0, 0 zio_data_buf_10240: 10240, 0, 0, 0, 0, 0, 0 zio_buf_12288: 12288, 0, 0, 0, 0, 0, 0 zio_data_buf_12288: 12288, 0, 0, 0, 0, 0, 0 zio_buf_14336: 14336, 0, 0, 0, 0, 0, 0 zio_data_buf_14336: 14336, 0, 0, 0, 0, 0, 0 zio_buf_16384: 16384, 0, 0, 0, 0, 0, 0 zio_data_buf_16384: 16384, 0, 0, 0, 0, 0, 0 zio_buf_20480: 20480, 0, 0, 0, 0, 0, 0 zio_data_buf_20480: 20480, 0, 0, 0, 0, 0, 0 zio_buf_24576: 24576, 0, 0, 0, 0, 0, 0 zio_data_buf_24576: 24576, 0, 0, 0, 0, 0, 0 zio_buf_28672: 28672, 0, 0, 0, 0, 0, 0 zio_data_buf_28672: 28672, 0, 0, 0, 0, 0, 0 zio_buf_32768: 32768, 0, 0, 0, 0, 0, 0 zio_data_buf_32768: 32768, 0, 0, 0, 0, 0, 0 zio_buf_36864: 36864, 0, 0, 0, 0, 0, 0 zio_data_buf_36864: 36864, 0, 0, 0, 0, 0, 0 zio_buf_40960: 40960, 0, 0, 0, 0, 0, 0 zio_data_buf_40960: 40960, 0, 0, 0, 0, 0, 0 zio_buf_45056: 45056, 0, 0, 0, 0, 0, 0 zio_data_buf_45056: 45056, 0, 0, 0, 0, 0, 0 zio_buf_49152: 49152, 0, 0, 0, 0, 0, 0 zio_data_buf_49152: 49152, 0, 0, 0, 0, 0, 0 zio_buf_53248: 53248, 0, 0, 0, 0, 0, 0 zio_data_buf_53248: 53248, 0, 0, 0, 0, 0, 0 zio_buf_57344: 57344, 0, 0, 0, 0, 0, 0 zio_data_buf_57344: 57344, 0, 0, 0, 0, 0, 0 zio_buf_61440: 61440, 0, 0, 0, 0, 0, 0 zio_data_buf_61440: 61440, 0, 0, 0, 0, 0, 0 zio_buf_65536: 65536, 0, 0, 0, 0, 0, 0 zio_data_buf_65536: 65536, 0, 0, 0, 0, 0, 0 zio_buf_69632: 69632, 0, 0, 0, 0, 0, 0 zio_data_buf_69632: 69632, 0, 0, 0, 0, 0, 0 zio_buf_73728: 73728, 0, 0, 0, 0, 0, 0 zio_data_buf_73728: 73728, 0, 0, 0, 0, 0, 0 zio_buf_77824: 77824, 0, 0, 0, 0, 0, 0 zio_data_buf_77824: 77824, 0, 0, 0, 0, 0, 0 zio_buf_81920: 81920, 0, 0, 0, 0, 0, 0 zio_data_buf_81920: 81920, 0, 0, 0, 0, 0, 0 zio_buf_86016: 86016, 0, 0, 0, 0, 0, 0 zio_data_buf_86016: 86016, 0, 0, 0, 0, 0, 0 zio_buf_90112: 90112, 0, 0, 0, 0, 0, 0 zio_data_buf_90112: 90112, 0, 0, 0, 0, 0, 0 zio_buf_94208: 94208, 0, 0, 0, 0, 0, 0 zio_data_buf_94208: 94208, 0, 0, 0, 0, 0, 0 zio_buf_98304: 98304, 0, 0, 0, 0, 0, 0 zio_data_buf_98304: 98304, 0, 0, 0, 0, 0, 0 zio_buf_102400: 102400, 0, 0, 0, 0, 0, 0 zio_data_buf_102400: 102400, 0, 0, 0, 0, 0, 0 zio_buf_106496: 106496, 0, 0, 0, 0, 0, 0 zio_data_buf_106496: 106496, 0, 0, 0, 0, 0, 0 zio_buf_110592: 110592, 0, 0, 0, 0, 0, 0 zio_data_buf_110592: 110592, 0, 0, 0, 0, 0, 0 zio_buf_114688: 114688, 0, 0, 0, 0, 0, 0 zio_data_buf_114688: 114688, 0, 0, 0, 0, 0, 0 zio_buf_118784: 118784, 0, 0, 0, 0, 0, 0 zio_data_buf_118784: 118784, 0, 0, 0, 0, 0, 0 zio_buf_122880: 122880, 0, 0, 0, 0, 0, 0 zio_data_buf_122880: 122880, 0, 0, 0, 0, 0, 0 zio_buf_126976: 126976, 0, 0, 0, 0, 0, 0 zio_data_buf_126976: 126976, 0, 0, 0, 0, 
0, 0 zio_buf_131072: 131072, 0, 0, 0, 0, 0, 0 zio_data_buf_131072: 131072, 0, 0, 0, 0, 0, 0 sa_cache: 80, 0, 9194, 211, 9286, 0, 0 dnode_t: 856, 0, 9829, 335, 11053, 0, 0 dmu_buf_impl_t: 224, 0, 17993, 384, 21797, 0, 0 arc_buf_hdr_t: 216, 0, 11850, 264, 12999, 0, 0 arc_buf_t: 72, 0, 10407, 443, 14007, 0, 0 zil_lwb_cache: 192, 0, 4, 176, 56, 0, 0 zfs_znode_cache: 368, 0, 9194, 226, 9286, 0, 0 Mountpoints: 816, 0, 30, 25, 30, 0, 0 ksiginfo: 112, 0, 313, 842, 3205, 0, 0 itimer: 352, 0, 1, 21, 1, 0, 0 KNOTE: 128, 0, 6, 226, 67, 0, 0 socket: 680, 2071914, 57, 93, 480, 0, 0 unpcb: 240, 2071920, 11, 165, 114, 0, 0 ipq: 56, 129528, 0, 0, 0, 0, 0 udp_inpcb: 392, 2071920, 21, 99, 324, 0, 0 udpcb: 16, 2071944, 21, 1155, 324, 0, 0 tcp_inpcb: 392, 2071920, 24, 96, 36, 0, 0 tcpcb: 1016, 2071916, 24, 44, 36, 0, 0 tcptw: 72, 27800, 0, 100, 2, 0, 0 syncache: 152, 15375, 0, 50, 1, 0, 0 hostcache: 136, 15372, 0, 0, 0, 0, 0 tcpreass: 40, 259056, 0, 0, 0, 0, 0 sackhole: 32, 0, 0, 0, 0, 0, 0 sctp_ep: 1408, 2071914, 0, 0, 0, 0, 0 sctp_asoc: 2344, 40000, 0, 0, 0, 0, 0 sctp_laddr: 48, 80064, 0, 288, 4, 0, 0 sctp_raddr: 728, 80000, 0, 0, 0, 0, 0 sctp_chunk: 136, 400008, 0, 0, 0, 0, 0 sctp_readq: 104, 400032, 0, 0, 0, 0, 0 sctp_stream_msg_out: 104, 400032, 0, 0, 0, 0, 0 sctp_asconf: 40, 400008, 0, 0, 0, 0, 0 sctp_asconf_ack: 48, 400032, 0, 0, 0, 0, 0 ripcb: 392, 2071920, 0, 0, 0, 0, 0 rtentry: 200, 0, 17, 135, 17, 0, 0 selfd: 56, 0, 83, 484, 5931, 0, 0 SWAPMETA: 288, 8074092, 0, 0, 0, 0, 0 NetGraph items: 72, 4118, 0, 0, 0, 0, 0 NetGraph data items: 72, 522, 0, 0, 0, 0, 0 ------------------------------------------------------------------------ vmstat -i interrupt total rate irq1: atkbd0 2 0 irq6: fdc0 22 0 irq14: ata0 131 0 irq17: uhci0 ehci0 371 1 irq19: uhci1 ahci0+ 35224 136 cpu0:timer 21581 83 irq256: em0 1240 4 cpu7:timer 18860 72 cpu1:timer 17194 66 cpu2:timer 22448 86 cpu3:timer 17242 66 cpu4:timer 21918 84 cpu6:timer 19999 77 cpu5:timer 20715 79 Total 196947 760 ------------------------------------------------------------------------ pstat -T 192/2071909 files 0M/147455M swap space ------------------------------------------------------------------------ pstat -s Device 512-blocks Used Avail Capacity /dev/gpt/swap0 50331392 0 50331392 0% /dev/gpt/swap1 50331392 0 50331392 0% /dev/gpt/swap2 50331392 0 50331392 0% /dev/gpt/swap3 50331392 0 50331392 0% /dev/gpt/swap4 50331392 0 50331392 0% /dev/gpt/swap5 50331392 0 50331392 0% Total 301988352 0 301988352 0% ------------------------------------------------------------------------ iostat iostat: kvm_read(_tk_nin): invalid address (0x0) iostat: disabling TTY statistics ada0 ada1 ada2 cpu KB/t tps MB/s KB/t tps MB/s KB/t tps MB/s us ni sy in id 14.49 24 0.33 14.56 23 0.32 14.28 24 0.33 0 10 2 0 88 ------------------------------------------------------------------------ ipcs -a Message Queues: T ID KEY MODE OWNER GROUP CREATOR CGROUP CBYTES QNUM QBYTES LSPID LRPID STIME RTIME CTIME Shared Memory: T ID KEY MODE OWNER GROUP CREATOR CGROUP NATTCH SEGSZ CPID LPID ATIME DTIME CTIME m 65536 5432001 --rw------- pgsql pgsql pgsql pgsql 4 41263104 899 899 8:44:47 8:45:50 8:44:47 m 65537 28975112 --rw-rw-rw- boinc nobody boinc nobody 2 8192 944 1028 8:45:34 8:45:34 8:45:33 m 65538 28975113 --rw-rw-rw- boinc nobody boinc nobody 2 8192 944 1029 8:45:34 8:45:34 8:45:33 m 65539 28975114 --rw-rw-rw- boinc nobody boinc nobody 2 8192 944 1030 8:45:34 8:45:34 8:45:33 m 65540 28975115 --rw-rw-rw- boinc nobody boinc nobody 2 8192 944 1032 8:45:34 8:45:34 8:45:34 Semaphores: T ID KEY 
MODE OWNER GROUP CREATOR CGROUP NSEMS OTIME CTIME s 65536 5432001 --rw------- pgsql pgsql pgsql pgsql 17 8:44:47 8:44:47 s 65537 5432002 --rw------- pgsql pgsql pgsql pgsql 17 8:44:47 8:44:47 s 65538 5432003 --rw------- pgsql pgsql pgsql pgsql 17 8:44:47 8:44:47 s 65539 5432004 --rw------- pgsql pgsql pgsql pgsql 17 8:44:47 8:44:47 s 65540 5432005 --rw------- pgsql pgsql pgsql pgsql 17 8:44:47 8:44:47 s 65541 5432006 --rw------- pgsql pgsql pgsql pgsql 17 8:44:47 8:44:47 s 65542 5432007 --rw------- pgsql pgsql pgsql pgsql 17 8:44:47 8:44:47 ------------------------------------------------------------------------ ipcs -T msginfo: msgmax: 16384 (max characters in a message) msgmni: 40 (# of message queues) msgmnb: 2048 (max characters in a message queue) msgtql: 40 (max # of messages in system) msgssz: 8 (size of a message segment) msgseg: 2048 (# of message segments in system) shminfo: shmmax: 536870912 (max shared memory segment size) shmmin: 1 (min shared memory segment size) shmmni: 192 (max number of shared memory identifiers) shmseg: 128 (max shared memory segments per process) shmall: 131072 (max amount of shared memory in pages) seminfo: semmni: 50 (# of semaphore identifiers) semmns: 340 (# of semaphores in system) semmnu: 150 (# of undo structures in system) semmsl: 340 (max # of semaphores per id) semopm: 100 (max # of operations per semop call) semume: 50 (max # of undo entries per process) semusz: 632 (size in bytes of undo structure) semvmx: 32767 (semaphore maximum value) semaem: 16384 (adjust on exit max value) ------------------------------------------------------------------------ nfsstat Client Info: Rpc Counts: Getattr Setattr Lookup Readlink Read Write Create Remove 2 0 0 0 0 0 0 0 Rename Link Symlink Mkdir Rmdir Readdir RdirPlus Access 0 0 0 0 0 0 0 0 Mknod Fsstat Fsinfo PathConf Commit 0 1 1 0 0 Rpc Info: TimedOut Invalid X Replies Retries Requests 0 0 0 0 4 Cache Info: Attr Hits Misses Lkup Hits Misses BioR Hits Misses BioW Hits Misses 0 0 0 0 0 0 0 0 BioRLHits Misses BioD Hits Misses DirE Hits Misses Accs Hits Misses 0 0 0 0 0 0 0 0 Server Info: Getattr Setattr Lookup Readlink Read Write Create Remove 0 0 0 0 0 0 0 0 Rename Link Symlink Mkdir Rmdir Readdir RdirPlus Access 0 0 0 0 0 0 0 0 Mknod Fsstat Fsinfo PathConf Commit 0 0 0 0 0 Server Ret-Failed 0 Server Faults 0 Server Cache Stats: Inprog Idem Non-idem Misses 0 0 0 0 Server Write Gathering: WriteOps WriteRPC Opsaved 0 0 0 ------------------------------------------------------------------------ netstat -s tcp: 541 packets sent 224 data packets (18558 bytes) 0 data packets (0 bytes) retransmitted 0 data packets unnecessarily retransmitted 0 resends initiated by MTU discovery 308 ack-only packets (123 delayed) 0 URG only packets 0 window probe packets 0 window update packets 9 control packets 541 packets received 208 acks (for 17554 bytes) 3 duplicate acks 0 acks for unsent data 511 packets (432616 bytes) received in-sequence 0 completely duplicate packets (0 bytes) 0 old duplicate packets 0 packets with some dup. 
data (0 bytes duped) 0 out-of-order packets (0 bytes) 0 packets (0 bytes) of data after window 0 window probes 1 window update packet 0 packets received after close 0 discarded for bad checksums 0 discarded for bad header offset fields 0 discarded because packet too short 0 discarded due to memory problems 6 connection requests 1 connection accept 0 bad connection attempts 0 listen queue overflows 0 ignored RSTs in the windows 7 connections established (including accepts) 12 connections closed (including 0 drops) 0 connections updated cached RTT on close 0 connections updated cached RTT variance on close 0 connections updated cached ssthresh on close 0 embryonic connections dropped 208 segments updated rtt (of 178 attempts) 0 retransmit timeouts 0 connections dropped by rexmit timeout 0 persist timeouts 0 connections dropped by persist timeout 0 Connections (fin_wait_2) dropped because of timeout 0 keepalive timeouts 0 keepalive probes sent 0 connections dropped by keepalive 7 correct ACK header predictions 318 correct data packet header predictions 1 syncache entry added 0 retransmitted 0 dupsyn 0 dropped 1 completed 0 bucket overflow 0 cache overflow 0 reset 0 stale 0 aborted 0 badack 0 unreach 0 zone failures 1 cookie sent 0 cookies received 0 hostcache entries added 0 bucket overflow 0 SACK recovery episodes 0 segment rexmits in SACK recovery episodes 0 byte rexmits in SACK recovery episodes 0 SACK options (SACK blocks) received 0 SACK options (SACK blocks) sent 0 SACK scoreboard overflow 0 packets with ECN CE bit set 0 packets with ECN ECT(0) bit set 0 packets with ECN ECT(1) bit set 0 successful ECN handshakes 0 times ECN reduced the congestion window udp: 120 datagrams received 0 with incomplete header 0 with bad data length field 0 with bad checksum 0 with no checksum 0 dropped due to no socket 26 broadcast/multicast datagrams undelivered 0 dropped due to full socket buffers 0 not for hashed pcb 94 delivered 95 datagrams output 0 times multicast source filter matched ip: 612 total packets received 0 bad header checksums 0 with size smaller than minimum 0 with data size < data length 0 with ip length > max ip packet size 0 with header length < data size 0 with data length < header length 0 with bad options 0 with incorrect version number 0 fragments received 0 fragments dropped (dup or out of space) 0 fragments dropped after timeout 0 packets reassembled ok 610 packets for this host 2 packets for unknown/unsupported protocol 0 packets forwarded (0 packets fast forwarded) 0 packets not forwardable 0 packets received for unknown multicast group 0 redirects sent 585 packets sent from this host 0 packets sent with fabricated ip header 0 output packets dropped due to no bufs, etc. 
0 output packets discarded due to no route 0 output datagrams fragmented 0 fragments created 0 datagrams that can't be fragmented 0 tunneling packets that can't find gif 0 datagrams with bad address in header icmp: 0 calls to icmp_error 0 errors not generated in response to an icmp message 0 messages with bad code fields 0 messages less than the minimum length 0 messages with bad checksum 0 messages with bad length 0 multicast echo requests ignored 0 multicast timestamp requests ignored 0 message responses generated 0 invalid return addresses 0 no return routes igmp: 2 messages received 0 messages received with too few bytes 0 messages received with wrong TTL 0 messages received with bad checksum 1 V1/V2 membership query received 0 V3 membership queries received 0 membership queries received with invalid field(s) 1 general query received 0 group queries received 0 group-source queries received 0 group-source queries dropped 1 membership report received 0 membership reports received with invalid field(s) 0 membership reports received for groups to which we belong 0 V3 reports received without Router Alert 0 membership reports sent arp: 4 ARP requests sent 3 ARP replies sent 40 ARP requests received 2 ARP replies received 42 ARP packets received 1 total packet dropped due to no ARP entry 0 ARP entrys timed out 0 Duplicate IPs seen ip6: 51 total packets received 0 with size smaller than minimum 0 with data size < data length 0 with bad options 0 with incorrect version number 0 fragments received 0 fragments dropped (dup or out of space) 0 fragments dropped after timeout 0 fragments that exceeded limit 0 packets reassembled ok 51 packets for this host 0 packets forwarded 0 packets not forwardable 0 redirects sent 58 packets sent from this host 0 packets sent with fabricated ip header 0 output packets dropped due to no bufs, etc. 
9 output packets discarded due to no route 0 output datagrams fragmented 0 fragments created 0 datagrams that can't be fragmented 0 packets that violated scope rules 0 multicast packets which we don't join Input histogram: UDP: 51 Mbuf statistics: 27 one mbuf 24 one ext mbuf 0 two or more ext mbuf 0 packets whose headers are not contiguous 0 tunneling packets that can't find gif 0 packets discarded because of too many headers 0 failures of source address selection Source addresses selection rule applied: icmp6: 0 calls to icmp6_error 0 errors not generated in response to an icmp6 message 0 errors not generated because of rate limitation Output histogram: neighbor solicitation: 1 0 messages with bad code fields 0 messages < minimum length 0 bad checksums 0 messages with bad length Histogram of error messages to be generated: 0 no route 0 administratively prohibited 0 beyond scope 0 address unreachable 0 port unreachable 0 packet too big 0 time exceed transit 0 time exceed reassembly 0 erroneous header field 0 unrecognized next header 0 unrecognized option 0 redirect 0 unknown 0 message responses generated 0 messages with too many ND options 0 messages with bad ND options 0 bad neighbor solicitation messages 0 bad neighbor advertisement messages 0 bad router solicitation messages 0 bad router advertisement messages 0 bad redirect messages 0 path MTU changes rip6: 0 messages received 0 checksum calculations on inbound 0 messages with bad checksum 0 messages dropped due to no socket 0 multicast messages dropped due to no socket 0 messages dropped due to full socket buffers 0 delivered 0 datagrams output ------------------------------------------------------------------------ netstat -m 1030/2190/3220 mbufs in use (current/cache/total) 1013/1393/2406/4143818 mbuf clusters in use (current/cache/total/max) 1023/1163 mbuf+clusters out of packet secondary zone in use (current/cache) 0/14/14/2071909 4k (page size) jumbo clusters in use (current/cache/total/max) 0/0/0/1841694 9k jumbo clusters in use (current/cache/total/max) 0/0/0/1381272 16k jumbo clusters in use (current/cache/total/max) 2283K/3389K/5673K bytes allocated to network (current/cache/total) 0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters) 0/0/0 requests for mbufs delayed (mbufs/clusters/mbuf+clusters) 0/0/0 requests for jumbo clusters delayed (4k/9k/16k) 0/0/0 requests for jumbo clusters denied (4k/9k/16k) 0 requests for sfbufs denied 0 requests for sfbufs delayed 0 requests for I/O initiated by sendfile 0 calls to protocol drain routines ------------------------------------------------------------------------ netstat -id Name Mtu Network Address Ipkts Ierrs Idrop Opkts Oerrs Coll Drop em0 1500 00:30:48:f2:29:9c 654 0 0 591 0 0 0 em0 1500 192.168.200.0 borg 588 - - 585 - - - em0 1500 fe80::230:48f fe80::230:48ff:fe 0 - - 4 - - - em1* 1500 00:30:48:f2:29:9d 0 0 0 0 0 0 0 lo0 16384 51 0 0 51 0 0 0 lo0 16384 localhost ::1 51 - - 51 - - - lo0 16384 fe80::1%lo0 fe80::1 0 - - 0 - - - lo0 16384 your-net localhost 0 - - 0 - - - ------------------------------------------------------------------------ netstat -anr Routing tables Internet: Destination Gateway Flags Refs Use Netif Expire default 192.168.200.11 UGS 0 560 em0 127.0.0.1 link#3 UH 0 0 lo0 192.168.200.0/24 link#1 U 0 25 em0 192.168.200.4 link#1 UHS 0 0 lo0 Internet6: Destination Gateway Flags Netif Expire ::/96 ::1 UGRS lo0 ::1 link#3 UH lo0 ::ffff:0.0.0.0/96 ::1 UGRS lo0 fe80::/10 ::1 UGRS lo0 fe80::%em0/64 link#1 U em0 fe80::230:48ff:fef2:299c%em0 link#1 UHS lo0 
fe80::%lo0/64 link#3 U lo0 fe80::1%lo0 link#3 UHS lo0 ff01::%em0/32 fe80::230:48ff:fef2:299c%em0 U em0 ff01::%lo0/32 ::1 U lo0 ff02::/16 ::1 UGRS lo0 ff02::%em0/32 fe80::230:48ff:fef2:299c%em0 U em0 ff02::%lo0/32 ::1 U lo0 ------------------------------------------------------------------------ netstat -anA Active Internet connections (including servers) Tcpcb Proto Recv-Q Send-Q Local Address Foreign Address (state) fffffe01c28423f8 tcp4 0 1008 192.168.200.4.22 192.147.25.65.4400 ESTABLISHED fffffe01038a5000 tcp4 0 0 192.168.200.4.1391 198.20.8.246.80 CLOSE_WAIT fffffe01c2272be8 tcp4 0 0 127.0.0.1.31416 *.* LISTEN fffffe01d5855be8 tcp4 0 0 192.168.200.4.1551 192.147.25.65.22 ESTABLISHED fffffe01c28427f0 tcp4 0 0 *.9101 *.* LISTEN fffffe010370f7f0 tcp4 0 0 *.22 *.* LISTEN fffffe010370fbe8 tcp6 0 0 *.22 *.* LISTEN fffffe010381dbe8 tcp4 0 0 *.9102 *.* LISTEN fffffe01c2273000 tcp4 0 0 *.9103 *.* LISTEN fffffe0103a3bbe8 tcp4 0 0 *.631 *.* LISTEN fffffe0103a3c000 tcp6 0 0 *.631 *.* LISTEN fffffe010381a000 tcp4 0 0 127.0.0.1.587 *.* LISTEN fffffe010381a3f8 tcp4 0 0 127.0.0.1.25 *.* LISTEN fffffe010381a7f0 tcp4 0 0 192.168.200.4.587 *.* LISTEN fffffe010381abe8 tcp4 0 0 192.168.200.4.25 *.* LISTEN fffffe010370e7f0 tcp4 0 0 127.0.0.1.5432 *.* LISTEN fffffe010370e000 tcp6 0 0 ::1.5432 *.* LISTEN fffffe01038a7000 tcp6 0 0 *.2049 *.* LISTEN fffffe01038a73f8 tcp4 0 0 *.2049 *.* LISTEN fffffe010381b000 tcp4 0 0 *.843 *.* LISTEN fffffe010381b3f8 tcp6 0 0 *.843 *.* LISTEN fffffe010381d3f8 tcp4 0 0 *.111 *.* LISTEN fffffe010381d7f0 tcp6 0 0 *.111 *.* LISTEN fffffe010370e3f8 tcp4 0 0 192.168.200.4.952 192.168.200.23.204 ESTABLISHED fffffe010313bdc8 udp4 0 0 *.69 *.* fffffe0103106930 udp4 0 0 *.631 *.* fffffe0103131ab8 udp6 0 0 ::1.47400 ::1.47400 fffffe0103199930 udp4 0 0 127.0.0.1.123 *.* fffffe010319a188 udp6 0 0 fe80:3::1.123 *.* fffffe010319a620 udp6 0 0 ::1.123 *.* fffffe010319a310 udp6 0 0 fe80:1::230:48ff.1 *.* fffffe010319a000 udp4 0 0 192.168.200.4.123 *.* fffffe0103199dc8 udp6 0 0 *.123 *.* fffffe0103199ab8 udp4 0 0 *.123 *.* fffffe0103199000 udp6 0 0 *.2049 *.* fffffe0103199498 udp4 0 0 *.2049 *.* fffffe0103197930 udp4 0 0 *.843 *.* fffffe0103197620 udp6 0 0 *.843 *.* fffffe0103199c40 udp6 0 0 *.* *.* fffffe010314cab8 udp4 0 0 *.1009 *.* fffffe010314c7a8 udp4 0 0 *.111 *.* fffffe010314c498 udp6 0 0 *.998 *.* fffffe010314c188 udp6 0 0 *.111 *.* fffffe0103197310 udp4 0 0 *.514 *.* fffffe0103197498 udp6 0 0 *.514 *.* Active UNIX domain sockets Address Type Recv-Q Send-Q Inode Conn Refs Nextref Addr fffffe0103a5b870 stream 0 0 fffffe01c22c71d8 0 0 0 /var/run/cups.sock fffffe010349ed20 stream 0 0 fffffe0103e66588 0 0 0 /tmp/.s.PGSQL.5432 fffffe01034a60f0 stream 0 0 fffffe0103457938 0 0 0 /var/run/rpcbind.sock fffffe01034a63c0 stream 0 0 fffffe01034981d8 0 0 0 /var/run/devd.pipe fffffe0103a5b780 dgram 0 0 0 fffffe0103517b40 0 fffffe010349b0f0 fffffe01035174b0 dgram 0 0 0 fffffe0103517c30 0 fffffe010349dc30 fffffe010349dc30 dgram 0 0 0 fffffe0103517c30 0 fffffe0103a5b960 fffffe0103a5b960 dgram 0 0 0 fffffe0103517c30 0 0 fffffe010349b0f0 dgram 0 0 0 fffffe0103517b40 0 0 fffffe0103517b40 dgram 0 0 fffffe01037f1000 0 fffffe0103a5b780 0 /var/run/logpriv fffffe0103517c30 dgram 0 0 fffffe00c0b14ce8 0 fffffe01035174b0 0 /var/run/log ------------------------------------------------------------------------ netstat -aL Current listen queue sizes (qlen/incqlen/maxqlen) Proto Listen Local Address tcp4 0/0/128 localhost.31416 tcp4 0/0/50 *.bacula-dir tcp4 0/0/128 *.ssh tcp6 0/0/128 *.ssh tcp4 0/0/50 *.bacula-fd tcp4 
0/0/50 *.bacula-sd tcp4 0/0/128 *.ipp tcp6 0/0/128 *.ipp tcp4 0/0/20 localhost.submission tcp4 0/0/20 localhost.smtp tcp4 0/0/20 borg.submission tcp4 0/0/20 borg.smtp tcp4 0/0/128 localhost.postgresql tcp6 0/0/128 localhost.postgresql tcp6 0/0/5 *.nfsd tcp4 0/0/5 *.nfsd tcp4 0/0/128 *.843 tcp6 0/0/128 *.843 tcp4 0/0/128 *.sunrpc tcp6 0/0/128 *.sunrpc unix 0/0/128 /var/run/cups.sock unix 0/0/128 /tmp/.s.PGSQL.5432 unix 0/0/128 /var/run/rpcbind.sock unix 0/0/4 /var/run/devd.pipe ------------------------------------------------------------------------ fstat fstat: can't read file 1 at 0xfffffffffffffff fstat: can't read file 2 at 0x78 fstat: can't read file 4 at 0xffff fstat: can't read file 7 at 0xfffffffffffffff fstat: can't read file 8 at 0x78 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read file 1 at 0xfffffffffffffff fstat: can't read file 2 at 0x78 fstat: can't read file 4 at 0xffff fstat: can't read file 7 at 0xfffffffffffffff fstat: can't read file 8 at 0x78 fstat: can't read file 10 at 0xffff fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read file 1 at 0xfffffffffffffff fstat: can't read file 2 at 0x78 fstat: can't read file 4 at 0xffff fstat: can't read file 7 at 0xfffffffffffffff fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read file 1 at 0xfffffffffffffff fstat: can't read file 2 at 0x78 fstat: can't read file 4 at 0xffff fstat: can't read file 7 at 0xfffffffffffffff fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read file 1 at 0xfffffffffffffff fstat: can't read file 2 at 0x78 fstat: can't read file 4 at 0xffff fstat: can't read file 7 at 0xfffffffffffffff fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read file 1 at 0xfffffffffffffff fstat: can't read file 2 at 0x78 fstat: can't read file 4 at 0xffff fstat: can't read file 7 at 0xfffffffffffffff fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read file 1 at 0xfffffffffffffff fstat: can't read file 2 at 0x78 fstat: can't read file 4 at 0xffff fstat: can't read file 7 at 0xfffffffffffffff fstat: can't read file 8 at 0x78 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read file 1 at 0xfffffffffffffff fstat: can't read file 2 at 0x78 fstat: can't read file 4 at 0xffff fstat: can't read file 7 at 0xfffffffffffffff fstat: can't read file 8 at 0x78 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read file 1 at 0xfffffffffffffff fstat: can't read file 2 at 0x78 fstat: can't read file 4 at 0xffff fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read file 1 at 0xfffffffffffffff fstat: can't read file 2 at 0x78 fstat: can't read file 4 at 0xffff fstat: can't read 
file 7 at 0xfffffffffffffff fstat: can't read file 8 at 0x78 fstat: can't read file 10 at 0xffff fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read file 1 at 0xfffffffffffffff fstat: can't read file 2 at 0x78 fstat: can't read file 4 at 0xffff fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read file 1 at 0xfffffffffffffff fstat: can't read file 2 at 0x78 fstat: can't read file 4 at 0xffff fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read file 1 at 0xfffffffffffffff fstat: can't read file 2 at 0x78 fstat: can't read file 4 at 0xffff fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read file 1 at 0xfffffffffffffff fstat: can't read file 2 at 0x78 fstat: can't read file 4 at 0xffff fstat: can't read file 7 at 0xfffffffffffffff fstat: can't read file 8 at 0x78 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read file 1 at 0xfffffffffffffff fstat: can't read file 2 at 0x78 fstat: can't read file 4 at 0xffff fstat: can't read file 7 at 0xfffffffffffffff fstat: can't read file 8 at 0x78 fstat: can't read file 10 at 0xffff fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read file 1 at 0xfffffffffffffff fstat: can't read file 2 at 0x78 fstat: can't read file 4 at 0xffff fstat: can't read file 7 at 0xfffffffffffffff fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read file 1 at 0xfffffffffffffff fstat: can't read file 2 at 0x78 fstat: can't read file 4 at 0xffff fstat: can't read file 7 at 0xfffffffffffffff fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read file 1 at 0xfffffffffffffff fstat: can't read file 2 at 0x78 fstat: can't read file 4 at 0xffff fstat: can't read file 7 at 0xfffffffffffffff fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read file 1 at 0xfffffffffffffff fstat: can't read file 2 at 0x78 fstat: can't read file 4 at 0xffff fstat: can't read file 7 at 0xfffffffffffffff fstat: can't read file 8 at 0x78 fstat: can't read file 10 at 0xffff fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read file 1 at 0xfffffffffffffff fstat: can't read file 2 at 0x78 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read file 1 at 0xfffffffffffffff fstat: can't read file 2 at 0x78 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read file 1 at 0xfffffffffffffff fstat: can't read file 2 at 0x78 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read file 1 at 0xfffffffffffffff fstat: can't read file 2 at 0x78 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read file 1 at 0xfffffffffffffff 
fstat: can't read file 2 at 0x78 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read file 1 at 0xfffffffffffffff fstat: can't read file 2 at 0x78 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read file 1 at 0xfffffffffffffff fstat: can't read file 2 at 0x78 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read file 1 at 0xfffffffffffffff fstat: can't read file 2 at 0x78 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read file 1 at 0xfffffffffffffff fstat: can't read file 2 at 0x78 fstat: can't read file 4 at 0xffff fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read file 1 at 0xfffffffffffffff fstat: can't read file 2 at 0x78 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read file 1 at 0xfffffffffffffff fstat: can't read file 2 at 0x78 fstat: can't read file 4 at 0xffff fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read file 1 at 0xfffffffffffffff fstat: can't read file 2 at 0x78 fstat: can't read file 4 at 0xffff fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read file 1 at 0xfffffffffffffff fstat: can't read file 2 at 0x78 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read file 1 at 0xfffffffffffffff fstat: can't read file 2 at 0x78 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read file 1 at 0xfffffffffffffff fstat: can't read file 2 at 0x78 fstat: can't read file 4 at 0xffff fstat: can't read file 7 at 0xfffffffffffffff fstat: can't read file 8 at 0x78 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read file 1 at 0xfffffffffffffff fstat: can't read file 2 at 0x78 fstat: can't read file 4 at 0xffff fstat: can't read file 7 at 0xfffffffffffffff fstat: can't read file 8 at 0x78 fstat: can't read file 10 at 0xffff fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read file 1 at 0xfffffffffffffff fstat: can't read file 2 at 0x78 fstat: can't read file 4 at 0xffff fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read file 1 at 0xfffffffffffffff fstat: can't read file 2 at 0x78 fstat: can't read file 4 at 0xffff fstat: can't read file 7 at 0xfffffffffffffff fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read file 1 at 0xfffffffffffffff fstat: can't read file 2 at 0x78 fstat: can't read file 4 at 0xffff fstat: can't read file 7 at 0xfffffffffffffff fstat: can't read file 8 at 0x78 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read file 1 at 0xfffffffffffffff fstat: can't read file 2 at 0x78 fstat: 
can't read file 4 at 0xffff fstat: can't read file 7 at 0xfffffffffffffff fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read file 1 at 0xfffffffffffffff fstat: can't read file 2 at 0x78 fstat: can't read file 4 at 0xffff fstat: can't read file 7 at 0xfffffffffffffff fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read file 1 at 0xfffffffffffffff fstat: can't read file 2 at 0x78 fstat: can't read file 4 at 0xffff fstat: can't read file 7 at 0xfffffffffffffff fstat: can't read file 8 at 0x78 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read file 1 at 0xfffffffffffffff fstat: can't read file 2 at 0x78 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read file 1 at 0xfffffffffffffff fstat: can't read file 2 at 0x78 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read file 1 at 0xfffffffffffffff fstat: can't read file 2 at 0x78 fstat: can't read file 4 at 0xffff fstat: can't read file 7 at 0xfffffffffffffff fstat: can't read file 8 at 0x78 fstat: can't read file 10 at 0xffff fstat: can't read file 13 at 0xfffffffffffffff fstat: can't read file 14 at 0x78 fstat: can't read file 16 at 0xffff fstat: can't read file 19 at 0xfffffffffffffff fstat: can't read file 20 at 0x78 fstat: can't read file 22 at 0xffff fstat: can't read file 23 at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read file 1 at 0xfffffffffffffff fstat: can't read file 2 at 0x78 fstat: can't read file 4 at 0xffff fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read file 1 at 0xfffffffffffffff fstat: can't read file 2 at 0x78 fstat: can't read file 4 at 0xffff fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read file 1 at 0xfffffffffffffff fstat: can't read file 2 at 0x78 fstat: can't read file 4 at 0xffff fstat: can't read file 7 at 0xfffffffffffffff fstat: can't read file 8 at 0x78 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read file 1 at 0xfffffffffffffff fstat: can't read file 2 at 0x78 fstat: can't read file 4 at 0xffff fstat: can't read file 7 at 0xfffffffffffffff fstat: can't read file 8 at 0x78 fstat: can't read file 10 at 0xffff fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read file 1 at 0xfffffffffffffff fstat: can't read file 2 at 0x78 fstat: can't read file 4 at 0xffff fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read file 1 at 0xfffffffffffffff fstat: can't read file 2 at 0x78 fstat: can't read file 4 at 0xffff fstat: can't read file 7 at 0xfffffffffffffff fstat: can't read file 8 at 0x78 fstat: can't read file 10 at 0xffff fstat: can't read file 13 at 0xfffffffffffffff fstat: can't read file 14 at 
0x78 fstat: can't read file 16 at 0xffff fstat: can't read file 19 at 0xfffffffffffffff fstat: can't read file 20 at 0x78 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read file 1 at 0xfffffffffffffff fstat: can't read file 2 at 0x78 fstat: can't read file 4 at 0xffff fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 fstat: can't read znode_phys at 0x1 USER CMD PID FD MOUNT INUM MODE SZ|DV R/W root zfs 1044 root - - error - root zfs 1044 wd - - error - root zfs 1044 text - - error - root zfs 1044 0* pipe fffffe00c0d6f000 <-> fffffe00c0d6f160 65224 rw root zfs 1044 6* pipe fffffe00c0d97730 <-> fffffe00c0d975d0 0 rw root csh 1042 root - - error - root csh 1042 wd - - error - root csh 1042 text - - error - root sshd 1040 root - - error - root sshd 1040 wd - - error - root sshd 1040 text - - error - root sshd 1040 0 /dev 28 crw-rw-rw- null rw root sshd 1040 6 /dev 28 crw-rw-rw- null rw boinc wcg_hpf2_rosetta_6 1038 root - - error - boinc wcg_hpf2_rosetta_6 1038 wd - - error - boinc wcg_hpf2_rosetta_6 1038 text - - error - boinc wcg_hpf2_rosetta_6 1038 0 /dev 28 crw-rw-rw- null rw boinc wcg_hpf2_rosetta_6 1038 6 - - error - boinc wcg_hpf2_rosetta_6 1037 root - - error - boinc wcg_hpf2_rosetta_6 1037 wd - - error - boinc wcg_hpf2_rosetta_6 1037 text - - error - boinc wcg_hpf2_rosetta_6 1037 0 /dev 28 crw-rw-rw- null rw boinc wcg_hpf2_rosetta_6 1037 6 - - error - boinc wcg_hpf2_rosetta_6 1036 root - - error - boinc wcg_hpf2_rosetta_6 1036 wd - - error - boinc wcg_hpf2_rosetta_6 1036 text - - error - boinc wcg_hpf2_rosetta_6 1036 0 /dev 28 crw-rw-rw- null rw boinc wcg_hpf2_rosetta_6 1036 6 - - error - boinc wcg_hpf2_rosetta_6 1035 root - - error - boinc wcg_hpf2_rosetta_6 1035 wd - - error - boinc wcg_hpf2_rosetta_6 1035 text - - error - boinc wcg_hpf2_rosetta_6 1035 0 /dev 28 crw-rw-rw- null rw boinc wcg_hpf2_rosetta_6 1035 6 - - error - boinc wcgrid_cep2_6.40_i 1034 root - - error - boinc wcgrid_cep2_6.40_i 1034 wd - - error - boinc wcgrid_cep2_6.40_i 1034 text - - error - boinc wcgrid_cep2_6.40_i 1034 0 /dev 28 crw-rw-rw- null rw boinc wcgrid_cep2_6.40_i 1034 6 - - error - boinc wcgrid_cep2_6.40_i 1033 root - - error - boinc wcgrid_cep2_6.40_i 1033 wd - - error - boinc wcgrid_cep2_6.40_i 1033 text - - error - boinc wcgrid_cep2_6.40_i 1033 0 /dev 28 crw-rw-rw- null rw boinc wcgrid_cep2_6.40_i 1033 6 - - error - boinc setiathome-6.12.am 1032 root - - error - boinc setiathome-6.12.am 1032 wd - - error - boinc setiathome-6.12.am 1032 text - - error - boinc setiathome-6.12.am 1032 0 /dev 28 crw-rw-rw- null rw boinc setiathome-6.12.am 1032 6 /dev 28 crw-rw-rw- null rw boinc wcgrid_faah_7.15_i 1031 root - - error - boinc wcgrid_faah_7.15_i 1031 wd - - error - boinc wcgrid_faah_7.15_i 1031 text - - error - boinc wcgrid_faah_7.15_i 1031 0 /dev 28 crw-rw-rw- null rw boinc wcgrid_faah_7.15_i 1031 6 /dev 28 crw-rw-rw- null rw boinc setiathome-6.12.am 1030 root - - error - boinc setiathome-6.12.am 1030 wd - - error - boinc setiathome-6.12.am 1030 text - - error - boinc setiathome-6.12.am 1030 0 /dev 28 crw-rw-rw- null rw boinc setiathome-6.12.am 1030 6 /dev 28 crw-rw-rw- null rw boinc setiathome-6.12.am 1029 root - - error - boinc setiathome-6.12.am 1029 wd - - error - boinc setiathome-6.12.am 1029 
text - - error - boinc setiathome-6.12.am 1029 0 /dev 28 crw-rw-rw- null rw boinc setiathome-6.12.am 1029 6 /dev 28 crw-rw-rw- null rw boinc setiathome-6.12.am 1028 root - - error - boinc setiathome-6.12.am 1028 wd - - error - boinc setiathome-6.12.am 1028 text - - error - boinc setiathome-6.12.am 1028 0 /dev 28 crw-rw-rw- null rw boinc setiathome-6.12.am 1028 6 /dev 28 crw-rw-rw- null rw boinc wcgrid_cep2_6.40_i 1027 root - - error - boinc wcgrid_cep2_6.40_i 1027 wd - - error - boinc wcgrid_cep2_6.40_i 1027 text - - error - boinc wcgrid_cep2_6.40_i 1027 0 /dev 28 crw-rw-rw- null rw boinc wcgrid_cep2_6.40_i 1027 6 - - error - boinc wcgrid_faah_7.15_i 1026 root - - error - boinc wcgrid_faah_7.15_i 1026 wd - - error - boinc wcgrid_faah_7.15_i 1026 text - - error - boinc wcgrid_faah_7.15_i 1026 0 /dev 28 crw-rw-rw- null rw boinc wcgrid_faah_7.15_i 1026 6 /dev 28 crw-rw-rw- null rw boinc wcg_hpf2_rosetta_6 1025 root - - error - boinc wcg_hpf2_rosetta_6 1025 wd - - error - boinc wcg_hpf2_rosetta_6 1025 text - - error - boinc wcg_hpf2_rosetta_6 1025 0 /dev 28 crw-rw-rw- null rw boinc wcg_hpf2_rosetta_6 1025 6 - - error - boinc wcg_hpf2_rosetta_6 1024 root - - error - boinc wcg_hpf2_rosetta_6 1024 wd - - error - boinc wcg_hpf2_rosetta_6 1024 text - - error - boinc wcg_hpf2_rosetta_6 1024 0 /dev 28 crw-rw-rw- null rw boinc wcg_hpf2_rosetta_6 1024 6 - - error - root ssh 1022 root - - error - root ssh 1022 wd - - error - root ssh 1022 text - - error - root ssh 1022 ctty /dev 71 crw------- ttyv0 rw root ssh 1022 0 /dev 71 crw------- ttyv0 rw root ssh 1022 6 /dev 71 crw------- ttyv0 rw root sh 1020 root - - error - root sh 1020 wd - - error - root sh 1020 text - - error - root sh 1020 ctty /dev 71 crw------- ttyv0 rw root sh 1020 0 /dev 71 crw------- ttyv0 rw root sh 1020 6 /dev 71 crw------- ttyv0 rw root getty 1017 root - - error - root getty 1017 wd - - error - root getty 1017 text - - error - root getty 1017 ctty /dev 78 crw------- ttyv7 rw root getty 1017 0 /dev 78 crw------- ttyv7 rw root getty 1016 root - - error - root getty 1016 wd - - error - root getty 1016 text - - error - root getty 1016 ctty /dev 77 crw------- ttyv6 rw root getty 1016 0 /dev 77 crw------- ttyv6 rw root getty 1015 root - - error - root getty 1015 wd - - error - root getty 1015 text - - error - root getty 1015 ctty /dev 76 crw------- ttyv5 rw root getty 1015 0 /dev 76 crw------- ttyv5 rw root getty 1014 root - - error - root getty 1014 wd - - error - root getty 1014 text - - error - root getty 1014 ctty /dev 75 crw------- ttyv4 rw root getty 1014 0 /dev 75 crw------- ttyv4 rw root getty 1013 root - - error - root getty 1013 wd - - error - root getty 1013 text - - error - root getty 1013 ctty /dev 74 crw------- ttyv3 rw root getty 1013 0 /dev 74 crw------- ttyv3 rw root getty 1012 root - - error - root getty 1012 wd - - error - root getty 1012 text - - error - root getty 1012 ctty /dev 73 crw------- ttyv2 rw root getty 1012 0 /dev 73 crw------- ttyv2 rw root getty 1011 root - - error - root getty 1011 wd - - error - root getty 1011 text - - error - root getty 1011 ctty /dev 72 crw------- ttyv1 rw root getty 1011 0 /dev 72 crw------- ttyv1 rw root login 1010 root - - error - root login 1010 wd - - error - root login 1010 text - - error - root login 1010 ctty /dev 71 crw------- ttyv0 rw root login 1010 0 /dev 71 crw------- ttyv0 rw root inetd 990 root - - error - root inetd 990 wd - - error - root inetd 990 text - - error - root inetd 990 0 /dev 28 crw-rw-rw- null rw root inetd 990 6 /dev 28 crw-rw-rw- null rw root cron 969 
root - - error - root cron 969 wd - - error - root cron 969 text - - error - root cron 969 0 /dev 28 crw-rw-rw- null rw root sshd 959 root - - error - root sshd 959 wd - - error - root sshd 959 text - - error - root sshd 959 0 /dev 28 crw-rw-rw- null rw bacula bacula-dir 954 root - - error - bacula bacula-dir 954 wd - - error - bacula bacula-dir 954 text - - error - bacula bacula-dir 954 0 /dev 28 crw-rw-rw- null r root bacula-fd 951 root - - error - root bacula-fd 951 wd - - error - root bacula-fd 951 text - - error - root bacula-fd 951 0 /dev 28 crw-rw-rw- null r bacula bacula-sd 948 root - - error - bacula bacula-sd 948 wd - - error - bacula bacula-sd 948 text - - error - bacula bacula-sd 948 0 /dev 28 crw-rw-rw- null r boinc boinc_client 944 root - - error - boinc boinc_client 944 wd - - error - boinc boinc_client 944 text - - error - boinc boinc_client 944 0 /dev 28 crw-rw-rw- null rw boinc boinc_client 944 6 - - error - root cupsd 938 root - - error - root cupsd 938 wd - - error - root cupsd 938 text - - error - root cupsd 938 0 /dev 28 crw-rw-rw- null r root cupsd 938 6 /dev 28 crw-rw-rw- null w root cupsd 938 12 /dev 28 crw-rw-rw- null w mailnull exim-4.80.1-2 928 root - - error - mailnull exim-4.80.1-2 928 wd - - error - mailnull exim-4.80.1-2 928 text - - error - mailnull exim-4.80.1-2 928 0 /dev 28 crw-rw-rw- null rw mailnull exim-4.80.1-2 928 6 /dev 28 crw-rw-rw- null rw pgsql postgres 906 root - - error - pgsql postgres 906 wd - - error - pgsql postgres 906 text - - error - pgsql postgres 906 0 /dev 28 crw-rw-rw- null r pgsql postgres 906 6 - - error - pgsql postgres 905 root - - error - pgsql postgres 905 wd - - error - pgsql postgres 905 text - - error - pgsql postgres 905 0 /dev 28 crw-rw-rw- null r pgsql postgres 905 6 - - error - pgsql postgres 904 root - - error - pgsql postgres 904 wd - - error - pgsql postgres 904 text - - error - pgsql postgres 904 0 /dev 28 crw-rw-rw- null r pgsql postgres 904 6 - - error - pgsql postgres 903 root - - error - pgsql postgres 903 wd - - error - pgsql postgres 903 text - - error - pgsql postgres 903 0 /dev 28 crw-rw-rw- null r pgsql postgres 903 6 - - error - pgsql postgres 899 root - - error - pgsql postgres 899 wd - - error - pgsql postgres 899 text - - error - pgsql postgres 899 0 /dev 28 crw-rw-rw- null r pgsql postgres 899 6 - - error - root smartd 890 root - - error - root smartd 890 wd - - error - root smartd 890 text - - error - root smartd 890 0 /dev 28 crw-rw-rw- null rw root perl 886 root - - error - root perl 886 wd - - error - root perl 886 text - - error - root perl 886 0 - - error - root ng_queue 874 root - - error - root ng_queue 874 wd - - error - root ntpd 853 root - - error - root ntpd 853 wd - - error - root ntpd 853 text - - error - root ntpd 853 0 /dev 28 crw-rw-rw- null rw root ntpd 853 6 /dev 28 crw-rw-rw- null rw root ntpd 853 12 /dev 28 crw-rw-rw- null rw root ntpd 853 18* local dgram fffffe010349b0f0 <-> fffffe0103517b40 root nfsd 817 root - - error - root nfsd 817 wd - - error - root nfsd 817 text - - error - root nfsd 817 0 /dev 28 crw-rw-rw- null rw root nfsd 815 root - - error - root nfsd 815 wd - - error - root nfsd 815 text - - error - root nfsd 815 0 /dev 28 crw-rw-rw- null rw root nfsd 815 6 /dev 28 crw-rw-rw- null rw root mountd 809 root - - error - root mountd 809 wd - - error - root mountd 809 text - - error - root mountd 809 0 /dev 28 crw-rw-rw- null rw root mountd 809 6 /dev 28 crw-rw-rw- null rw root rpcbind 774 root - - error - root rpcbind 774 wd - - error - root rpcbind 774 text - - error - 
root rpcbind 774 0 /dev 28 crw-rw-rw- null rw root rpcbind 774 6 /dev 28 crw-rw-rw- null rw root watchdogd 760 root - - error - root watchdogd 760 wd - - error - root watchdogd 760 text - - error - root watchdogd 760 0 /dev 28 crw-rw-rw- null rw root syslogd 757 root - - error - root syslogd 757 wd - - error - root syslogd 757 text - - error - root syslogd 757 0 /dev 28 crw-rw-rw- null rw root syslogd 757 6 /dev 28 crw-rw-rw- null rw root syslogd 757 12 /dev 28 crw-rw-rw- null rw root syslogd 757 18 - - error - root devd 632 root - - error - root devd 632 wd - - error - root devd 632 text - - error - root devd 632 0 /dev 28 crw-rw-rw- null rw root init 1 root - - error - root init 1 wd - - error - root init 1 text - - error - root kernel 0 root - - error - root kernel 0 wd - - error - ------------------------------------------------------------------------ dmesg Copyright (c) 1992-2013 The FreeBSD Project. Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994 The Regents of the University of California. All rights reserved. FreeBSD is a registered trademark of The FreeBSD Foundation. FreeBSD 10.0-CURRENT #129 r248695: Mon Mar 25 05:03:32 CDT 2013 root@borg.lerctr.org:/usr/obj/usr/src/sys/BORG-DTRACE amd64 FreeBSD clang version 3.2 (tags/RELEASE_32/final 170710) 20121221 CPU: Intel(R) Xeon(R) CPU E5410 @ 2.33GHz (2327.54-MHz K8-class CPU) Origin = "GenuineIntel" Id = 0x10676 Family = 0x6 Model = 0x17 Stepping = 6 Features=0xbfebfbff Features2=0xce3bd AMD Features=0x20100800 AMD Features2=0x1 TSC: P-state invariant, performance statistics real memory = 68719476736 (65536 MB) avail memory = 63193878528 (60266 MB) Event timer "LAPIC" quality 400 ACPI APIC Table: FreeBSD/SMP: Multiprocessor System Detected: 8 CPUs FreeBSD/SMP: 2 package(s) x 4 core(s) cpu0 (BSP): APIC ID: 0 cpu1 (AP): APIC ID: 1 cpu2 (AP): APIC ID: 2 cpu3 (AP): APIC ID: 3 cpu4 (AP): APIC ID: 4 cpu5 (AP): APIC ID: 5 cpu6 (AP): APIC ID: 6 cpu7 (AP): APIC ID: 7 ioapic0 irqs 0-23 on motherboard ioapic1 irqs 24-47 on motherboard kbd1 at kbdmux0 netmap: loaded module cryptosoft0: on motherboard acpi0: on motherboard acpi0: Power Button (fixed) unknown: I/O range not supported cpu0: on acpi0 cpu1: on acpi0 cpu2: on acpi0 cpu3: on acpi0 cpu4: on acpi0 cpu5: on acpi0 cpu6: on acpi0 cpu7: on acpi0 hpet0: iomem 0xfed00000-0xfed003ff irq 0,8 on acpi0 Timecounter "HPET" frequency 14318180 Hz quality 950 Event timer "HPET" frequency 14318180 Hz quality 350 Event timer "HPET1" frequency 14318180 Hz quality 340 Event timer "HPET2" frequency 14318180 Hz quality 340 atrtc0: port 0x70-0x71 on acpi0 Event timer "RTC" frequency 32768 Hz quality 0 attimer0: port 0x40-0x43,0x50-0x53 on acpi0 Timecounter "i8254" frequency 1193182 Hz quality 0 Event timer "i8254" frequency 1193182 Hz quality 100 Timecounter "ACPI-fast" frequency 3579545 Hz quality 900 acpi_timer0: <24-bit timer at 3.579545MHz> port 0x1008-0x100b on acpi0 pcib0: port 0xcf8-0xcff on acpi0 pci0: on pcib0 pcib1: at device 2.0 on pci0 pci1: on pcib1 pcib2: irq 16 at device 0.0 on pci1 pci2: on pcib2 pcib3: irq 16 at device 0.0 on pci2 pci3: on pcib3 pcib4: at device 0.0 on pci3 pci4: on pcib4 pcib5: at device 0.2 on pci3 pci5: on pcib5 pcib6: irq 18 at device 2.0 on pci2 pci6: on pcib6 em0: port 0x2000-0x201f mem 0xd8020000-0xd803ffff,0xd8000000-0xd801ffff irq 18 at device 0.0 on pci6 em0: Using an MSI interrupt em0: Ethernet address: 00:30:48:f2:29:9c 001.000008 netmap_attach [1680] success for em0 em1: port 0x2020-0x203f mem 0xd8060000-0xd807ffff,0xd8040000-0xd805ffff 
irq 19 at device 0.1 on pci6 em1: Using an MSI interrupt em1: Ethernet address: 00:30:48:f2:29:9d 001.000009 netmap_attach [1680] success for em1 pcib7: at device 0.3 on pci1 pci7: on pcib7 pcib8: at device 4.0 on pci0 pci8: on pcib8 pcib9: at device 6.0 on pci0 pci9: on pcib9 pci0: at device 8.0 (no driver attached) pcib10: irq 17 at device 28.0 on pci0 pci10: on pcib10 uhci0: port 0x1800-0x181f irq 17 at device 29.0 on pci0 usbus0 on uhci0 uhci1: port 0x1820-0x183f irq 19 at device 29.1 on pci0 usbus1 on uhci1 uhci2: port 0x1840-0x185f irq 18 at device 29.2 on pci0 usbus2 on uhci2 ehci0: mem 0xd8500400-0xd85007ff irq 17 at device 29.7 on pci0 usbus3: EHCI version 1.0 usbus3 on ehci0 pcib11: at device 30.0 on pci0 pci11: on pcib11 vgapci0: port 0x3000-0x30ff mem 0xd0000000-0xd7ffffff,0xd8200000-0xd820ffff irq 18 at device 1.0 on pci11 isab0: at device 31.0 on pci0 isa0: on isab0 atapci0: port 0x1f0-0x1f7,0x3f6,0x170-0x177,0x376,0x1860-0x186f at device 31.1 on pci0 ata0: at channel 0 on atapci0 ahci0: port 0x18a0-0x18a7,0x1874-0x1877,0x1878-0x187f,0x1870-0x1873,0x1880-0x189f mem 0xd8500800-0xd8500bff irq 19 at device 31.2 on pci0 ahci0: AHCI v1.10 with 6 3Gbps ports, Port Multiplier supported ahcich0: at channel 0 on ahci0 ahcich1: at channel 1 on ahci0 ahcich2: at channel 2 on ahci0 ahcich3: at channel 3 on ahci0 ahcich4: at channel 4 on ahci0 ahcich5: at channel 5 on ahci0 ichsmb0: port 0x1100-0x111f irq 19 at device 31.3 on pci0 smbus0: on ichsmb0 acpi_button0: on acpi0 atkbdc0: port 0x60,0x64 irq 1 on acpi0 atkbd0: irq 1 on atkbdc0 kbd0 at atkbd0 atkbd0: [GIANT-LOCKED] uart0: <16550 or compatible> port 0x3f8-0x3ff irq 4 flags 0x10 on acpi0 uart1: <16550 or compatible> port 0x2f8-0x2ff irq 3 on acpi0 fdc0: port 0x3f0-0x3f5,0x3f7 irq 6 drq 2 on acpi0 fd0: <1440-KB 3.5" drive> on fdc0 drive 0 ppc0: port 0x378-0x37f,0x778-0x77f irq 7 drq 3 on acpi0 ppc0: SMC-like chipset (ECP/EPP/PS2/NIBBLE) in COMPATIBLE mode ppc0: FIFO with 16/16/9 bytes threshold ppbus0: on ppc0 lpt0: on ppbus0 lpt0: Interrupt-driven port ppi0: on ppbus0 ichwd0 on isa0 orm0: at iomem 0xc0000-0xcafff on isa0 sc0: at flags 0x100 on isa0 sc0: VGA <16 virtual consoles, flags=0x300> vga0: at port 0x3c0-0x3df iomem 0xa0000-0xbffff on isa0 coretemp0: on cpu0 est0: on cpu0 p4tcc0: on cpu0 coretemp1: on cpu1 est1: on cpu1 p4tcc1: on cpu1 coretemp2: on cpu2 est2: on cpu2 p4tcc2: on cpu2 coretemp3: on cpu3 est3: on cpu3 p4tcc3: on cpu3 coretemp4: on cpu4 est4: on cpu4 p4tcc4: on cpu4 coretemp5: on cpu5 est5: on cpu5 p4tcc5: on cpu5 coretemp6: on cpu6 est6: on cpu6 p4tcc6: on cpu6 coretemp7: on cpu7 est7: on cpu7 p4tcc7: on cpu7 ZFS filesystem version: 5 ZFS storage pool version: features support (5000) Timecounters tick every 1.000 msec vboxdrv: fAsync=0 offMin=0x40c offMax=0x58d usbus0: 12Mbps Full Speed USB v1.0 usbus1: 12Mbps Full Speed USB v1.0 usbus2: 12Mbps Full Speed USB v1.0 usbus3: 480Mbps High Speed USB v2.0 ugen1.1: at usbus1 uhub0: on usbus1 ugen0.1: at usbus0 uhub1: on usbus0 ugen3.1: at usbus3 uhub2: on usbus3 ugen2.1: at usbus2 uhub3: on usbus2 ata0: DMA limited to UDMA33, controller found non-ATA66 cable ada0 at ahcich0 bus 0 scbus1 target 0 lun 0 ada0: ATA-8 SATA 3.x device ada0: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes) ada0: 1907729MB (3907029168 512 byte sectors: 16H 63S/T 16383C) ada0: Previously was known as ad4 ada1 at ahcich1 bus 0 scbus2 target 0 lun 0 ada1: ATA-8 SATA 3.x device ada1: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes) ada1: 1907729MB (3907029168 512 byte sectors: 16H 
63S/T 16383C) ada1: Previously was known as ad6 ada2 at ahcich2 bus 0 scbus3 target 0 lun 0 ada2: ATA-8 SATA 3.x device ada2: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes) ada2: 1907729MB (3907029168 512 byte sectors: 16H 63S/T 16383C) ada2: Previously was known as ad8 ada3 at ahcich3 bus 0 scbus4 target 0 lun 0 ada3: ATA-8 SATA 3.x device ada3: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes) ada3: 1907729MB (3907029168 512 byte sectors: 16H 63S/T 16383C) ada3: Previously was known as ad10 ada4 at ahcich4 bus 0 scbus5 target 0 lun 0 ada4: ATA-8 SATA 3.x device ada4: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes) ada4: 1907729MB (3907029168 512 byte sectors: 16H 63S/T 16383C) ada4: Previously was known as ad12 ada5 at ahcich5 bus 0 scbus6 target 0 lun 0 ada5: ATA-8 SATA 3.x device ada5: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes) ada5: 1907729MB (3907029168 512 byte sectors: 16H 63S/T 16383C) ada5: Previously was known as ad14 cd0 at ata0 bus 0 scbus0 target 0 lun 0 cd0: Removable CD-ROM SCSI-0 device cd0: 33.300MB/s transfers (UDMA2, ATAPI 12bytes, PIO 65534bytes) cd0: Attempt to query device size failed: NOT READY, Medium not present SMP: AP CPU #7 Launched! SMP: AP CPU #1 Launched! SMP: AP CPU #2 Launched! SMP: AP CPU #3 Launched! SMP: AP CPU #4 Launched! SMP: AP CPU #6 Launched! SMP: AP CPU #5 Launched! uhub1: 2 ports with 2 removable, self powered uhub0: 2 ports with 2 removable, self powered uhub3: 2 ports with 2 removable, self powered Root mount waiting for: usbus3 uhub2: 6 ports with 6 removable, self powered Root mount waiting for: usbus3 ugen3.2: at usbus3 ukbd0: on usbus3 kbd2 at ukbd0 Trying to mount root from zfs:zroot/ROOT/default []... ffclock reset: HPET (14318180 Hz), time = 1364219053.500000000 Setting hostuuid: 53d19f64-d663-a017-8922-0030488e9ff3. Setting hostid: 0xf53a926e. Entropy harvesting: interrupts ethernet point_to_point kickstart. Starting file system checks: Mounting local file systems:. Writing entropy file:. Setting hostname: borg.lerctr.org. Starting Network: lo0 em0 em1. lo0: flags=8049 metric 0 mtu 16384 options=600003 inet6 ::1 prefixlen 128 inet6 fe80::1%lo0 prefixlen 64 scopeid 0x3 inet 127.0.0.1 netmask 0xff000000 nd6 options=21 em0: flags=8843 metric 0 mtu 1500 options=4219b ether 00:30:48:f2:29:9c inet 192.168.200.4 netmask 0xffffff00 broadcast 192.168.200.255 inet6 fe80::230:48ff:fef2:299c%em0 prefixlen 64 scopeid 0x1 nd6 options=29 media: Ethernet autoselect status: no carrier em1: flags=8c02 metric 0 mtu 1500 options=4219b ether 00:30:48:f2:29:9d nd6 options=29 media: Ethernet autoselect status: no carrier Starting devd. Starting Network: em1. em1: flags=8c02 metric 0 mtu 1500 options=4219b ether 00:30:48:f2:29:9d nd6 options=29 media: Ethernet autoselect status: no carrier uhid0: on usbus3 add net default: gateway 192.168.200.11 add net ::ffff:0.0.0.0: gateway ::1 add net ::0.0.0.0: gateway ::1 add net fe80::: gateway ::1 add net ff02::: gateway ::1 Mounting NFS file systems:. ELF ldconfig path: /lib /usr/lib /usr/lib/compat /usr/local/lib /usr/local/kde4/lib /usr/local/lib/compat /usr/local/lib/dbmail /usr/local/lib/event2 /usr/local/lib/pth /usr/local/lib/qt4 /usr/local/lib/virtualbox 32-bit compatibility ldconfig path: /usr/lib32 /usr/local/lib32/compat Creating and/or trimming log files. Starting syslogd. Starting watchdogd. No core dumps found. Additional ABI support: linux. Starting rpcbind. NFS access cache time=60 Clearing /tmp (X related). Starting mountd. NFSv4 is disabled Starting nfsd. 
Updating motd:. Starting ntpd. Starting sshblock. Starting smartd. Updating cpucodes... /usr/local/share/cpucontrol/m401067660F.fw: updating cpu /dev/cpuctl0 from rev 0x60c to rev 0x60f... done. /usr/local/share/cpucontrol/m401067660F.fw: updating cpu /dev/cpuctl1 from rev 0x60c to rev 0x60f... done. /usr/local/share/cpucontrol/m401067660F.fw: updating cpu /dev/cpuctl2 from rev 0x60c to rev 0x60f... done. /usr/local/share/cpucontrol/m401067660F.fw: updating cpu /dev/cpuctl3 from rev 0x60c to rev 0x60f... done. /usr/local/share/cpucontrol/m401067660F.fw: updating cpu /dev/cpuctl4 from rev 0x60c to rev 0x60f... done. /usr/local/share/cpucontrol/m401067660F.fw: updating cpu /dev/cpuctl5 from rev 0x60c to rev 0x60f... done. /usr/local/share/cpucontrol/m401067660F.fw: updating cpu /dev/cpuctl6 from rev 0x60c to rev 0x60f... done. /usr/local/share/cpucontrol/m401067660F.fw: updating cpu /dev/cpuctl7 from rev 0x60c to rev 0x60f... done. Done. Starting exim. Starting cupsd. Starting boinc_client. Starting bacula_sd. Starting bacula_fd. Starting bacula_dir. Performing sanity check on sshd configuration. Starting sshd. Configuring syscons: blanktime. Starting cron. Starting inetd. Starting background file system checks in 60 seconds. Mon Mar 25 08:44:56 CDT 2013 Mar 25 08:45:03 borg login: ROOT LOGIN (root) ON ttyv0
Fatal trap 12: page fault while in kernel mode
cpuid = 0; apic id = 00
fault virtual address   = 0x378
fault code              = supervisor read data, page not present
instruction pointer     = 0x20:0xffffffff80531426
stack pointer           = 0x28:0xffffff91579193d0
frame pointer           = 0x28:0xffffff9157919470
code segment            = base 0x0, limit 0xfffff, type 0x1b
                        = DPL 0, pres 1, long 1, def32 0, gran 1
processor eflags        = interrupt enabled, resume, IOPL = 0
current process         = 1044 (zfs)
trap number             = 12
panic: page fault
cpuid = 0
Uptime: 2m10s
Dumping 4913 out of 64747 MB:..1%..11%..21%..31%..41%..51%..61%..71%..81%..91%
------------------------------------------------------------------------ kernel config options CONFIG_AUTOGENERATED ident BORG-DTRACE machine amd64 cpu HAMMER makeoptions WITH_CTF=1 makeoptions DEBUG=-g options FFCLOCK options USB_DEBUG options RDRAND_RNG options PADLOCK_RNG options ATH_ENABLE_11N options AH_AR5416_INTERRUPT_MITIGATION options AH_SUPPORT_AR5416 options IEEE80211_SUPPORT_MESH options IEEE80211_AMPDU_AGE options IEEE80211_DEBUG options SC_PIXEL_MODE options VESA options CTL_DISABLE options AHD_REG_PRETTY_PRINT options AHC_REG_PRETTY_PRINT options ATA_STATIC_ID options ATA_CAM options SMP options MALLOC_DEBUG_MAXZONES=8 options INCLUDE_CONFIG_FILE options DDB_CTF options KDTRACE_HOOKS options KDTRACE_FRAME options MAC options CAPABILITIES options CAPABILITY_MODE options AUDIT options HWPMC_HOOKS options KBD_INSTALL_CDEV options PRINTF_BUFR_SIZE=128 options _KPOSIX_PRIORITY_SCHEDULING options SYSVSEM options SYSVMSG options SYSVSHM options STACK options KTRACE options SCSI_DELAY=5000 options COMPAT_FREEBSD7 options COMPAT_FREEBSD6 options COMPAT_FREEBSD5 options COMPAT_FREEBSD4 options COMPAT_FREEBSD32 options GEOM_LABEL options GEOM_RAID options GEOM_PART_GPT options PSEUDOFS options PROCFS options CD9660 options MSDOSFS options NFS_ROOT options NFSLOCKD options NFSD options NFSCL options QUOTA options SCTP options TCP_OFFLOAD options INET6 options INET options PREEMPTION options SCHED_ULE options NEW_PCIB options GEOM_PART_MBR options GEOM_PART_EBR_COMPAT options GEOM_PART_EBR options GEOM_PART_BSD device isa device mem device io device uart_ns8250 device cpufreq device acpi device pci device
fdc device ahci device ata device esp device isci device scbus device ch device da device sa device cd device pass device ses device ctl device atkbdc device atkbd device psm device kbdmux device vga device splash device sc device agp device uart device ppc device ppbus device lpt device ppi device em device miibus device cas device gem device hme device nfe device nge device loop device random device ether device vlan device tun device md device gif device faith device firmware device bpf device uhci device ohci device ehci device xhci device usb device ukbd device umass device virtio device virtio_pci device vtnet device virtio_blk device virtio_scsi device virtio_balloon device netmap ------------------------------------------------------------------------ ddb capture buffer ddb: ddb_capture: kvm_nlist -- Larry Rosenman http://www.lerctr.org/~ler Phone: +1 512-248-2683 E-Mail: ler@lerctr.org US Mail: 430 Valona Loop, Round Rock, TX 78681-3893 _______________________________________________ freebsd-current@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-current To unsubscribe, send any mail to "freebsd-current-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Mon Apr 1 19:57:54 2013 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 8648E977; Mon, 1 Apr 2013 19:57:54 +0000 (UTC) (envelope-from jayb@braeburn.org) Received: from nbfkord-smmo07.seg.att.com (nbfkord-smmo07.seg.att.com [209.65.160.93]) by mx1.freebsd.org (Postfix) with ESMTP id 2DEE2D6E; Mon, 1 Apr 2013 19:57:53 +0000 (UTC) Received: from unknown [144.160.20.145] (EHLO nbfkord-smmo07.seg.att.com) by nbfkord-smmo07.seg.att.com(mxl_mta-6.15.0-1) with ESMTP id 2c6e9515.2aaabd364940.287251.00-560.799818.nbfkord-smmo07.seg.att.com (envelope-from ); Mon, 01 Apr 2013 19:57:54 +0000 (UTC) X-MXL-Hash: 5159e6c25788321d-76165254320783176bda1bc0752e58ef2c46c589 Received: from unknown [144.160.20.145] (EHLO mlpd192.enaf.sfdc.sbc.com) by nbfkord-smmo07.seg.att.com(mxl_mta-6.15.0-1) over TLS secured channel with ESMTP id bb6e9515.0.287219.00-498.799711.nbfkord-smmo07.seg.att.com (envelope-from ); Mon, 01 Apr 2013 19:57:53 +0000 (UTC) X-MXL-Hash: 5159e6c12d90d3fd-e35b62c4bcd917a550f9ca96bf21b7fd695920ed Received: from enaf.sfdc.sbc.com (localhost.localdomain [127.0.0.1]) by mlpd192.enaf.sfdc.sbc.com (8.14.5/8.14.5) with ESMTP id r31Jvk1F014019; Mon, 1 Apr 2013 15:57:47 -0400 Received: from alpi133.aldc.att.com (alpi133.aldc.att.com [130.8.217.3]) by mlpd192.enaf.sfdc.sbc.com (8.14.5/8.14.5) with ESMTP id r31JvgMZ013927 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Mon, 1 Apr 2013 15:57:45 -0400 Received: from alpi153.aldc.att.com (alpi153.aldc.att.com [130.8.42.31]) by alpi133.aldc.att.com (RSA Interceptor); Mon, 1 Apr 2013 20:57:26 +0100 Received: from aldc.att.com (localhost [127.0.0.1]) by alpi153.aldc.att.com (8.14.5/8.14.5) with ESMTP id r31JvPqW014648; Mon, 1 Apr 2013 15:57:26 -0400 Received: from oz.mt.att.com (oz.mt.att.com [135.16.165.23]) by alpi153.aldc.att.com (8.14.5/8.14.5) with ESMTP id r31JvKjh014501; Mon, 1 Apr 2013 15:57:21 -0400 Received: by oz.mt.att.com (Postfix, from userid 1000) id 2CD5568078A; Mon, 1 Apr 2013 15:57:20 -0400 (EDT) X-Mailer: emacs 23.3.1 (via feedmail 8 I); VM 8.2.0b under 23.3.1 (i686-pc-linux-gnu) MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Message-ID: 
<20825.59038.104304.161698@oz.mt.att.com> Date: Mon, 1 Apr 2013 15:57:18 -0400 From: Jay Borkenhagen To: fs@freebsd.org Subject: mounting failed with error 2 X-GPG-Fingerprint: DDDB 542E D988 94D0 82D3 D198 7DED 6648 2308 D3C0 X-RSA-Inspected: yes X-RSA-Classifications: public Cc: Niclas Zeising X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list Reply-To: Jay Borkenhagen List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 01 Apr 2013 19:57:54 -0000 Hi FS, I am attempting to follow Niclas Zeising's updated directions at https://wiki.freebsd.org/RootOnZFS/GPTZFSBoot/9.0-RELEASE for "Installing FreeBSD 9.0-RELEASE (or later) Root on ZFS using GPT." I have following those instructions verbatim twice now, both times winding up with the same error. Niclas suggested asking here. After I attempt to reboot into my brand new system, I find myself with this written on my unresponsive console: =============== Timecounter "TSC-low" frequency 8854667 Hz quality 1000 Root mount waiting for usbus6 usbus2 uhub6: 6 ports with 6 removable, self powered uhub2: 6 ports with 6 removable, self powered Trying to mount root from zfs:zroot/ROOT []... Mounting from zfs:zroot/ROOT failed with error 2. Loader variables: vfs.root.mountfrom=zfs:zroot/ROOT Manual root filesystem specification: : [options] Mount using filesystem and with the specified (optional) option list. eg. ufs:/dev/da0s1a zfs:tank cd9660:/dev/acd0 ro (which is equivalent to: mount -t cd9660 -o ro /dev/acd0 /) ? List valid disk boot devices . Yield 1 second (for background tasks) Abort manual input mountroot> =============== Thanks! Jay B. From owner-freebsd-fs@FreeBSD.ORG Mon Apr 1 20:33:48 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 9E68C8E8; Mon, 1 Apr 2013 20:33:48 +0000 (UTC) (envelope-from mm@FreeBSD.org) Received: from mail.vx.sk (mail.vx.sk [176.9.45.25]) by mx1.freebsd.org (Postfix) with ESMTP id 61904F24; Mon, 1 Apr 2013 20:33:48 +0000 (UTC) Received: from core.vx.sk (localhost [127.0.0.2]) by mail.vx.sk (Postfix) with ESMTP id 4FE9F33B17; Mon, 1 Apr 2013 22:33:47 +0200 (CEST) X-Virus-Scanned: amavisd-new at mail.vx.sk Received: from mail.vx.sk by core.vx.sk (amavisd-new, unix socket) with LMTP id cNUY_opO7ciw; Mon, 1 Apr 2013 22:33:45 +0200 (CEST) Received: from [10.9.8.1] (chello085216226145.chello.sk [85.216.226.145]) by mail.vx.sk (Postfix) with ESMTPSA id 5527333B0E; Mon, 1 Apr 2013 22:33:44 +0200 (CEST) Message-ID: <5159EF29.6000503@FreeBSD.org> Date: Mon, 01 Apr 2013 22:33:45 +0200 From: Martin Matuska User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:17.0) Gecko/20130307 Thunderbird/17.0.4 MIME-Version: 1.0 To: Larry Rosenman Subject: Re: [CRASH] ZFS recv (fwd)/CURRENT References: In-Reply-To: X-Enigmail-Version: 1.5.1 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org, freebsd-current@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 01 Apr 2013 20:33:48 -0000 This error seems to be limited to sending deduplicated streams. Does sending without "-D" work ok? This might be a vendor error as well. On 1.4.2013 20:05, Larry Rosenman wrote: > Re-Sending. Any ideas, guys/gals? 
> > This really gets in my way. > -- Martin Matuska FreeBSD committer http://blog.vx.sk From owner-freebsd-fs@FreeBSD.ORG Mon Apr 1 20:40:47 2013 Return-Path: Delivered-To: fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id DA08FE27; Mon, 1 Apr 2013 20:40:47 +0000 (UTC) (envelope-from avg@FreeBSD.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id EAE26F85; Mon, 1 Apr 2013 20:40:46 +0000 (UTC) Received: from porto.starpoint.kiev.ua (porto-e.starpoint.kiev.ua [212.40.38.100]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id XAA28549; Mon, 01 Apr 2013 23:40:43 +0300 (EEST) (envelope-from avg@FreeBSD.org) Received: from localhost ([127.0.0.1]) by porto.starpoint.kiev.ua with esmtp (Exim 4.34 (FreeBSD)) id 1UMlX1-0006Us-5q; Mon, 01 Apr 2013 23:40:43 +0300 Message-ID: <5159F0C9.9000302@FreeBSD.org> Date: Mon, 01 Apr 2013 23:40:41 +0300 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:17.0) Gecko/20130321 Thunderbird/17.0.4 MIME-Version: 1.0 To: Jay Borkenhagen Subject: Re: mounting failed with error 2 References: <20825.59038.104304.161698@oz.mt.att.com> In-Reply-To: <20825.59038.104304.161698@oz.mt.att.com> X-Enigmail-Version: 1.5.1 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Cc: fs@FreeBSD.org, Niclas Zeising X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 01 Apr 2013 20:40:47 -0000 on 01/04/2013 22:57 Jay Borkenhagen said the following: > Hi FS, > > I am attempting to follow Niclas Zeising's updated directions at > https://wiki.freebsd.org/RootOnZFS/GPTZFSBoot/9.0-RELEASE for > "Installing FreeBSD 9.0-RELEASE (or later) Root on ZFS using GPT." It doesn't appear that you've mentioned what version of FreeBSD you are using. > I have following those instructions verbatim twice now, both times > winding up with the same error. Niclas suggested asking here. > > After I attempt to reboot into my brand new system, I find myself with > this written on my unresponsive console: > > > =============== > Timecounter "TSC-low" frequency 8854667 Hz quality 1000 > Root mount waiting for usbus6 usbus2 > uhub6: 6 ports with 6 removable, self powered > uhub2: 6 ports with 6 removable, self powered > Trying to mount root from zfs:zroot/ROOT []... > Mounting from zfs:zroot/ROOT failed with error 2. > > Loader variables: > vfs.root.mountfrom=zfs:zroot/ROOT > > Manual root filesystem specification: > : [options] > Mount using filesystem > and with the specified (optional) option list. > > eg. ufs:/dev/da0s1a > zfs:tank > cd9660:/dev/acd0 ro > (which is equivalent to: mount -t cd9660 -o ro /dev/acd0 /) > > ? List valid disk boot devices > . 
Yield 1 second (for background tasks) > Abort manual input -- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Mon Apr 1 20:52:33 2013 Return-Path: Delivered-To: fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 619D6540; Mon, 1 Apr 2013 20:52:33 +0000 (UTC) (envelope-from jayb@braeburn.org) Received: from nbfkord-smmo07.seg.att.com (nbfkord-smmo07.seg.att.com [209.65.160.93]) by mx1.freebsd.org (Postfix) with ESMTP id ED4126D; Mon, 1 Apr 2013 20:52:32 +0000 (UTC) Received: from unknown [144.160.20.145] (EHLO mlpd192.enaf.sfdc.sbc.com) by nbfkord-smmo07.seg.att.com(mxl_mta-6.15.0-1) over TLS secured channel with ESMTP id 093f9515.0.315428.00-431.878431.nbfkord-smmo07.seg.att.com (envelope-from ); Mon, 01 Apr 2013 20:52:33 +0000 (UTC) X-MXL-Hash: 5159f39154d110e9-f7b1bd327c254c94ad87abde4fbf687911ed9c2e Received: from enaf.sfdc.sbc.com (localhost.localdomain [127.0.0.1]) by mlpd192.enaf.sfdc.sbc.com (8.14.5/8.14.5) with ESMTP id r31KqVlT001439; Mon, 1 Apr 2013 16:52:32 -0400 Received: from alpi131.aldc.att.com (alpi131.aldc.att.com [130.8.218.69]) by mlpd192.enaf.sfdc.sbc.com (8.14.5/8.14.5) with ESMTP id r31KqQwP001386 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Mon, 1 Apr 2013 16:52:28 -0400 Received: from alpi153.aldc.att.com (alpi153.aldc.att.com [130.8.42.31]) by alpi131.aldc.att.com (RSA Interceptor); Mon, 1 Apr 2013 21:52:12 +0100 Received: from aldc.att.com (localhost [127.0.0.1]) by alpi153.aldc.att.com (8.14.5/8.14.5) with ESMTP id r31KqCJn028815; Mon, 1 Apr 2013 16:52:12 -0400 Received: from oz.mt.att.com (oz.mt.att.com [135.16.165.23]) by alpi153.aldc.att.com (8.14.5/8.14.5) with ESMTP id r31KqAg9028753; Mon, 1 Apr 2013 16:52:10 -0400 Received: by oz.mt.att.com (Postfix, from userid 1000) id E47D368078A; Mon, 1 Apr 2013 16:52:09 -0400 (EDT) X-Mailer: emacs 23.3.1 (via feedmail 8 I); VM 8.2.0b under 23.3.1 (i686-pc-linux-gnu) MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Message-ID: <20825.62329.379765.344231@oz.mt.att.com> Date: Mon, 1 Apr 2013 16:52:09 -0400 From: Jay Borkenhagen To: Andriy Gapon Subject: Re: mounting failed with error 2 In-Reply-To: <5159F0C9.9000302@FreeBSD.org> References: <20825.59038.104304.161698@oz.mt.att.com> <5159F0C9.9000302@FreeBSD.org> X-GPG-Fingerprint: DDDB 542E D988 94D0 82D3 D198 7DED 6648 2308 D3C0 X-RSA-Inspected: yes X-RSA-Classifications: public Cc: fs@FreeBSD.org, Niclas Zeising X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list Reply-To: Jay Borkenhagen List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 01 Apr 2013 20:52:33 -0000 Andriy Gapon writes: > on 01/04/2013 22:57 Jay Borkenhagen said the following: > > Hi FS, > > > > I am attempting to follow Niclas Zeising's updated directions at > > https://wiki.freebsd.org/RootOnZFS/GPTZFSBoot/9.0-RELEASE for > > "Installing FreeBSD 9.0-RELEASE (or later) Root on ZFS using GPT." > > It doesn't appear that you've mentioned what version of FreeBSD you are using. Hi, I realized I had forgotten to write that as soon as I hit 'send'. :) I am using a vanilla 9.1-RELEASE memstick to install. Jay B. 
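A note on the error itself: errno 2 is ENOENT, so "failed with error 2" means the loader got far enough to look for zfs:zroot/ROOT but the kernel could not find a pool/dataset by that name, typically because of a missing or stale zpool.cache. A minimal way to inspect the pool from the 9.1 install media's Live CD shell, assuming the pool and dataset names from the wiki page (zroot and zroot/ROOT); the -R altroot keeps the pool from mounting over the live system:

# zpool import -f -R /mnt zroot
# zpool get bootfs zroot
# zfs list -r -o name,mountpoint,canmount zroot
# ls -l /mnt/boot/zfs/zpool.cache

If the cachefile is missing or stale, regenerating it and rebooting is the fix that gets worked out later in this thread:

# zpool set cachefile=/mnt/boot/zfs/zpool.cache zroot
# reboot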
From owner-freebsd-fs@FreeBSD.ORG Tue Apr 2 21:26:30 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id DA622411; Tue, 2 Apr 2013 21:26:30 +0000 (UTC) (envelope-from mm@FreeBSD.org) Received: from mail.vx.sk (mail.vx.sk [IPv6:2a01:4f8:150:6101::4]) by mx1.freebsd.org (Postfix) with ESMTP id 9FF40C1A; Tue, 2 Apr 2013 21:26:30 +0000 (UTC) Received: from core.vx.sk (localhost [127.0.0.2]) by mail.vx.sk (Postfix) with ESMTP id 8EAD4397A; Tue, 2 Apr 2013 23:26:23 +0200 (CEST) X-Virus-Scanned: amavisd-new at mail.vx.sk Received: from mail.vx.sk by core.vx.sk (amavisd-new, unix socket) with LMTP id 1UNUI-FwLiur; Tue, 2 Apr 2013 23:26:21 +0200 (CEST) Received: from [127.0.0.1] (chello085216226145.chello.sk [85.216.226.145]) by mail.vx.sk (Postfix) with ESMTPSA id EBFB93973; Tue, 2 Apr 2013 23:26:20 +0200 (CEST) Message-ID: <515B4CFA.9080706@FreeBSD.org> Date: Tue, 02 Apr 2013 23:26:18 +0200 From: Martin Matuska User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:17.0) Gecko/20130307 Thunderbird/17.0.4 MIME-Version: 1.0 To: Larry Rosenman Subject: Re: [CRASH] ZFS recv (fwd)/CURRENT References: <5159EF29.6000503@FreeBSD.org> In-Reply-To: <5159EF29.6000503@FreeBSD.org> X-Enigmail-Version: 1.5.1 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 7bit X-Antivirus: avast! (VPS 130402-0, 02/04/2013), Outbound message X-Antivirus-Status: Clean Cc: freebsd-fs@freebsd.org, freebsd-current@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 02 Apr 2013 21:26:30 -0000 On 1. 4. 2013 22:33, Martin Matuska wrote: > This error seems to be limited to sending deduplicated streams. Does > sending without "-D" work ok? This might be a vendor error as well. > > On 1.4.2013 20:05, Larry Rosenman wrote: >> Re-Sending. Any ideas, guys/gals? >> >> This really gets in my way. 
>> This may be also related to: http://www.freebsd.org/cgi/query-pr.cgi?pr=176978 -- Martin Matuska FreeBSD committer http://blog.vx.sk From owner-freebsd-fs@FreeBSD.ORG Wed Apr 3 01:52:03 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 826032FF for ; Wed, 3 Apr 2013 01:52:02 +0000 (UTC) (envelope-from allan@physics.umn.edu) Received: from mail.physics.umn.edu (smtp.spa.umn.edu [128.101.220.4]) by mx1.freebsd.org (Postfix) with ESMTP id 60019BDE for ; Wed, 3 Apr 2013 01:52:02 +0000 (UTC) Received: from c-174-53-189-64.hsd1.mn.comcast.net ([174.53.189.64] helo=[192.168.0.136]) by mail.physics.umn.edu with esmtpsa (TLSv1:CAMELLIA256-SHA:256) (Exim 4.77 (FreeBSD)) (envelope-from ) id 1UNCRl-000Pwi-AV for freebsd-fs@freebsd.org; Tue, 02 Apr 2013 20:25:09 -0500 Message-ID: <515B84E8.2090202@physics.umn.edu> Date: Tue, 02 Apr 2013 20:24:56 -0500 From: Graham Allan User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:17.0) Gecko/20130307 Thunderbird/17.0.4 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Spam-Checker-Version: SpamAssassin 3.3.2 (2011-06-06) on mrmachenry.spa.umn.edu X-Spam-Level: X-Spam-Status: No, score=-2.0 required=5.0 tests=ALL_TRUSTED,AWL,BAYES_20 autolearn=no version=3.3.2 Subject: zfs home directories best practice X-SA-Exim-Version: 4.2 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 03 Apr 2013 01:52:03 -0000 We're building a new NFS home directory server on FreeBSD with ZFS. The Solaris ZFS Best Practices docs say to create a separate filesystem for each user home directory. My instinct is to ask "Are you serious???". My gut feeling isn't entirely logical but the idea of getting 1000+ lines of output from a simple "df" just feels wrong... Can anyone comment about how well this approach actually works, specifically on FreeBSD? (we're running 9.1) Obviously it has some nice features, such as quota controls, snapshots directly available to users within their home, etc, but it leaves me concerned. I chatted with some neighbors who have a larger, Solaris-based shop, and they said that with 10,000 user home filesystems, their server could take an hour to boot (at least using the default startup scripts). They reverted to having one big shared filesystem for all, but would like to revisit the per-user approach with fewer users per server. Ours wouldn't be so large, but we could easily have around 1000 user filesystems. I haven't tested yet what effect that would have on boot time, though hope to test it over the next week. Perhaps it implies other resource usage besides the boot time issue (is there any limit to number of filesystems mounted or NFS-exported?). I wonder if anyone here has built a system along these lines and has experiences to share. 
Thanks for any comments, Graham -- ------------------------------------------------------------------------- Graham Allan School of Physics and Astronomy - University of Minnesota ------------------------------------------------------------------------- From owner-freebsd-fs@FreeBSD.ORG Wed Apr 3 20:14:24 2013 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id AB159321; Wed, 3 Apr 2013 20:14:24 +0000 (UTC) (envelope-from linimon@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) by mx1.freebsd.org (Postfix) with ESMTP id 86AE3C1F; Wed, 3 Apr 2013 20:14:24 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.6/8.14.6) with ESMTP id r33KEO9g051392; Wed, 3 Apr 2013 20:14:24 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.6/8.14.6/Submit) id r33KEOJ2051391; Wed, 3 Apr 2013 20:14:24 GMT (envelope-from linimon) Date: Wed, 3 Apr 2013 20:14:24 GMT Message-Id: <201304032014.r33KEOJ2051391@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Subject: Re: kern/177536: [zfs] zfs livelock (deadlock) with high write-to-disk load X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 03 Apr 2013 20:14:24 -0000 Old Synopsis: zfs livelock (deadlock) with high write-to-disk load New Synopsis: [zfs] zfs livelock (deadlock) with high write-to-disk load Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Wed Apr 3 20:14:16 UTC 2013 Responsible-Changed-Why: Over to maintainer(s). 
http://www.freebsd.org/cgi/query-pr.cgi?pr=177536 From owner-freebsd-fs@FreeBSD.ORG Wed Apr 3 23:56:54 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 7319D783 for ; Wed, 3 Apr 2013 23:56:54 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) Received: from esa-annu.net.uoguelph.ca (esa-annu.mail.uoguelph.ca [131.104.91.36]) by mx1.freebsd.org (Postfix) with ESMTP id 3A5C6934 for ; Wed, 3 Apr 2013 23:56:53 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: AqIEAEfAXFGDaFvO/2dsb2JhbABDDoMvgyS9QoEidIIfAQEBAwEBAQEgKyALBRYYAgINGQIpAQkmBggHBAEcBIdtBgyuDZJFgSOMPwUCfDQHgi2BEwOULYI+gR+PbIJMWyAyfQgXHg X-IronPort-AV: E=Sophos;i="4.87,404,1363147200"; d="scan'208";a="22338503" Received: from erie.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.206]) by esa-annu.net.uoguelph.ca with ESMTP; 03 Apr 2013 19:56:47 -0400 Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id 1E28CB3F13; Wed, 3 Apr 2013 19:56:47 -0400 (EDT) Date: Wed, 3 Apr 2013 19:56:47 -0400 (EDT) From: Rick Macklem To: Graham Allan Message-ID: <238802714.483457.1365033407086.JavaMail.root@erie.cs.uoguelph.ca> In-Reply-To: <515B84E8.2090202@physics.umn.edu> Subject: Re: zfs home directories best practice MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [172.17.91.202] X-Mailer: Zimbra 6.0.10_GA_2692 (ZimbraWebClient - IE8 (Win)/6.0.10_GA_2692) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 03 Apr 2013 23:56:54 -0000 Graham Allan wrote: > We're building a new NFS home directory server on FreeBSD with ZFS. > The > Solaris ZFS Best Practices docs say to create a separate filesystem > for > each user home directory. My instinct is to ask "Are you serious???". > My > gut feeling isn't entirely logical but the idea of getting 1000+ lines > of output from a simple "df" just feels wrong... > > Can anyone comment about how well this approach actually works, > specifically on FreeBSD? (we're running 9.1) Obviously it has some > nice > features, such as quota controls, snapshots directly available to > users > within their home, etc, but it leaves me concerned. I chatted with > some > neighbors who have a larger, Solaris-based shop, and they said that > with > 10,000 user home filesystems, their server could take an hour to boot > (at least using the default startup scripts). They reverted to having > one big shared filesystem for all, but would like to revisit the > per-user approach with fewer users per server. > > Ours wouldn't be so large, but we could easily have around 1000 user > filesystems. I haven't tested yet what effect that would have on boot > time, though hope to test it over the next week. Perhaps it implies > other resource usage besides the boot time issue (is there any limit > to > number of filesystems mounted or NFS-exported?). I wonder if anyone > here > has built a system along these lines and has experiences to share. > Well, there isn't any limit to the # of exported file systems afaik, but updating a large /etc/exports file takes quite a bit of time and when you use mountd (the default) for this, you can have problems. 
(You either have a period of time when no client can get response from the server or a period of time when I/O fails because the file system isn't re-exported yet.) If you choose this approach, you should look seriously at using nfse (on sourceforge) instead of mountd. You might also want to contact Garrett Wollman w.r.t. the NFS server patch(es) and setup he is using, since he has been working through performance issues (relatively successfully now, as I understand) for a fairly large NFS/ZFS server. You should be able to find a thread discussing this on freebsd-fs or freebsd-current. rick > Thanks for any comments, > > Graham > -- > ------------------------------------------------------------------------- > Graham Allan > School of Physics and Astronomy - University of Minnesota > ------------------------------------------------------------------------- > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Thu Apr 4 00:16:50 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 26884B37 for ; Thu, 4 Apr 2013 00:16:50 +0000 (UTC) (envelope-from m.e.sanliturk@gmail.com) Received: from mail-vc0-f180.google.com (mail-vc0-f180.google.com [209.85.220.180]) by mx1.freebsd.org (Postfix) with ESMTP id D9504A99 for ; Thu, 4 Apr 2013 00:16:49 +0000 (UTC) Received: by mail-vc0-f180.google.com with SMTP id m17so1951948vca.11 for ; Wed, 03 Apr 2013 17:16:49 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:x-received:in-reply-to:references:date:message-id :subject:from:to:cc:content-type; bh=E0gO4R5r5jBRnqZ2T0GgmHLRMRUPwlCiw+Gg90ukI+M=; b=I2r2hoQVIqWPbl0QPAj3TmnNFxuZYgumS0z7pOHNCB29d7WhoVb89HwRJyWUCZNclD trLnIyNMIAQDXeQ5z7KRG+CTe+b8wCA/9hrEGIG7LbkCqqVeglTBvOqXjPDlSWOMwacj qQcrl9E9rehg35lvbd+ZBOE7Bi3+AtaDfhF9Zh/52kCjFntDUVlYGsYIQ+n4LZytsu2Z iz/iT048Uq/cyMESlQJai/a8G6HdidmGCZsC/eDgE4pG4dCVuvDtY78Xp+5PZaSw6n8q 4D5+lX/q8l44tPXrRRow68QpQqU7j3KSpij8u6WnBfk9Vwo2dYSSPR9bOD87In9xeP1h CoSg== MIME-Version: 1.0 X-Received: by 10.52.27.52 with SMTP id q20mr2747253vdg.16.1365034609180; Wed, 03 Apr 2013 17:16:49 -0700 (PDT) Received: by 10.58.132.203 with HTTP; Wed, 3 Apr 2013 17:16:49 -0700 (PDT) In-Reply-To: <238802714.483457.1365033407086.JavaMail.root@erie.cs.uoguelph.ca> References: <515B84E8.2090202@physics.umn.edu> <238802714.483457.1365033407086.JavaMail.root@erie.cs.uoguelph.ca> Date: Wed, 3 Apr 2013 17:16:49 -0700 Message-ID: Subject: Re: zfs home directories best practice From: Mehmet Erol Sanliturk To: Rick Macklem Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.14 Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 04 Apr 2013 00:16:50 -0000 On Wed, Apr 3, 2013 at 4:56 PM, Rick Macklem wrote: > Graham Allan wrote: > > We're building a new NFS home directory server on FreeBSD with ZFS. > > The > > Solaris ZFS Best Practices docs say to create a separate filesystem > > for > > each user home directory. My instinct is to ask "Are you serious???". 
> > My > > gut feeling isn't entirely logical but the idea of getting 1000+ lines > > of output from a simple "df" just feels wrong... > > > > Can anyone comment about how well this approach actually works, > > specifically on FreeBSD? (we're running 9.1) Obviously it has some > > nice > > features, such as quota controls, snapshots directly available to > > users > > within their home, etc, but it leaves me concerned. I chatted with > > some > > neighbors who have a larger, Solaris-based shop, and they said that > > with > > 10,000 user home filesystems, their server could take an hour to boot > > (at least using the default startup scripts). They reverted to having > > one big shared filesystem for all, but would like to revisit the > > per-user approach with fewer users per server. > > > > Ours wouldn't be so large, but we could easily have around 1000 user > > filesystems. I haven't tested yet what effect that would have on boot > > time, though hope to test it over the next week. Perhaps it implies > > other resource usage besides the boot time issue (is there any limit > > to > > number of filesystems mounted or NFS-exported?). I wonder if anyone > > here > > has built a system along these lines and has experiences to share. > > > Well, there isn't any limit to the # of exported file systems afaik, > but updating a large /etc/exports file takes quite a bit of time and > when you use mountd (the default) for this, you can have problems. > (You either have a period of time when no client can get response > from the server or a period of time when I/O fails because the > file system isn't re-exported yet.) > > If you choose this approach, you should look seriously at using > nfse (on sourceforge) instead of mountd. > > You might also want to contact Garrett Wollman w.r.t. the NFS > server patch(es) and setup he is using, since he has been > working through performance issues (relatively successfully > now, as I understand) for a fairly large NFS/ZFS server. > You should be able to find a thread discussing this on > freebsd-fs or freebsd-current. > > rick > > > Thanks for any comments, > > > > Graham > > -- > > ------------------------------------------------------------------------- > > Graham Allan > > School of Physics and Astronomy - University of Minnesota > > ------------------------------------------------------------------------- > > > >From Google , the following link is found , but it is giving error here : http://sourceforge.net/projects/nfse/ Thank you very much . 
Mehmet Erol Sanliturk From owner-freebsd-fs@FreeBSD.ORG Thu Apr 4 00:22:01 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 9D487ED2 for ; Thu, 4 Apr 2013 00:22:01 +0000 (UTC) (envelope-from m.e.sanliturk@gmail.com) Received: from mail-vc0-f181.google.com (mail-vc0-f181.google.com [209.85.220.181]) by mx1.freebsd.org (Postfix) with ESMTP id 5D6FEB1C for ; Thu, 4 Apr 2013 00:22:01 +0000 (UTC) Received: by mail-vc0-f181.google.com with SMTP id hv10so1919767vcb.40 for ; Wed, 03 Apr 2013 17:21:55 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:x-received:in-reply-to:references:date:message-id :subject:from:to:cc:content-type; bh=n3+4753YLb2fOGg8Gtfy+K0caXMk9TkPXSXSKhp/25s=; b=r3m6219h6Zt/K591nhq7nsvYNNF63JeyNrBXKljmiv4X8SfbdYRXBU2h1pxRMtc5we L+Aw0cxxmcsXw3Px82rEosFfBQakGnZcYFQ5nFiQfEEImWLg/aLYVCSS/045UullnwIx mvIjIDqK4nfQHBTFZFyRBLDkDfSBtn5LytXf1gWeu4CYz9mSQ1cG1ieUw07kjEVr8GcM ov7aqju1qUZrB4NMTVBdFnUW08jwfHTgUADRSTnuir0xxEd9X4NgRMuEPFKjExCJIQ5M C+ThS4tZyQjVehfWnzX8iLN++J/E4CzPm86yivrPt+Q81LxC3ES2ROyhfAaAS5bIYpg4 2eEQ== MIME-Version: 1.0 X-Received: by 10.52.27.17 with SMTP id p17mr2705824vdg.0.1365034914927; Wed, 03 Apr 2013 17:21:54 -0700 (PDT) Received: by 10.58.132.203 with HTTP; Wed, 3 Apr 2013 17:21:54 -0700 (PDT) In-Reply-To: <238802714.483457.1365033407086.JavaMail.root@erie.cs.uoguelph.ca> References: <515B84E8.2090202@physics.umn.edu> <238802714.483457.1365033407086.JavaMail.root@erie.cs.uoguelph.ca> Date: Wed, 3 Apr 2013 17:21:54 -0700 Message-ID: Subject: Re: zfs home directories best practice From: Mehmet Erol Sanliturk To: Rick Macklem Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.14 Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 04 Apr 2013 00:22:01 -0000 On Wed, Apr 3, 2013 at 4:56 PM, Rick Macklem wrote: > Graham Allan wrote: > > We're building a new NFS home directory server on FreeBSD with ZFS. > > The > > Solaris ZFS Best Practices docs say to create a separate filesystem > > for > > each user home directory. My instinct is to ask "Are you serious???". > > My > > gut feeling isn't entirely logical but the idea of getting 1000+ lines > > of output from a simple "df" just feels wrong... > > > > Can anyone comment about how well this approach actually works, > > specifically on FreeBSD? (we're running 9.1) Obviously it has some > > nice > > features, such as quota controls, snapshots directly available to > > users > > within their home, etc, but it leaves me concerned. I chatted with > > some > > neighbors who have a larger, Solaris-based shop, and they said that > > with > > 10,000 user home filesystems, their server could take an hour to boot > > (at least using the default startup scripts). They reverted to having > > one big shared filesystem for all, but would like to revisit the > > per-user approach with fewer users per server. > > > > Ours wouldn't be so large, but we could easily have around 1000 user > > filesystems. I haven't tested yet what effect that would have on boot > > time, though hope to test it over the next week. 
Perhaps it implies > > other resource usage besides the boot time issue (is there any limit > > to > > number of filesystems mounted or NFS-exported?). I wonder if anyone > > here > > has built a system along these lines and has experiences to share. > > > Well, there isn't any limit to the # of exported file systems afaik, > but updating a large /etc/exports file takes quite a bit of time and > when you use mountd (the default) for this, you can have problems. > (You either have a period of time when no client can get response > from the server or a period of time when I/O fails because the > file system isn't re-exported yet.) > > If you choose this approach, you should look seriously at using > nfse (on sourceforge) instead of mountd. > > You might also want to contact Garrett Wollman w.r.t. the NFS > server patch(es) and setup he is using, since he has been > working through performance issues (relatively successfully > now, as I understand) for a fairly large NFS/ZFS server. > You should be able to find a thread discussing this on > freebsd-fs or freebsd-current. > > rick > > > Thanks for any comments, > > > > Graham > > -- > > ------------------------------------------------------------------------- > > Graham Allan > > School of Physics and Astronomy - University of Minnesota > > ------------------------------------------------------------------------- > > _______________________________________________ > > > I am sorry that , my previous message sent early . After trying some more retry , it worked : http://sourceforge.net/projects/nfse/ ( NFSE for FreeBSD NFS server ) Thank you very much . Mehmet Erol Sanliturk From owner-freebsd-fs@FreeBSD.ORG Thu Apr 4 02:31:19 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id DA774D13 for ; Thu, 4 Apr 2013 02:31:19 +0000 (UTC) (envelope-from bfriesen@simple.dallas.tx.us) Received: from blade.simplesystems.org (blade.simplesystems.org [65.66.246.74]) by mx1.freebsd.org (Postfix) with ESMTP id 88972FE7 for ; Thu, 4 Apr 2013 02:31:19 +0000 (UTC) Received: from freddy.simplesystems.org (freddy.simplesystems.org [65.66.246.65]) by blade.simplesystems.org (8.14.4+Sun/8.14.4) with ESMTP id r342KWvV005394; Wed, 3 Apr 2013 21:20:32 -0500 (CDT) Date: Wed, 3 Apr 2013 21:20:32 -0500 (CDT) From: Bob Friesenhahn X-X-Sender: bfriesen@freddy.simplesystems.org To: Graham Allan Subject: Re: zfs home directories best practice In-Reply-To: <515B84E8.2090202@physics.umn.edu> Message-ID: References: <515B84E8.2090202@physics.umn.edu> User-Agent: Alpine 2.01 (GSO 1266 2009-07-14) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.2.2 (blade.simplesystems.org [65.66.246.90]); Wed, 03 Apr 2013 21:20:32 -0500 (CDT) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 04 Apr 2013 02:31:19 -0000 On Tue, 2 Apr 2013, Graham Allan wrote: > We're building a new NFS home directory server on FreeBSD with ZFS. The > Solaris ZFS Best Practices docs say to create a separate filesystem for each > user home directory. My instinct is to ask "Are you serious???". My gut > feeling isn't entirely logical but the idea of getting 1000+ lines of output > from a simple "df" just feels wrong... 
> > Can anyone comment about how well this approach actually works, specifically > on FreeBSD? (we're running 9.1) Obviously it has some nice features, such as > quota controls, snapshots directly available to users within their home, etc, > but it leaves me concerned. I chatted with some neighbors who have a larger, > Solaris-based shop, and they said that with 10,000 user home filesystems, > their server could take an hour to boot (at least using the default startup > scripts). They reverted to having one big shared filesystem for all, but > would like to revisit the per-user approach with fewer users per server. As others have said, the NFS export is where the time gets expended. It is not necessary to have zfs do the exports for you. There is indeed value to each user having their own home directory. The 1000+ lines of output from 'df' is quite useful if it tells you exactly how much space each user has consumed so that you don't have to do something really evil like run 'du' in each directory to find the hogs. The ability to snapshot filesystems on a per-user basis is quite useful. Probably I shouldn't be answering since I have only used this at a small scale with a Solaris server (but with a FreeBSD client). Having a good NFS automounter on the clients is useful if you have a home directory per user. The AMD automounter which comes with FreeBSD is just barely competent for the task. It is able to automount user home directories on request but not enumerate them via 'ls /home/*' as Solaris and Apple OS X clients can. It will only list the currently automounted directories. Bob -- Bob Friesenhahn bfriesen@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer, http://www.GraphicsMagick.org/ From owner-freebsd-fs@FreeBSD.ORG Thu Apr 4 03:58:12 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 1E71F3EB for ; Thu, 4 Apr 2013 03:58:12 +0000 (UTC) (envelope-from allan@physics.umn.edu) Received: from mail.physics.umn.edu (smtp.spa.umn.edu [128.101.220.4]) by mx1.freebsd.org (Postfix) with ESMTP id EFDCF302 for ; Thu, 4 Apr 2013 03:58:11 +0000 (UTC) Received: from c-174-53-189-64.hsd1.mn.comcast.net ([174.53.189.64] helo=[192.168.0.138]) by mail.physics.umn.edu with esmtpsa (TLSv1:CAMELLIA256-SHA:256) (Exim 4.77 (FreeBSD)) (envelope-from ) id 1UNbJH-000A2o-FM; Wed, 03 Apr 2013 22:58:05 -0500 Message-ID: <515CFA63.8080808@physics.umn.edu> Date: Wed, 03 Apr 2013 22:58:27 -0500 From: Graham Allan User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:17.0) Gecko/20130307 Thunderbird/17.0.4 MIME-Version: 1.0 To: Bob Friesenhahn References: <515B84E8.2090202@physics.umn.edu> In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Spam-Checker-Version: SpamAssassin 3.3.2 (2011-06-06) on mrmachenry.spa.umn.edu X-Spam-Level: X-Spam-Status: No, score=-2.3 required=5.0 tests=ALL_TRUSTED,AWL,BAYES_05, TW_ZF autolearn=no version=3.3.2 Subject: Re: zfs home directories best practice X-SA-Exim-Version: 4.2 Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 04 Apr 2013 03:58:12 -0000 On 4/3/2013 9:20 PM, Bob Friesenhahn wrote: > On Tue, 2 Apr 2013, Graham Allan wrote: > >> We're building a new NFS home directory server on 
FreeBSD with ZFS. >> The Solaris ZFS Best Practices docs say to create a separate >> filesystem for each user home directory. My instinct is to ask "Are >> you serious???". My gut feeling isn't entirely logical but the idea of >> getting 1000+ lines of output from a simple "df" just feels wrong... ... > > As others have said, the NFS export is where the time gets expended. It > is not necessary to have zfs do the exports for you. > > There is indeed value to each user having their own home directory. The > 1000+ lines of output from 'df' is quite useful if it tells you exactly > how much space each user has consumed so that you don't have to do > something really evil like run 'du' in each directory to find the hogs. > The ability to snapshot filesystems on a per-user basis is quite useful. I haven't tried it yet, but I imagine the "zfs userquota" function can list each user's space consumption much like on our existing (ahem) "legacy" unix system. The filesystem quota still does seem nicer since it can inherit its value from the parent object. Many times we've found we simply missed applying any quota to some handful of users on our existing system...! I would like to give users access to their own snapshots, and I feel sure it should be possible to permit that somehow, even if they aren't within the home directory. We did try this on our rsync-based backup system, which started out with UFS snapshots which we would NFS-export to let users access them. When we switched that system to ZFS, though, we found it would sometimes panic, we think when it was trying to delete a snapshot which someone had mounted over NFS. However that was a while back (FreeBSD 7.1 days) and we were probably doing a lot of things wrong... > Probably I shouldn't be answering since I have only used this at a small > scale with a Solaris server (but with a FreeBSD client). > > Having a good NFS automounter on the clients is useful if you have a > home directory per user. The AMD automounter which comes with FreeBSD > is just barely competent for the task. It is able to automount user > home directories on request but not enumerate them via 'ls /home/*' as > Solaris and Apple OS X clients can. It will only list the currently > automounted directories. I have to agree I'm not a big fan of "amd", though it also has a lot of complexity I haven't dug into. Almost all the clients will be RHEL (well, Scientific Linux) using autofs. Thanks!
Graham -- From owner-freebsd-fs@FreeBSD.ORG Thu Apr 4 04:32:45 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 69B335B8 for ; Thu, 4 Apr 2013 04:32:45 +0000 (UTC) (envelope-from allan@physics.umn.edu) Received: from mail.physics.umn.edu (smtp.spa.umn.edu [128.101.220.4]) by mx1.freebsd.org (Postfix) with ESMTP id 4881A3D3 for ; Thu, 4 Apr 2013 04:32:44 +0000 (UTC) Received: from c-174-53-189-64.hsd1.mn.comcast.net ([174.53.189.64] helo=[192.168.0.138]) by mail.physics.umn.edu with esmtpsa (TLSv1:CAMELLIA256-SHA:256) (Exim 4.77 (FreeBSD)) (envelope-from ) id 1UNbqt-000ASk-4J; Wed, 03 Apr 2013 23:32:44 -0500 Message-ID: <515D0287.2060704@physics.umn.edu> Date: Wed, 03 Apr 2013 23:33:11 -0500 From: Graham Allan User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:17.0) Gecko/20130307 Thunderbird/17.0.4 MIME-Version: 1.0 To: Rick Macklem References: <238802714.483457.1365033407086.JavaMail.root@erie.cs.uoguelph.ca> In-Reply-To: <238802714.483457.1365033407086.JavaMail.root@erie.cs.uoguelph.ca> Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 7bit X-Spam-Checker-Version: SpamAssassin 3.3.2 (2011-06-06) on mrmachenry.spa.umn.edu X-Spam-Level: X-Spam-Status: No, score=-2.0 required=5.0 tests=ALL_TRUSTED,AWL,BAYES_20, TW_ZF autolearn=no version=3.3.2 Subject: Re: zfs home directories best practice X-SA-Exim-Version: 4.2 Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 04 Apr 2013 04:32:45 -0000 On 4/3/2013 6:56 PM, Rick Macklem wrote: >> > Well, there isn't any limit to the # of exported file systems afaik, > but updating a large /etc/exports file takes quite a bit of time and > when you use mountd (the default) for this, you can have problems. > (You either have a period of time when no client can get response > from the server or a period of time when I/O fails because the > file system isn't re-exported yet.) > > If you choose this approach, you should look seriously at using > nfse (on sourceforge) instead of mountd. That's an interesting-looking project though I'm beginning to think that unless there's some serious downside to the "one big filesystem", I should just defer the per-user filesystems for the system after this one. As you remind me below, I'll probably have other issues to chase down besides that one (performance as well as making the jump to NFSv4...) > You might also want to contact Garrett Wollman w.r.t. the NFS > server patch(es) and setup he is using, since he has been > working through performance issues (relatively successfully > now, as I understand) for a fairly large NFS/ZFS server. > You should be able to find a thread discussing this on > freebsd-fs or freebsd-current. I found the thread "NFS server bottlenecks" on freebsd-hackers, which has a lot of interesting reading, and then also "NFS DRC size" on freebsd-fs. We might dig into some of that material (eg DRC-related patches) though I probably need to spend more time on basics first (kernel parameters, number of nfsd threads, etc). 
Thanks for the pointers, Graham -- From owner-freebsd-fs@FreeBSD.ORG Thu Apr 4 06:01:37 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 7F794B19 for ; Thu, 4 Apr 2013 06:01:37 +0000 (UTC) (envelope-from sodynet1@gmail.com) Received: from mail-da0-x235.google.com (mail-da0-x235.google.com [IPv6:2607:f8b0:400e:c00::235]) by mx1.freebsd.org (Postfix) with ESMTP id 5FDF47DC for ; Thu, 4 Apr 2013 06:01:37 +0000 (UTC) Received: by mail-da0-f53.google.com with SMTP id n34so989754dal.12 for ; Wed, 03 Apr 2013 23:01:37 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:x-received:date:message-id:subject:from:to :content-type; bh=/TY/Q5l/7su/OZLUbYo02U1W7pcjmOTghISS16oMrAw=; b=vFP8lqaBE5AtqXvz73nm+UHKZJZAR2Fbp/BMAdj9wffQp9b6QjjZ1aLJt0gl+FdFcu wVk8MS2XUIlyehwY0pM6S3m0zOHLjqPoN4HjcfUvnp/5OTV+3RXnSnNTN2i3mTOpaNZ/ 2VOk1Aq/E6tDN6LqsW1Yv342+MqCJQ2vxsvUR/3NtjqRO7rOwxP0Z0L6DaViadGARr08 SEwt6kLIzHi+OjGVQtlnue6/lhP/roOgZel3A9/4YpcH8A6qg3GXHEaR90oHCeqenz0t CgXZS3eLQXZUqLopPNEyU/xdoKJdZGPXSJWOHfr+vHLhhSyG+wAHJgH9JFkf+U3ri0Kd srzw== MIME-Version: 1.0 X-Received: by 10.66.179.238 with SMTP id dj14mr7793879pac.68.1365055297126; Wed, 03 Apr 2013 23:01:37 -0700 (PDT) Received: by 10.70.34.103 with HTTP; Wed, 3 Apr 2013 23:01:37 -0700 (PDT) Date: Thu, 4 Apr 2013 09:01:37 +0300 Message-ID: Subject: ZFS in production environments From: Sami Halabi To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.14 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 04 Apr 2013 06:01:37 -0000

Hi, I registered on this list last year in order to get more involved with the ZFS filesystem. I must admit I haven't installed it on any production machine yet, only in a VM for testing that I set up recently.

I see a lot of bug/patch/stability traffic regarding ZFS, which makes me wonder:
1. Is it really ready for production environments?
2. Is there anyone that installed it in production who can give some feedback about stability and configuration?
3. From all the mails of recommendations I've seen, is someone on the FreeBSD team taking the recommendations and putting them somewhere in a single document that describes all the suggestions, rather than mailing lists?
Thanks in advance, -- Sami Halabi Information Systems Engineer NMS Projects Expert FreeBSD SysAdmin Expert From owner-freebsd-fs@FreeBSD.ORG Thu Apr 4 07:27:09 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id BBDA852D for ; Thu, 4 Apr 2013 07:27:09 +0000 (UTC) (envelope-from henner.heck@web.de) Received: from mout.web.de (mout.web.de [212.227.15.4]) by mx1.freebsd.org (Postfix) with ESMTP id 6B56BAB1 for ; Thu, 4 Apr 2013 07:27:09 +0000 (UTC) Received: from sender ([95.112.148.249]) by smtp.web.de (mrweb102) with ESMTPSA (Nemesis) id 0LshWf-1Uljzz0XpB-011q1A for ; Thu, 04 Apr 2013 09:21:58 +0200 Message-ID: <515D2A18.6060305@web.de> Date: Thu, 04 Apr 2013 09:22:00 +0200 From: Henner Heck User-Agent: Mozilla/5.0 (X11; U; Linux i686; de; rv:1.9.2.15) Gecko/20110303 Thunderbird/3.1.9 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: Re: mounting failed with error 2 References: <20825.59038.104304.161698@oz.mt.att.com> <5159F0C9.9000302@FreeBSD.org> <20825.62329.379765.344231@oz.mt.att.com> In-Reply-To: <20825.62329.379765.344231@oz.mt.att.com> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Provags-ID: V02:K0:wlaFm0Q1vKtRD50ZLYVvaHkGzhqLMkr/LkUy2zFcHIm RVMznRHIDvmJx5Nrw3oshSDvKHbxYd9ChW9LcTC5LYOx+HQQJH +djHGIX0A4hqvMkb7ePh9EFx/gpaRrzRQXFZ/2dEVC69yPCGcB M+bsyTxXCfIos6lh5To0RRTavAZbZJkTLCwkREB4FNFhrZMQPh RT4ARgmydjsDxrq8SU22w== X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 04 Apr 2013 07:27:09 -0000

On 01.04.2013 22:52, Jay Borkenhagen wrote: > Andriy Gapon writes: > > on 01/04/2013 22:57 Jay Borkenhagen said the following: > > > Hi FS, > > > > > > I am attempting to follow Niclas Zeising's updated directions at > > > https://wiki.freebsd.org/RootOnZFS/GPTZFSBoot/9.0-RELEASE for > > > "Installing FreeBSD 9.0-RELEASE (or later) Root on ZFS using GPT." > > > > It doesn't appear that you've mentioned what version of FreeBSD you are using. > > Hi, > > I realized I had forgotten to write that as soon as I hit 'send'. :) > > I am using a vanilla 9.1-RELEASE memstick to install. > > Jay B. > > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" >

Hello, I too followed these instructions, with the same result as you: "error 2" on boot. It seems to be all about the "zpool.cache" and when and how to export or not export the pool. After searching the web and trying out several modifications, the instructions below worked for me just yesterday with a FreeBSD 9.1-RELEASE stick. The main change is not to export the pool before reboot, and to create the zpool.cache at the end instead of copying an existing one, though I am not sure if the second change is actually needed. Setting the pool's mountpoint to "none" might also not be necessary, but I'm a bit fed up with installing once again just to find out. I also modified the GPT partitioning a bit, but that is not crucial for success.
-------------------------------------------------------------------------------------------------------------------------------
Edited version of https://wiki.freebsd.org/RootOnZFS/GPTZFSBoot/9.0-RELEASE (2013-02-07 22:06:22 by Niclas Zeising), modifications by Henner Heck.
--
Installing FreeBSD 9.1-RELEASE Root on ZFS using GPT

Creating a bootable ZFS filesystem

Note: This walkthrough assumes that a zfs mirror of two disks is created, but the instructions work equally well for a single disk or a raidz or raidz2 setup; just replace 'mirror' as needed in the examples.

Boot the FreeBSD install CD/DVD or USB memstick. Go through the initial setup as usual. Choose the 'Shell' option at the Partitioning dialog in bsdinstall.

Create GPT partition tables; repeat this for all disks.

# gpart create -s gpt ada0
# gpart create -s gpt ada1

Add initial partitions for the boot loader and swap, and install the protective MBR and gptzfsboot boot loader. It is possible to have swap partitions on zfs pools, but this might lead to deadlocks in low memory situations. The boot loader partition gets the maximum allowed size of 512k; the start positions of all following partitions are aligned to 1M.

# gpart add -s 512k -t freebsd-boot -l boot0 ada0
# gpart add -a 1M -s 8G -t freebsd-swap -l swap0 ada0
# gpart add -a 1M -t freebsd-zfs -l disk0 ada0
# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
# gpart add -s 512k -t freebsd-boot -l boot1 ada1
# gpart add -a 1M -s 8G -t freebsd-swap -l swap1 ada1
# gpart add -a 1M -t freebsd-zfs -l disk1 ada1
# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1

Load the necessary kernel modules:

# kldload opensolaris
# kldload zfs

Create the zfs pool. Replace mirror as needed, and add all disks that should be part of the pool.

# zpool create -o altroot=/mnt -O canmount=off zroot mirror /dev/gpt/disk0 /dev/gpt/disk1

This will create a zpool called zroot, which will not be mounted. The canmount property is also set to off, to avoid mounting this by accident. This zpool is only used to derive other file systems from.

Installing FreeBSD to the ZFS file system

Create the ZFS filesystem hierarchy. The fletcher4 algorithm should be more robust than the fletcher2 algorithm.

# zfs set checksum=fletcher4 zroot
# zfs create -o mountpoint=/ zroot/ROOT
# zfs create -o compression=on -o exec=on -o setuid=off -o mountpoint=/tmp zroot/tmp
# chmod 1777 /mnt/tmp
# zfs create -o mountpoint=/usr zroot/usr
# zfs create zroot/usr/local
# zfs create -o compression=lzjb -o setuid=off zroot/usr/ports
# zfs create -o compression=off -o exec=off -o setuid=off zroot/usr/ports/distfiles
# zfs create -o compression=off -o exec=off -o setuid=off zroot/usr/ports/packages

Note: If you use nullfs or nfs to mount /usr/ports to different locations/servers, you will also need to nullfs/nfs mount /usr/ports/distfiles and/or /usr/ports/packages.

# zfs create -o compression=lzjb -o exec=off -o setuid=off zroot/usr/src
# zfs create zroot/usr/obj
# zfs create -o mountpoint=/var zroot/var
# zfs create -o compression=lzjb -o exec=off -o setuid=off zroot/var/crash
# zfs create -o exec=off -o setuid=off zroot/var/db
# zfs create -o compression=lzjb -o exec=on -o setuid=off zroot/var/db/pkg
# zfs create -o exec=off -o setuid=off zroot/var/empty
# zfs create -o compression=lzjb -o exec=off -o setuid=off zroot/var/log
# zfs create -o compression=gzip -o exec=off -o setuid=off zroot/var/mail
# zfs create -o exec=off -o setuid=off zroot/var/run
# zfs create -o compression=lzjb -o exec=on -o setuid=off zroot/var/tmp
# chmod 1777 /mnt/var/tmp
# zfs create -o setuid=off -o mountpoint=/home zroot/home

Note: If /home is mounted from someplace else, such as over NFS, there is no need to create zroot/home above.

Note: Compression may be set to on, off, lzjb, gzip, or gzip-N (where N is an integer from 1 (fastest) to 9 (best compression ratio); gzip is equivalent to gzip-6). Compression will cause some latency when accessing files on the ZFS filesystems. Use compression on ZFS filesystems which will not be accessed that often.

When all zfs filesystems are created, exit from the shell and proceed with the installation as normal. It will be possible to drop into a shell in the newly installed system and configure it at this point. When the installation is finished, choose the Live CD option and log in as root.

Finishing touches

Enable ZFS in the startup configuration to mount zfs filesystems on startup.

# echo 'zfs_enable="YES"' >> /mnt/etc/rc.conf
# echo 'zfs_load="YES"' >> /mnt/boot/loader.conf
# echo 'vfs.root.mountfrom="zfs:zroot/ROOT"' >> /mnt/boot/loader.conf

Add swap devices to fstab, so that they will automatically show up when the system starts.

# cat << EOF > /mnt/etc/fstab
# Device Mountpoint FStype Options Dump Pass#
/dev/gpt/swap0 none swap sw 0 0
/dev/gpt/swap1 none swap sw 0 0
EOF

Set the mountpoint of pool zroot to none:

# zfs set mountpoint=none zroot

Set the bootfs:

# zpool set bootfs=zroot/ROOT zroot

Set /var/empty read-only; it is supposed to be empty at all times.

# zfs set readonly=on zroot/var/empty

Generate the zpool cachefile, to be able to boot from the new pool.

# zpool set cachefile=/mnt/boot/zfs/zpool.cache zroot

To finish the installation, simply reboot from the Live CD; do not forget to remove the installation media from the computer.
# reboot

-------------------------------------------------------------------------------------------------------------------------------

From owner-freebsd-fs@FreeBSD.ORG Thu Apr 4 08:00:26 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id E1082CB0 for ; Thu, 4 Apr 2013 08:00:26 +0000 (UTC) (envelope-from rainer@ultra-secure.de) Received: from mail.ultra-secure.de (mail.ultra-secure.de [78.47.114.122]) by mx1.freebsd.org (Postfix) with ESMTP id 3A54DD49 for ; Thu, 4 Apr 2013 08:00:25 +0000 (UTC) Received: (qmail 53689 invoked by uid 89); 4 Apr 2013 07:59:38 -0000 Received: by simscan 1.4.0 ppid: 53684, pid: 53686, t: 0.0423s scanners: attach: 1.4.0 clamav: 0.97.3/m:54/d:16953 Received: from unknown (HELO suse3) (rainer@ultra-secure.de@212.71.117.1) by mail.ultra-secure.de with ESMTPA; 4 Apr 2013 07:59:38 -0000 Date: Thu, 4 Apr 2013 09:59:37 +0200 From: Rainer Duffner To: freebsd-fs@freebsd.org Subject: Re: ZFS in production environments Message-ID: <20130404095937.687a0970@suse3> In-Reply-To: References: X-Mailer: Claws Mail 3.8.1 (GTK+ 2.24.10; x86_64-suse-linux-gnu) Mime-Version: 1.0 Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: quoted-printable X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 04 Apr 2013 08:00:26 -0000

On Thu, 4 Apr 2013 09:01:37 +0300, Sami Halabi wrote:
> Hi, I registered on this list last year in order to get more involved with the ZFS filesystem. I must admit I haven't installed it on any production machine yet, only in a VM for testing that I set up recently.
>
> I see a lot of bug/patch/stability traffic regarding ZFS, which makes me wonder:
> 1. Is it really ready for production environments?

It depends on the environment, I'm afraid. We don't have any problems with it anymore since 8.3.

> 2. Is there anyone that installed it in production who can give some feedback about stability and configuration?

Again, it depends on the use-case. We do some hosting of web-pages with it (a couple of hundred home directories with a couple of hundred GB of data altogether). Some (much?) also depends on the hardware used.

> 3. From all the mails of recommendations I've seen, is someone on the FreeBSD team taking the recommendations and putting them somewhere in a single document that describes all the suggestions, rather than mailing lists?

There is https://wiki.freebsd.org/ZFSTuningGuide

From time to time, the list gets postings like yours, with people nodding and agreeing that someone should write it all up.... It pays to trawl the archives of this list.

From what I have read on this list, there is no speed advantage with ZFS over UFS, just much more flexibility and features (zero-copy snapshots, de-duplication, etc.).

What's your intended usage scenario?
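To make the "zero-copy snapshots" mentioned above concrete: a snapshot is created instantly and initially consumes no extra space, which is what makes frequent per-dataset backups cheap. A small sketch of the basic cycle (the tank/www dataset name is made up for illustration):

# zfs snapshot tank/www@2013-04-04
# zfs list -t snapshot -r tank/www
# zfs rollback tank/www@2013-04-04
# zfs destroy tank/www@2013-04-04

Note that zfs rollback only reverts to the most recent snapshot; rolling back past it requires -r, which destroys the intervening snapshots.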
From owner-freebsd-fs@FreeBSD.ORG Thu Apr 4 08:10:23 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id D1E2610B for ; Thu, 4 Apr 2013 08:10:23 +0000 (UTC) (envelope-from ronald-freebsd8@klop.yi.org) Received: from smarthost1.greenhost.nl (smarthost1.greenhost.nl [195.190.28.78]) by mx1.freebsd.org (Postfix) with ESMTP id 96F14DDA for ; Thu, 4 Apr 2013 08:10:23 +0000 (UTC) Received: from smtp.greenhost.nl ([213.108.104.138]) by smarthost1.greenhost.nl with esmtps (TLS1.0:RSA_AES_256_CBC_SHA1:32) (Exim 4.69) (envelope-from ) id 1UNfFP-00007p-8v for freebsd-fs@freebsd.org; Thu, 04 Apr 2013 10:10:15 +0200 Received: from [81.21.138.17] (helo=ronaldradial.versatec.local) by smtp.greenhost.nl with esmtpsa (TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.72) (envelope-from ) id 1UNfFO-0005jl-SG for freebsd-fs@freebsd.org; Thu, 04 Apr 2013 10:10:14 +0200 Content-Type: text/plain; charset=us-ascii; format=flowed; delsp=yes To: freebsd-fs@freebsd.org Subject: Re: ZFS in production environments References: Date: Thu, 04 Apr 2013 10:10:14 +0200 MIME-Version: 1.0 Content-Transfer-Encoding: 8bit From: "Ronald Klop" Message-ID: In-Reply-To: User-Agent: Opera Mail/12.14 (Win32) X-Virus-Scanned: by clamav at smarthost1.samage.net X-Spam-Level: / X-Spam-Score: -0.0 X-Spam-Status: No, score=-0.0 required=5.0 tests=BAYES_20 autolearn=disabled version=3.3.1 X-Scan-Signature: a2d32f98be707cbcda8602d5fffa976a X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 04 Apr 2013 08:10:23 -0000

On Thu, 04 Apr 2013 08:01:37 +0200, Sami Halabi wrote:
> Hi, I registered on this list last year in order to get more involved with the ZFS filesystem. I must admit I haven't installed it on any production machine yet, only in a VM for testing that I set up recently.
>
> I see a lot of bug/patch/stability traffic regarding ZFS, which makes me wonder:
> 1. Is it really ready for production environments?

Mailing lists have the habit of collecting negative stories, because people with positive stories don't have a reason to mail about them. So it is natural that you see a lot of bug/patch/stability issues on the list. If you follow the Linux releases you will also see that every release contains a lot of updates to the filesystems. Most patches are for edge cases.

> 2. Is there anyone that installed it in production who can give some feedback about stability and configuration?

Yes, at a previous job: a backup server with 96 disks running rsync from a lot of servers. It runs nicely. Advice: use amd64 and install a lot of memory. Further tuning depends on what you want to do with it.

> 3. From all the mails of recommendations I've seen, is someone on the FreeBSD team taking the recommendations and putting them somewhere in a single document that describes all the suggestions, rather than mailing lists?

I don't know. There might be something on wiki.freebsd.org.

>
> Thanks in advance,

Regards,
Ronald.
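On the "install a lot of memory" advice: the one knob that comes up in almost every production report on this list is the ARC size, which on FreeBSD 8/9 is capped at boot via /boot/loader.conf. A hedged starting point for a box that runs other services besides ZFS (the value is illustrative and given in bytes; by default the ARC may grow to nearly all of RAM):

# /boot/loader.conf
# cap the ZFS ARC at 8 GB so other daemons keep some memory
vfs.zfs.arc_max="8589934592"

The effective value can be read back after boot with:

# sysctl vfs.zfs.arc_max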
From owner-freebsd-fs@FreeBSD.ORG Thu Apr 4 08:40:20 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 575708B9 for ; Thu, 4 Apr 2013 08:40:20 +0000 (UTC) (envelope-from mad@madpilot.net) Received: from winston.madpilot.net (winston.madpilot.net [78.47.75.155]) by mx1.freebsd.org (Postfix) with ESMTP id 1A235EFC for ; Thu, 4 Apr 2013 08:40:19 +0000 (UTC) Received: from winston.madpilot.net (localhost [127.0.0.1]) by winston.madpilot.net (Postfix) with ESMTP id 3ZhHfN1Gf3zFTWC for ; Thu, 4 Apr 2013 10:40:12 +0200 (CEST) X-Virus-Scanned: amavisd-new at madpilot.net Received: from winston.madpilot.net ([127.0.0.1]) by winston.madpilot.net (winston.madpilot.net [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 5yX26HR1eFDo for ; Thu, 4 Apr 2013 10:40:08 +0200 (CEST) Received: from vwg82.hq.ignesti.it (unknown [77.246.14.1]) by winston.madpilot.net (Postfix) with ESMTPSA for ; Thu, 4 Apr 2013 10:40:08 +0200 (CEST) Message-ID: <515D3C64.30603@madpilot.net> Date: Thu, 04 Apr 2013 10:40:04 +0200 From: Guido Falsi User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:17.0) Gecko/20130312 Thunderbird/17.0.4 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: Re: zfs home directories best practice References: <515B84E8.2090202@physics.umn.edu> In-Reply-To: Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 04 Apr 2013 08:40:20 -0000 On 04/04/13 04:20, Bob Friesenhahn wrote: > Having a good NFS automounter on the clients is useful if you have a > home directory per user. The AMD automounter which comes with FreeBSD > is just barely competent for the task. It is able to automount user > home directories on request but not enumerate them via 'ls /home/*' as > Solaris and Apple OS X clients can. It will only list the currently > automounted directories. amd has a "browsable_dirs" directive in its configuration file, which should do exactly what you are asking for. On a small scale it works fine; I don't know whether it has significant downsides on a large-scale system like the one you are talking about.
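For reference, a minimal amd.conf fragment along those lines. The /home map name is only an example, and amd has to be started with -F /etc/amd.conf for the file to be read at all:

  [ global ]
  browsable_dirs = yes

  [ /home ]
  map_name = amd.home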
-- Guido Falsi From owner-freebsd-fs@FreeBSD.ORG Thu Apr 4 12:36:04 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 35E579C9 for ; Thu, 4 Apr 2013 12:36:04 +0000 (UTC) (envelope-from joh.hendriks@gmail.com) Received: from mail-ea0-x235.google.com (mail-ea0-x235.google.com [IPv6:2a00:1450:4013:c01::235]) by mx1.freebsd.org (Postfix) with ESMTP id C3430DB5 for ; Thu, 4 Apr 2013 12:36:03 +0000 (UTC) Received: by mail-ea0-f181.google.com with SMTP id z10so972238ead.12 for ; Thu, 04 Apr 2013 05:36:02 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=x-received:message-id:date:from:user-agent:mime-version:to:subject :references:in-reply-to:content-type:content-transfer-encoding; bh=ubuKgoO8t48bA8FeGZvnG70D/rL0oC+lb7xVKvzGq7k=; b=reOGIRkjQxjaIRn2LY9AX8+jyvCPRVK711la89hkkms0RC7yKOwvCFiAcicCoXL1vK z4AsKNaKsVIu7y3mxf6uQfMtJWEG8i47/PmhP6HGXce5cowADjjXJ8QU5lWEceWHVrYX Z6mg5sUSK6JiQaXiek4cyZ9sP3V0yuAztcH3sQDktUPHsMtBrCFfGEaR8zmOdrUqo3mf U4GREj92zWmoSAIzX4Ii9rScc+qxl0WQN4Dtx+zJZb3OIa3vgyc3vz4Gg/SxyOSv3h3b XYMhOxaFFoACjNh8wxwQfwiQf1fSFltMIqn/v4B17gU9T6faviKI5NZSFk6YI4PW1MeT SY8Q== X-Received: by 10.14.207.200 with SMTP id n48mr10843931eeo.4.1365078962917; Thu, 04 Apr 2013 05:36:02 -0700 (PDT) Received: from [192.168.1.129] (schavemaker.nl. [213.84.84.186]) by mx.google.com with ESMTPS id cd3sm1273979eeb.6.2013.04.04.05.36.01 (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Thu, 04 Apr 2013 05:36:02 -0700 (PDT) Message-ID: <515D73B0.2060903@gmail.com> Date: Thu, 04 Apr 2013 14:36:00 +0200 From: Johan Hendriks User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:17.0) Gecko/20130328 Thunderbird/17.0.5 MIME-Version: 1.0 To: Sami Halabi , freebsd-fs@freebsd.org Subject: Re: ZFS in production enviroments References: In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 04 Apr 2013 12:36:04 -0000 Sami Halabi wrote: > Hi, > I've registered the last year to the list in order to get more involved in > ZFS filesystem. > I must admit i didn't install it yet in any prod machine, rather than in a > VM for testing that I installed lately. > > I see a lots of bugs/patches/stability issues regarding ZFS, what makes me > think: > 1. is it really ready for production enviroments? I see a lot of bugfixes for my Microsoft Windows servers too; is it ready for production? > 2. Is there anyone that installed it in prod and can give some feedback > about stability, config? We use a 24-bay Supermicro server with FreeBSD 9.0 and eight 300 GB SAS drives as our NAS and ESXi backend. We have had zero problems with it so far, and it has been running for more than a year now. So for us it works very well. The ESXi clients connect to the server through NFS. The Windows clients store their profiles and home dirs on the server with Samba. We have two ESXi hypervisors, each running 5 servers and a couple of workstations. > 3. from all the mails about reccomendations I've seen, is someone in > fbsd-team taking the reccomendations and putting them somewhere in a > one-document that describes all the suggestions rather than mailing lists?
We do not use any tuning besides one loader.conf setting, where we cap the ARC at about 3/4 of the available memory. The box has 16 GB of RAM, so for this machine we use the following: vfs.zfs.arc_max="12G" > > Thanks in advance, You're welcome. From owner-freebsd-fs@FreeBSD.ORG Thu Apr 4 12:46:34 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id ACF9FCB7 for ; Thu, 4 Apr 2013 12:46:34 +0000 (UTC) (envelope-from feld@feld.me) Received: from new1-smtp.messagingengine.com (new1-smtp.messagingengine.com [66.111.4.221]) by mx1.freebsd.org (Postfix) with ESMTP id 8197FE3E for ; Thu, 4 Apr 2013 12:46:33 +0000 (UTC) Received: from compute4.internal (compute4.nyi.mail.srv.osa [10.202.2.44]) by gateway1.nyi.mail.srv.osa (Postfix) with ESMTP id 52977BC2 for ; Thu, 4 Apr 2013 08:46:33 -0400 (EDT) Received: from frontend2.nyi.mail.srv.osa ([10.202.2.161]) by compute4.internal (MEProxy); Thu, 04 Apr 2013 08:46:33 -0400 DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; d=feld.me; h= content-type:to:subject:references:date:mime-version :content-transfer-encoding:from:message-id:in-reply-to; s= mesmtp; bh=RHfv1nBOrnKtwqMmyO9JhzSp1JE=; b=URGGNNm4uutnV3em2uUsX 0lH9XE/PTda4EtS7n7Z3TtEY0XGg6QWFAxx0qwj7ztSl7s5u0F900xdX2VIYrYsn l2ACkepaRABxZkvkTw2G0LmJ77eSCN463V7w3k+7Fd37VoNwKpVSDlh9YnGXpGJt QP500WskbLZ/vF6mhXbkVo= DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; d= messagingengine.com; h=content-type:to:subject:references:date :mime-version:content-transfer-encoding:from:message-id :in-reply-to; s=smtpout; bh=RHfv1nBOrnKtwqMmyO9JhzSp1JE=; b=VW0z /IKoH9c1UmRSvT94bKRUPFukdLF84HJnKQFHt/Ut9unTTHeYlJYW0bs1Z/x1B+b2 QAUCVTc9UVwRUZML9l9aUUe6mcpvjoNfaPBXV9ybCnNoK82jrgyRtrk/PW4UXkUG gc/nr7oYEmU+hEyMmKcVPpaEIv3diiDpw8zuDqo= X-Sasl-enc: mhgPdMUupZB/8y2e/+dL2U7D+p14ezOISO8kPv8fDsJa 1365079593 Received: from tech304.office.supranet.net (unknown [66.170.8.18]) by mail.messagingengine.com (Postfix) with ESMTPA id EF9A62000E4 for ; Thu, 4 Apr 2013 08:46:32 -0400 (EDT) Content-Type: text/plain; charset=us-ascii; format=flowed; delsp=yes To: freebsd-fs@freebsd.org Subject: Re: ZFS in production enviroments References: Date: Thu, 04 Apr 2013 07:46:32 -0500 MIME-Version: 1.0 Content-Transfer-Encoding: 7bit From: "Mark Felder" Message-ID: In-Reply-To: User-Agent: Opera Mail/12.14 (FreeBSD) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 04 Apr 2013 12:46:34 -0000 Our setup: * FreeBSD 9-STABLE (from before 9.0-RELEASE) * HP DL360 servers acting as "head units" * LSI SAS 9201-16e controllers * Intel NICs * DataOn Storage DNS-1630 JBODs with dual controllers (LSI based) * 2TB 7200RPM Hitachi SATA HDs with SAS interposers (LSISS9252) * Intel SSDs for cache/log devices * gmultipath is handling the active/active data paths to the drives. ex: ZFS uses multipath/disk01 in the pool * istgt serving iSCSI to Xen and ESXi from zvols Built these just before the hard drive prices spiked from the floods. I need to jam more RAM in there and it would be nice to be running FreeBSD 10 with some of the newer ZFS code and having access to TRIM. Uptime on these servers is over a year.
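For anyone wondering how the gmultipath piece fits together, a rough sketch. The device names are hypothetical; each label joins the two paths through which one drive is visible, and the pool then references the multipath provider rather than a raw disk:

  gmultipath label -v disk01 /dev/da2 /dev/da50
  gmultipath label -v disk02 /dev/da3 /dev/da51
  zpool create tank mirror multipath/disk01 multipath/disk02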
From owner-freebsd-fs@FreeBSD.ORG Thu Apr 4 13:16:32 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id AF3CD367 for ; Thu, 4 Apr 2013 13:16:32 +0000 (UTC) (envelope-from feld@feld.me) Received: from new1-smtp.messagingengine.com (new1-smtp.messagingengine.com [66.111.4.221]) by mx1.freebsd.org (Postfix) with ESMTP id 819A2FB7 for ; Thu, 4 Apr 2013 13:16:31 +0000 (UTC) Received: from compute4.internal (compute4.nyi.mail.srv.osa [10.202.2.44]) by gateway1.nyi.mail.srv.osa (Postfix) with ESMTP id EF0B2526; Thu, 4 Apr 2013 09:16:30 -0400 (EDT) Received: from frontend2.nyi.mail.srv.osa ([10.202.2.161]) by compute4.internal (MEProxy); Thu, 04 Apr 2013 09:16:30 -0400 DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; d=feld.me; h= content-type:to:cc:subject:references:date:mime-version :content-transfer-encoding:from:message-id:in-reply-to; s= mesmtp; bh=nCG2i0DepmfUzU9mVyNrdrFfCAA=; b=N+rh855xdaY/bZbtJFoBc 2+NAkfVj899+DR7+aj6MeTRQkvC++80gReESyFoaa1KZ1K1NK2jxArgQGi7CTjFy tLO9vzgxpPxCmGl0S/61gPbUw9p4HpjpwvpKttOLv3VPCHLcmwMVk5uWO8ySx8g2 UFDFYObrkCQ/GaIpgCO/ow= DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; d= messagingengine.com; h=content-type:to:cc:subject:references :date:mime-version:content-transfer-encoding:from:message-id :in-reply-to; s=smtpout; bh=nCG2i0DepmfUzU9mVyNrdrFfCAA=; b=uZWR HtnrjeCrngP3HLxw4ZSiYIfLxGdbOcAc+BYZLS98oSmutObz345gPThv13ohBaod lDdlxgffSbAfqSZ9p3upVHU504+iJxOOkKIgWPvZc4Qtl776j3PifbYi+uRIVgvG A+ks546ZyyF565dgiCkfYuq9BYGntzHHfpb0AcA= X-Sasl-enc: Zxw5VN59dHCgFJqzBmSYHD+y2EfOG+KVVVX56uBJxYNY 1365081390 Received: from tech304.office.supranet.net (unknown [66.170.8.18]) by mail.messagingengine.com (Postfix) with ESMTPA id 961F7200048; Thu, 4 Apr 2013 09:16:30 -0400 (EDT) Content-Type: text/plain; charset=us-ascii; format=flowed; delsp=yes To: "dennis berger" Subject: Re: ZFS in production enviroments References: <2E76F6A1-F9F8-453D-8C11-3444BB6BFE19@nipsi.de> Date: Thu, 04 Apr 2013 08:16:29 -0500 MIME-Version: 1.0 Content-Transfer-Encoding: 7bit From: "Mark Felder" Message-ID: In-Reply-To: <2E76F6A1-F9F8-453D-8C11-3444BB6BFE19@nipsi.de> User-Agent: Opera Mail/12.14 (FreeBSD) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 04 Apr 2013 13:16:32 -0000 On Thu, 04 Apr 2013 08:02:04 -0500, dennis berger wrote: > I case of a failure do you switch the head units manually?! This was not designed to be HA storage. If we wanted to do that I'd have figured out how to bring HAST into the mix but HAST+ZFS is very messy. However, if a head unit completely died we'd just connect the JBODs to the other head unit and import the pool (you can daisy-chain them if you're out of ports on the controller). However, we do as a general rule stay under 50% storage so if we really needed to we could move all data to the other server without customers knowing, perform maintenance, and move data back. 
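The manual failover described above presumably reduces to something like this on the surviving head unit; the pool name is hypothetical, and -f is needed because the dead head never exported the pool:

  # after recabling the JBODs to the surviving head unit
  zpool import -f tank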
From owner-freebsd-fs@FreeBSD.ORG Thu Apr 4 13:21:50 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id BE69974C for ; Thu, 4 Apr 2013 13:21:50 +0000 (UTC) (envelope-from db@nipsi.de) Received: from fop.bsdsystems.de (mx.bsdsystems.de [88.198.57.43]) by mx1.freebsd.org (Postfix) with ESMTP id 3A3E667 for ; Thu, 4 Apr 2013 13:21:49 +0000 (UTC) Received: from wuerschtl.net.local (p579A6A12.dip.t-dialin.net [87.154.106.18]) (using TLSv1 with cipher AES128-SHA (128/128 bits)) (No client certificate requested) by fop.bsdsystems.de (Postfix) with ESMTP id 0148B51CA6; Thu, 4 Apr 2013 15:02:04 +0200 (CEST) Subject: Re: ZFS in production enviroments Mime-Version: 1.0 (Apple Message framework v1085) From: dennis berger In-Reply-To: Date: Thu, 4 Apr 2013 15:02:04 +0200 Message-Id: <2E76F6A1-F9F8-453D-8C11-3444BB6BFE19@nipsi.de> References: To: Mark Felder X-Mailer: Apple Mail (2.1085) Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.14 Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 04 Apr 2013 13:21:50 -0000 In case of a failure, do you switch the head units manually?! -dennis On 04.04.2013 at 14:46, Mark Felder wrote: > Our setup: > > * FreeBSD 9-STABLE (from before 9.0-RELEASE) > * HP DL360 servers acting as "head units" > * LSI SAS 9201-16e controllers > * Intel NICs > * DataOn Storage DNS-1630 JBODs with dual controllers (LSI based) > * 2TB 7200RPM Hitachi SATA HDs with SAS interposers (LSISS9252) > * Intel SSDs for cache/log devices > * gmultipath is handling the active/active data paths to the drives. ex: ZFS uses multipath/disk01 in the pool > * istgt serving iSCSI to Xen and ESXi from zvols > > Built these just before the hard drive prices spiked from the floods. I need to jam more RAM in there and it would be nice to be running FreeBSD 10 with some of the newer ZFS code and having access to TRIM. Uptime on these servers is over a year.
> _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Thu Apr 4 13:29:07 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 35E579C9 for ; Thu, 4 Apr 2013 13:29:07 +0000 (UTC) (envelope-from allan@physics.umn.edu) Received: from mail.physics.umn.edu (smtp.spa.umn.edu [128.101.220.4]) by mx1.freebsd.org (Postfix) with ESMTP id D8663104 for ; Thu, 4 Apr 2013 13:29:06 +0000 (UTC) Received: from c-174-53-189-64.hsd1.mn.comcast.net ([174.53.189.64] helo=[192.168.0.136]) by mail.physics.umn.edu with esmtpsa (TLSv1:CAMELLIA256-SHA:256) (Exim 4.77 (FreeBSD)) (envelope-from ) id 1UNkDt-000JCF-Cs; Thu, 04 Apr 2013 08:29:03 -0500 Message-ID: <515D8011.9050806@physics.umn.edu> Date: Thu, 04 Apr 2013 08:28:49 -0500 From: Graham Allan User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:17.0) Gecko/20130307 Thunderbird/17.0.4 MIME-Version: 1.0 To: Sami Halabi , freebsd-fs@freebsd.org References: In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Spam-Checker-Version: SpamAssassin 3.3.2 (2011-06-06) on mrmachenry.spa.umn.edu X-Spam-Level: X-Spam-Status: No, score=-3.0 required=5.0 tests=ALL_TRUSTED,AWL,BAYES_00, TW_ZF autolearn=no version=3.3.2 Subject: Re: ZFS in production enviroments X-SA-Exim-Version: 4.2 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 04 Apr 2013 13:29:07 -0000 Like you, we watched for a long time before jumping in. We ran ZFS for quite a few years doing it the "wrong" way - having a filesystem on a single volume (mapped from our SAN). This was on a fairly low-capacity data-backup server (4TB or so) and although this misses a lot of the features of ZFS, it did give us some basic experience using it as a filesystem. More recently we run a couple of "bulk data" servers for compute clusters, with between 40 and 120 drives each. We used: Dell R710 or R720 as head node, 48-64GB RAM, starting from FreeBSD 9.1-RC1. Multiple LSI SAS 9205-8e HBAs (should probably have looked at -16e) Intel 10GBe ethernet (old-style CX4 adapters, we are cheapskates :-) Supermicro SC847 E16-RJBOD1 45-bay SAS chassis (this is just the single-channel model) WD 3TB Red drives Intel 313 SSD log cache mirror randomly-selected L2ARC SSD (currently some kind of Samsung) Each zfs pool is made of four 10-drive raidz2 vdevs plus associated SSD drives (fits self-contained into one JBOD chassis). This has performed really well even though we have barely done any NFS tuning yet. For the home directories I have been asking advice about on this list, to be built in the next few weeks, we will probably use dual-path chassis and gmultipath, WD RE-series SAS drives, and some variety of mirroring rather than raidz. None of these are meant to be high-availability; we'd just swap connections to a different head unit in case of failure. Graham On 4/4/2013 1:01 AM, Sami Halabi wrote: > Hi, > I've registered the last year to the list in order to get more involved in > ZFS filesystem. > I must admit i didn't install it yet in any prod machine, rather than in a
> > I see a lots of bugs/patches/stability issues regarding ZFS, what makes me > think: > 1. is it really ready for production enviroments? > 2. Is there anyone that installed it in prod and can give some feedback > about stability, config? > 3. from all the mails about reccomendations I've seen, is someone in > fbsd-team taking the reccomendations and putting them somewhere in a > one-document that describes all the suggestions rather than mailing lists? From owner-freebsd-fs@FreeBSD.ORG Thu Apr 4 15:40:02 2013 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 108EB530 for ; Thu, 4 Apr 2013 15:40:02 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) by mx1.freebsd.org (Postfix) with ESMTP id 03698AEF for ; Thu, 4 Apr 2013 15:40:02 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.6/8.14.6) with ESMTP id r34Fe10A057205 for ; Thu, 4 Apr 2013 15:40:01 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.6/8.14.6/Submit) id r34Fe1Ka057203; Thu, 4 Apr 2013 15:40:01 GMT (envelope-from gnats) Date: Thu, 4 Apr 2013 15:40:01 GMT Message-Id: <201304041540.r34Fe1Ka057203@freefall.freebsd.org> To: freebsd-fs@FreeBSD.org Cc: From: Andriy Gapon Subject: Re: kern/177536: zfs livelock (deadlock) with high write-to-disk load X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list Reply-To: Andriy Gapon List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 04 Apr 2013 15:40:02 -0000 The following reply was made to PR kern/177536; it has been noted by GNATS. From: Andriy Gapon To: Martin Birgmeier Cc: bug-followup@FreeBSD.org Subject: Re: kern/177536: zfs livelock (deadlock) with high write-to-disk load Date: Thu, 04 Apr 2013 18:31:39 +0300 on 02/04/2013 21:07 Martin Birgmeier said the following: > Thanks for the pointer - which I knew about, but unfortunately found out > too late that the kernel needs to be compiled with options STACK and/or > options DDB/KDB for this to work. > > Could this information be added to that page? Well, I do not see any reason for people to not include at least STACK in their kernel. It's in GENERIC too. you are the first one to run into this kind of an issue :-) > I already recompiled the kernel, but are ambivalent about whether I > should now hope to see the deadlock again... ;-) > > One more thing: Could you point me to the SVN revision in head and/or > stable/9 which presumably contains the (partial) fix? I believe that with r244626 from 2012-12-23 you should not experience hang on shutdown. 
> On 04/01/13 22:51, Andriy Gapon wrote: >> Two points: >> - the problem may be (partially) fixed in stable/9 >> - https://wiki.freebsd.org/AvgZfsDeadlockDebug >> > -- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Thu Apr 4 16:03:15 2013 Return-Path: Delivered-To: fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 36BC3FCE; Thu, 4 Apr 2013 16:03:15 +0000 (UTC) (envelope-from avg@FreeBSD.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id 4FB2DD42; Thu, 4 Apr 2013 16:03:13 +0000 (UTC) Received: from odyssey.starpoint.kiev.ua (alpha-e.starpoint.kiev.ua [212.40.38.101]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id TAA10603; Thu, 04 Apr 2013 19:03:10 +0300 (EEST) (envelope-from avg@FreeBSD.org) Message-ID: <515DA43D.7070805@FreeBSD.org> Date: Thu, 04 Apr 2013 19:03:09 +0300 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:17.0) Gecko/20130313 Thunderbird/17.0.4 MIME-Version: 1.0 To: Jay Borkenhagen Subject: Re: mounting failed with error 2 References: <20825.59038.104304.161698@oz.mt.att.com> <5159F0C9.9000302@FreeBSD.org> <20825.62329.379765.344231@oz.mt.att.com> In-Reply-To: <20825.62329.379765.344231@oz.mt.att.com> X-Enigmail-Version: 1.5.1 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Cc: fs@FreeBSD.org, Niclas Zeising X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 04 Apr 2013 16:03:15 -0000 on 01/04/2013 23:52 Jay Borkenhagen said the following: > Andriy Gapon writes: > > on 01/04/2013 22:57 Jay Borkenhagen said the following: > > > Hi FS, > > > > > > I am attempting to follow Niclas Zeising's updated directions at > > > https://wiki.freebsd.org/RootOnZFS/GPTZFSBoot/9.0-RELEASE for > > > "Installing FreeBSD 9.0-RELEASE (or later) Root on ZFS using GPT." > > > > It doesn't appear that you've mentioned what version of FreeBSD you are using. > > Hi, > > I realized I had forgotten to write that as soon as I hit 'send'. :) > > I am using a vanilla 9.1-RELEASE memstick to install. My first suggestion would be to try an image with recent stable/9 if you can find or produce it. Failing that, you can set vfs.zfs.debug=1 at loader prompt before booting. That could shed some light on what is going wrong. The most likely possibility is that /boot/zfs/zpool.cache in the boot/root filesystem of the boot/root pool does not have an entry for the root pool. Further, I believe that instructions on the Niclas' page won't result in a bootable pool with 9.0 or 9.1. With stable/9 they should work. The problem is that zpool.cache is not populated at all. You can try to specify cachefile property to zpool create and then copy zpool.cache to boot/zfs/ on the newly created pool. 
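An untested sketch of that suggestion, reusing the pool name from this thread; the device name is a placeholder, and the exact destination path depends on where the boot/root filesystem ends up mounted under the altroot:

  zpool create -o altroot=/mnt -o cachefile=/tmp/zpool.cache zroot /dev/gpt/disk0
  # ... create filesystems and install as per the wiki instructions ...
  cp /tmp/zpool.cache /mnt/boot/zfs/zpool.cache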
-- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Thu Apr 4 16:05:22 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id D9C78226 for ; Thu, 4 Apr 2013 16:05:22 +0000 (UTC) (envelope-from toasty@dragondata.com) Received: from mail-ob0-x234.google.com (mail-ob0-x234.google.com [IPv6:2607:f8b0:4003:c01::234]) by mx1.freebsd.org (Postfix) with ESMTP id A297CD77 for ; Thu, 4 Apr 2013 16:05:22 +0000 (UTC) Received: by mail-ob0-f180.google.com with SMTP id wo10so2753441obc.11 for ; Thu, 04 Apr 2013 09:05:22 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=dragondata.com; s=google; h=x-received:content-type:mime-version:subject:from:in-reply-to:date :cc:content-transfer-encoding:message-id:references:to:x-mailer; bh=TQFP4tDfzlXYOh85U2QWmdG8n/DzbNIzlgZPozIVOlI=; b=LAEKpbS7Dv799DK57ow32lY+Q1xaMvdnF+J0nQzYYm1tuPImUg+zIiHsISMCOoxrYJ c9guFAv6r8xJ/LRM5zoG41vLclHd+FGJ8NQs68Vgyt9BmTOje4Os7IOV48LwYHXSdg2S +Ro9tQAWnliW4tjC5QlQ4z3eSGHAhusoMbrmc= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20120113; h=x-received:content-type:mime-version:subject:from:in-reply-to:date :cc:content-transfer-encoding:message-id:references:to:x-mailer :x-gm-message-state; bh=TQFP4tDfzlXYOh85U2QWmdG8n/DzbNIzlgZPozIVOlI=; b=NMS2tKJbtArUEwRv0rg+CiBHrzn0zNmsbVDQqw+AQvSqWUVQQdd7U80uho4n1qIPgw cjDapFV7yAz2on3kuPQH2ohfQ9diKiPkPD1YnO6DDST1CeHCds0wd1pbdiziCepJkFkm y8xoOhqAwqsOCgyaU94/sCk/PBSfngIt74IIKSfsMyKkBiZ3DZP1UzKnVLbHCXm6AWOV /U5ntXUYfT0ITuA7nfie3DP+5A8Lue6ccvrfMuUNPg9AHrRSvFJKTQHIUC3M8CbuMRf0 T0hrhpiS34vAfEHMNM1OTvgZxGBT5ep4LBQlM/X71n7My7Lv1yDPeB/WoyStwkuNC6iJ ocyw== X-Received: by 10.60.3.200 with SMTP id e8mr4777121oee.94.1365091522044; Thu, 04 Apr 2013 09:05:22 -0700 (PDT) Received: from vpn132.rw1.your.org (vpn132.rw1.your.org. [204.9.51.132]) by mx.google.com with ESMTPS id qk4sm6964383obc.5.2013.04.04.09.05.19 (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Thu, 04 Apr 2013 09:05:20 -0700 (PDT) Content-Type: text/plain; charset=us-ascii Mime-Version: 1.0 (Mac OS X Mail 6.3 \(1503\)) Subject: Re: kern/177536: zfs livelock (deadlock) with high write-to-disk load From: Kevin Day In-Reply-To: <201304041540.r34Fe1Ka057203@freefall.freebsd.org> Date: Thu, 4 Apr 2013 11:05:17 -0500 Content-Transfer-Encoding: quoted-printable Message-Id: References: <201304041540.r34Fe1Ka057203@freefall.freebsd.org> To: Andriy Gapon X-Mailer: Apple Mail (2.1503) X-Gm-Message-State: ALoCoQnO4Lb8JRoiOr7RkQRXqi+PHG844F3MwLzLcLTwVTN6a6EC8/kmbLFa7DO1wpEQo/Bi3APl Cc: freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 04 Apr 2013 16:05:22 -0000 I'm not sure if I'm experiencing the same thing, but I'm chiming-in in = case this helps someone. We have a server that's configured as such: 9.1-RELEASE amd64, dual Opteron CPU, 64GB of memory. 
nVidia nForce MCP55 SATA controller * 1x 240GB SSD used for ZFS l2arc mps LSI 9207-8e (LSISAS2308 chip) * Connected to 4 external enclosures, each with 24 3TB drives for a total of 96 3TB drives, running ZFS in a JBOD configuration twa 3ware 9650SE-12i * Connected 1:1 (no expander) to 12 internal 500GB drives, running UFS for / and a secondary UFS filesystem When there's very heavy write load to the giant ZFS filesystem (>2Gbps of total incoming data being written), eventually I reach some kind of deadlock, where I can't do anything that touches any of the block devices. Processes that attempt to access any filesystem (ZFS or UFS) will get stuck in 'ufs', 'getblk', 'vnread', or 'tx->tx'. A shell is still responsive, and I can run commands as long as they're cached. Trying to run something that wasn't already cached prior to the problem will hang that shell. 'gstat' shows that most (all?) of the disk devices have outstanding requests waiting, but a busy percentage of 0% and no activity happening. This only seems to happen under heavy ZFS writes. Heavy ZFS reads or heavy UFS writes do not trigger this. Slowing down the ZFS writes will prevent the problem from occurring. At first I thought this was a controller hang, but seeing that devices on three different controllers all end up stuck with outstanding requests makes me a bit confused as to how this could even happen. Nothing gets logged to the console when this happens. Things I've tried already: 1) Removed the SSD entirely 2) zfs set sync=disabled fs 3) Letting the system wait (90 minutes) to see if this recovers. 4) Swapped the motherboard/CPUs/memory for an identically configured system 5) Switched from an LSI 9280 (mpt) to an LSI 9207 (mps) 6) Updated firmware on the storage cards, updated the BIOS on the motherboard Fair disclosure, these Opterons do have the TLB bug (AMD errata 298), but the BIOS has a workaround for it which is enabled. We've got dozens of systems identical to this one and aren't experiencing any weird hangs or anything elsewhere, so I'm assuming this is not it. The problem is that this is a production system that doesn't give me a lot of time for troubleshooting before I'm forced to reboot it. I'm going to try to get procstat to stay in the cache so that next time this happens I can try running it.
If there's anything else anyone would like = me to capture when this happens again I'm happy to try.=20 -- Kevin From owner-freebsd-fs@FreeBSD.ORG Thu Apr 4 16:07:05 2013 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 3BACE2BE for ; Thu, 4 Apr 2013 16:07:05 +0000 (UTC) (envelope-from avg@FreeBSD.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id 6F02FD97 for ; Thu, 4 Apr 2013 16:07:04 +0000 (UTC) Received: from odyssey.starpoint.kiev.ua (alpha-e.starpoint.kiev.ua [212.40.38.101]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id TAA10637; Thu, 04 Apr 2013 19:07:02 +0300 (EEST) (envelope-from avg@FreeBSD.org) Message-ID: <515DA525.3020006@FreeBSD.org> Date: Thu, 04 Apr 2013 19:07:01 +0300 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:17.0) Gecko/20130313 Thunderbird/17.0.4 MIME-Version: 1.0 To: Kevin Day Subject: Re: kern/177536: zfs livelock (deadlock) with high write-to-disk load References: <201304041540.r34Fe1Ka057203@freefall.freebsd.org> In-Reply-To: X-Enigmail-Version: 1.5.1 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Cc: freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 04 Apr 2013 16:07:05 -0000 on 04/04/2013 19:05 Kevin Day said the following: [a lot] One link: https://wiki.freebsd.org/AvgZfsDeadlockDebug -- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Thu Apr 4 16:12:53 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 4C9B0491 for ; Thu, 4 Apr 2013 16:12:53 +0000 (UTC) (envelope-from toasty@dragondata.com) Received: from mail-ob0-x234.google.com (mail-ob0-x234.google.com [IPv6:2607:f8b0:4003:c01::234]) by mx1.freebsd.org (Postfix) with ESMTP id 153D1DF8 for ; Thu, 4 Apr 2013 16:12:53 +0000 (UTC) Received: by mail-ob0-f180.google.com with SMTP id wo10so2762683obc.11 for ; Thu, 04 Apr 2013 09:12:52 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=dragondata.com; s=google; h=x-received:content-type:mime-version:subject:from:in-reply-to:date :cc:content-transfer-encoding:message-id:references:to:x-mailer; bh=w8NskagA7ayeqhFwDoWnlxlRk/W5SadekaBvMW0ZZGw=; b=IjakrIiBgTeeU4riG4FrgR5vJrQAO6ebxZvc+4VfVgH81YHIiGxSuuYCdYGi6hC2LN FoAXYfqIU16xxXxtsf+zKTLP32j0Azr21cDax4Vd+gFyis1TZKAavdWCfbW0KvYAqAwX 5/XLD2JNK4mmYd1Zdy+94mzXZ5Evt/OA1bCTc= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20120113; h=x-received:content-type:mime-version:subject:from:in-reply-to:date :cc:content-transfer-encoding:message-id:references:to:x-mailer :x-gm-message-state; bh=w8NskagA7ayeqhFwDoWnlxlRk/W5SadekaBvMW0ZZGw=; b=H5ldnDTYMlF8QzQF8bbyrvTrsEfDbquRC+qC6EAFbhuaH8uyV+tNm+a10lNr/v0SYR dz+vBO7t75jKWneGqrlm5GodfNtDv7a2s1R5geVI0bQr1Abf+hUyGton++JIWnBSdXtq XPK+UbVfsmpO74p3io46Ov9WARn2l5c0lRY5U+PVFE54GZYw8mgBblvdbw6SH4So+VsD xY8MxIv4Pxmye4IDTyhRrK13oor6tz/EqQLixNge1vY04s2P6m/zzd/SMp8Uq3zEOazu 0QuY30nS2dOlclLJGoaR7mOs8W9QU/Qt+DNo+z+cqvOZU3u8NkC4EF8noxBOCRFUvDId 71iw== X-Received: by 10.60.20.225 with SMTP id q1mr4714996oee.31.1365091972568; Thu, 04 Apr 2013 09:12:52 -0700 (PDT) Received: from vpn132.rw1.your.org (vpn132.rw1.your.org. 
[204.9.51.132]) by mx.google.com with ESMTPS id t9sm6998109obk.13.2013.04.04.09.12.50 (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Thu, 04 Apr 2013 09:12:51 -0700 (PDT) Content-Type: text/plain; charset=us-ascii Mime-Version: 1.0 (Mac OS X Mail 6.3 \(1503\)) Subject: Re: kern/177536: zfs livelock (deadlock) with high write-to-disk load From: Kevin Day In-Reply-To: <515DA525.3020006@FreeBSD.org> Date: Thu, 4 Apr 2013 11:12:48 -0500 Content-Transfer-Encoding: quoted-printable Message-Id: References: <201304041540.r34Fe1Ka057203@freefall.freebsd.org> <515DA525.3020006@FreeBSD.org> To: Andriy Gapon X-Mailer: Apple Mail (2.1503) X-Gm-Message-State: ALoCoQmxKKCWBFihvlE/3lG69B6FsYqkOfOsyAVynEDv5Qw08OLyFFRGO7Z8j0pdlajx8zOsOjiK Cc: freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 04 Apr 2013 16:12:53 -0000 On Apr 4, 2013, at 11:07 AM, Andriy Gapon wrote: > on 04/04/2013 19:05 Kevin Day said the following: > [a lot] > > One link: https://wiki.freebsd.org/AvgZfsDeadlockDebug Sorry, should have mentioned I've seen this. I've tried procstat, but I've failed at keeping it cached long enough to use it when this happens. If I try running it, the shell gets stuck in 'ufs'. I'm going to add a cron job to just run procstat periodically, so hopefully I can run it without it needing to touch the disks. I built a more debug-friendly kernel and tried to drop to ddb when this happened, but it didn't provide anything useful. When I hit 'enter' on 'alltrace', it hard-locked without printing anything. I haven't been able to trigger a core dump, and this system has no serial or firewire ports for live debugging.
From owner-freebsd-fs@FreeBSD.ORG Thu Apr 4 16:21:08 2013 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 1BC4776B for ; Thu, 4 Apr 2013 16:21:08 +0000 (UTC) (envelope-from avg@FreeBSD.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id 6603EE7F for ; Thu, 4 Apr 2013 16:21:07 +0000 (UTC) Received: from odyssey.starpoint.kiev.ua (alpha-e.starpoint.kiev.ua [212.40.38.101]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id TAA10872; Thu, 04 Apr 2013 19:21:05 +0300 (EEST) (envelope-from avg@FreeBSD.org) Message-ID: <515DA870.6050006@FreeBSD.org> Date: Thu, 04 Apr 2013 19:21:04 +0300 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:17.0) Gecko/20130313 Thunderbird/17.0.4 MIME-Version: 1.0 To: Kevin Day Subject: Re: kern/177536: zfs livelock (deadlock) with high write-to-disk load References: <201304041540.r34Fe1Ka057203@freefall.freebsd.org> <515DA525.3020006@FreeBSD.org> In-Reply-To: X-Enigmail-Version: 1.5.1 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Cc: freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 04 Apr 2013 16:21:08 -0000 on 04/04/2013 19:12 Kevin Day said the following: > > On Apr 4, 2013, at 11:07 AM, Andriy Gapon wrote: > >> on 04/04/2013 19:05 Kevin Day said the following: [a lot] >> >> One link: https://wiki.freebsd.org/AvgZfsDeadlockDebug > > > Sorry, should have mentioned i've seen this. > > I've tried procstat, but i've failed at keeping it cached long enough to use it > when this happens. If I try running it, the shell gets stuck in 'ufs'. I'm > going to add a cron job to just run procstat periodically so hopefully i can > run it without it needing to touch the disks. Well, getting stuck in ufs points towards the storage subsystem. You can create a memory disk+fs (see mdconfig, mdmfs) and place some tools there. That way you may be able to get more info and also check whether it's VFS or some other common layer that gets stuck, or whether it is indeed the real storage. > I built a more debug friendly kernel and tried to drop to ddb when this > happened, but it didn't provide anything useful. When I hit 'enter' on > 'alltrace' it hard locked without printing anything. > > I haven't been able to trigger a core dump, and this system has no serial or > firewire ports for live debugging.
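Something along these lines, for example; the size and mount point are arbitrary, and note that dynamically linked tools still need their shared libraries, though those are usually cached already:

  mkdir -p /var/tools
  mdmfs -s 64m md /var/tools
  cp /usr/bin/procstat /usr/sbin/gstat /var/tools/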
> > -- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Thu Apr 4 16:44:13 2013 Return-Path: Delivered-To: fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id B31CAC61; Thu, 4 Apr 2013 16:44:13 +0000 (UTC) (envelope-from jayb@braeburn.org) Received: from nbfkord-smmo05.seg.att.com (nbfkord-smmo05.seg.att.com [209.65.160.92]) by mx1.freebsd.org (Postfix) with ESMTP id 4975FFD2; Thu, 4 Apr 2013 16:44:12 +0000 (UTC) Received: from unknown [144.160.20.145] (EHLO mlpd192.enaf.sfdc.sbc.com) by nbfkord-smmo05.seg.att.com(mxl_mta-6.15.0-1) over TLS secured channel with ESMTP id 6ddad515.0.202767.00-392.566794.nbfkord-smmo05.seg.att.com (envelope-from ); Thu, 04 Apr 2013 16:44:13 +0000 (UTC) X-MXL-Hash: 515daddd4b11a3c9-f2ce2627afef554792a7987dd53e0ace2cc5ccd8 Received: from enaf.sfdc.sbc.com (localhost.localdomain [127.0.0.1]) by mlpd192.enaf.sfdc.sbc.com (8.14.5/8.14.5) with ESMTP id r34Gi6Wq007728; Thu, 4 Apr 2013 12:44:06 -0400 Received: from alpi131.aldc.att.com (alpi131.aldc.att.com [130.8.218.69]) by mlpd192.enaf.sfdc.sbc.com (8.14.5/8.14.5) with ESMTP id r34Ghvrv007570 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Thu, 4 Apr 2013 12:44:01 -0400 Received: from alpi153.aldc.att.com (alpi153.aldc.att.com [130.8.42.31]) by alpi131.aldc.att.com (RSA Interceptor); Thu, 4 Apr 2013 17:43:46 +0100 Received: from aldc.att.com (localhost [127.0.0.1]) by alpi153.aldc.att.com (8.14.5/8.14.5) with ESMTP id r34GhkFY029362; Thu, 4 Apr 2013 12:43:46 -0400 Received: from oz.mt.att.com (oz.mt.att.com [135.16.165.23]) by alpi153.aldc.att.com (8.14.5/8.14.5) with ESMTP id r34GhfPT029184; Thu, 4 Apr 2013 12:43:42 -0400 Received: by oz.mt.att.com (Postfix, from userid 1000) id 088FB6807DD; Thu, 4 Apr 2013 12:43:40 -0400 (EDT) X-Mailer: emacs 23.3.1 (via feedmail 8 I); VM 8.2.0b under 23.3.1 (i686-pc-linux-gnu) Message-ID: <20829.44475.910011.453770@oz.mt.att.com> Date: Thu, 4 Apr 2013 12:43:39 -0400 MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit From: Jay Borkenhagen To: Andriy Gapon Subject: Re: mounting failed with error 2 In-Reply-To: <515DA43D.7070805@FreeBSD.org> References: <20825.59038.104304.161698@oz.mt.att.com> <5159F0C9.9000302@FreeBSD.org> <20825.62329.379765.344231@oz.mt.att.com> <515DA43D.7070805@FreeBSD.org> X-GPG-Fingerprint: DDDB 542E D988 94D0 82D3 D198 7DED 6648 2308 D3C0 X-RSA-Inspected: yes X-RSA-Classifications: public Cc: fs@FreeBSD.org, Niclas Zeising X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list Reply-To: Jay Borkenhagen List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 04 Apr 2013 16:44:13 -0000 Hi Andriy, Thanks for your response. Andriy Gapon writes: > > My first suggestion would be to try an image with recent stable/9 > if you can find or produce it. A large part of my interest here is in helping Niclas perfect his GPT/ZFS installation instructions, so I'll not pursue the stable/9 approach at this time. (I have two GPT/ZFS systems running 9.1-RELEASE based on earlier instructions from Niclas, so if I absolutely need another such system now I do have a way to get there.) > Failing that, you can set vfs.zfs.debug=1 at loader prompt before > booting. That could shed some light on what is going wrong. 
The > most likely possibility is that /boot/zfs/zpool.cache in the > boot/root filesystem of the boot/root pool does not have an entry > for the root pool. I just tried 'set vfs.zfs.debug=1' then 'boot' at the loader prompt, and it seems I wound up at the exact same place with no further debug diagnostics. I believe the important part of that error output is this: =============== Trying to mount root from zfs:zroot/ROOT []... Mounting from zfs:zroot/ROOT failed with error 2. Loader variables: vfs.root.mountfrom=zfs:zroot/ROOT =============== If there's something else you'd like me to specify to the boot loader, please let me know and I will give it a try today. > Further, I believe that instructions on the Niclas' page won't > result in a bootable pool with 9.0 or 9.1. With stable/9 they > should work. The problem is that zpool.cache is not populated at > all. You can try to specify cachefile property to zpool create and > then copy zpool.cache to boot/zfs/ on the newly created pool. I would be willing to attempt a re-install using Niclas' instructions plus something to populate the zpool.cache. Can you (or Niclas) suggest what command(s) to add to the process at which stage? Thank you. Jay B. From owner-freebsd-fs@FreeBSD.ORG Thu Apr 4 16:46:44 2013 Return-Path: Delivered-To: fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 2B80BD08; Thu, 4 Apr 2013 16:46:44 +0000 (UTC) (envelope-from avg@FreeBSD.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id 422C7FF0; Thu, 4 Apr 2013 16:46:42 +0000 (UTC) Received: from odyssey.starpoint.kiev.ua (alpha-e.starpoint.kiev.ua [212.40.38.101]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id TAA11146; Thu, 04 Apr 2013 19:46:40 +0300 (EEST) (envelope-from avg@FreeBSD.org) Message-ID: <515DAE70.6090502@FreeBSD.org> Date: Thu, 04 Apr 2013 19:46:40 +0300 From: Andriy Gapon User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:17.0) Gecko/20130313 Thunderbird/17.0.4 MIME-Version: 1.0 To: Jay Borkenhagen Subject: Re: mounting failed with error 2 References: <20825.59038.104304.161698@oz.mt.att.com> <5159F0C9.9000302@FreeBSD.org> <20825.62329.379765.344231@oz.mt.att.com> <515DA43D.7070805@FreeBSD.org> <20829.44475.910011.453770@oz.mt.att.com> In-Reply-To: <20829.44475.910011.453770@oz.mt.att.com> X-Enigmail-Version: 1.5.1 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Cc: fs@FreeBSD.org, Niclas Zeising X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 04 Apr 2013 16:46:44 -0000 on 04/04/2013 19:43 Jay Borkenhagen said the following: > Hi Andriy, > > Thanks for your response. > > Andriy Gapon writes: > > > > My first suggestion would be to try an image with recent stable/9 > > if you can find or produce it. > > A large part of my interest here is in helping Niclas perfect his > GPT/ZFS installation instructions, so I'll not pursue the stable/9 > approach at this time. (I have two GPT/ZFS systems running > 9.1-RELEASE based on earlier instructions from Niclas, so if I > absolutely need another such system now I do have a way to get there.) > > > > Failing that, you can set vfs.zfs.debug=1 at loader prompt before > > booting. That could shed some light on what is going wrong. 
The > > most likely possibility is that /boot/zfs/zpool.cache in the > > boot/root filesystem of the boot/root pool does not have an entry > > for the root pool. > > I just tried 'set vfs.zfs.debug=1' then 'boot' at the loader prompt, > and it seems I wound up at the exact same place with no further debug It's not further, it's before that. > diagnostics. I believe the important part of that error output is > this: > > =============== > Trying to mount root from zfs:zroot/ROOT []... > Mounting from zfs:zroot/ROOT failed with error 2. > > Loader variables: > vfs.root.mountfrom=zfs:zroot/ROOT > =============== > > If there's something else you'd like me to specify to the boot loader, > please let me know and I will give it a try today. > > > > Further, I believe that instructions on the Niclas' page won't > > result in a bootable pool with 9.0 or 9.1. With stable/9 they > > should work. The problem is that zpool.cache is not populated at > > all. You can try to specify cachefile property to zpool create and ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ > > then copy zpool.cache to boot/zfs/ on the newly created pool. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ > > I would be willing to attempt a re-install using Niclas' instructions > plus something to populate the zpool.cache. Can you (or Niclas) > suggest what command(s) to add to the process at which stage? I think I did? -- Andriy Gapon From owner-freebsd-fs@FreeBSD.ORG Thu Apr 4 16:55:16 2013 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id D0D03E5C for ; Thu, 4 Apr 2013 16:55:16 +0000 (UTC) (envelope-from list_freebsd@bluerosetech.com) Received: from yoshi.bluerosetech.com (yoshi.bluerosetech.com [174.136.100.66]) by mx1.freebsd.org (Postfix) with ESMTP id BCFFEDA for ; Thu, 4 Apr 2013 16:55:16 +0000 (UTC) Received: from chombo.houseloki.net (c-76-27-220-79.hsd1.wa.comcast.net [76.27.220.79]) by yoshi.bluerosetech.com (Postfix) with ESMTPSA id A95F9E629E; Thu, 4 Apr 2013 09:55:10 -0700 (PDT) Received: from [IPv6:fc00:970::e812:4ecc:5220:8206] (unknown [IPv6:fc00:970::e812:4ecc:5220:8206]) by chombo.houseloki.net (Postfix) with ESMTPSA id 4B26F7AB; Thu, 4 Apr 2013 09:55:08 -0700 (PDT) Message-ID: <515DB070.1090803@bluerosetech.com> Date: Thu, 04 Apr 2013 09:55:12 -0700 From: Darren Pilgrim User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:17.0) Gecko/20130107 Thunderbird/17.0.2 MIME-Version: 1.0 To: Jay Borkenhagen Subject: Re: mounting failed with error 2 References: <20825.59038.104304.161698@oz.mt.att.com> In-Reply-To: <20825.59038.104304.161698@oz.mt.att.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Cc: fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 04 Apr 2013 16:55:16 -0000 Reboot using the install disk, go to the Live CD. Import the pool using an altroot and a cachefile: zpool import -c /tmp/zpool.cache -o altroot=/mnt zroot Copy /tmp/zpool.cache to /mnt/ROOT/boot/zfs/zpool.cache, then reboot *without* exporting the pool. 
From owner-freebsd-fs@FreeBSD.ORG Thu Apr 4 18:55:21 2013 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 50660AAF; Thu, 4 Apr 2013 18:55:21 +0000 (UTC) (envelope-from jayb@braeburn.org) Received: from nbfkord-smmo05.seg.att.com (nbfkord-smmo05.seg.att.com [209.65.160.92]) by mx1.freebsd.org (Postfix) with ESMTP id EC09B816; Thu, 4 Apr 2013 18:55:20 +0000 (UTC) Received: from unknown [144.160.20.145] (EHLO nbfkord-smmo05.seg.att.com) by nbfkord-smmo05.seg.att.com(mxl_mta-6.15.0-1) with ESMTP id 99ccd515.2aaae8833940.273461.00-503.765225.nbfkord-smmo05.seg.att.com (envelope-from ); Thu, 04 Apr 2013 18:55:21 +0000 (UTC) X-MXL-Hash: 515dcc990299f1f0-dfa443e37e7671bbce752f03c2a32fd29d335913 Received: from unknown [144.160.20.145] (EHLO mlpd192.enaf.sfdc.sbc.com) by nbfkord-smmo05.seg.att.com(mxl_mta-6.15.0-1) over TLS secured channel with ESMTP id 79ccd515.0.273453.00-449.765169.nbfkord-smmo05.seg.att.com (envelope-from ); Thu, 04 Apr 2013 18:55:19 +0000 (UTC) X-MXL-Hash: 515dcc9774ea82ec-efa1d93722769ffbf134169af7f58aa9a00214c2 Received: from enaf.sfdc.sbc.com (localhost.localdomain [127.0.0.1]) by mlpd192.enaf.sfdc.sbc.com (8.14.5/8.14.5) with ESMTP id r34ItI3x007596; Thu, 4 Apr 2013 14:55:19 -0400 Received: from alpi131.aldc.att.com (alpi131.aldc.att.com [130.8.218.69]) by mlpd192.enaf.sfdc.sbc.com (8.14.5/8.14.5) with ESMTP id r34ItCs0007531 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Thu, 4 Apr 2013 14:55:16 -0400 Received: from alpi153.aldc.att.com (alpi153.aldc.att.com [130.8.42.31]) by alpi131.aldc.att.com (RSA Interceptor); Thu, 4 Apr 2013 19:54:59 +0100 Received: from aldc.att.com (localhost [127.0.0.1]) by alpi153.aldc.att.com (8.14.5/8.14.5) with ESMTP id r34IsxUH004268; Thu, 4 Apr 2013 14:54:59 -0400 Received: from oz.mt.att.com (oz.mt.att.com [135.16.165.23]) by alpi153.aldc.att.com (8.14.5/8.14.5) with ESMTP id r34IssQb004015; Thu, 4 Apr 2013 14:54:55 -0400 Received: by oz.mt.att.com (Postfix, from userid 1000) id EAE3468085A; Thu, 4 Apr 2013 14:54:53 -0400 (EDT) X-Mailer: emacs 23.3.1 (via feedmail 8 I); VM 8.2.0b under 23.3.1 (i686-pc-linux-gnu) MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Message-ID: <20829.52349.314652.424391@oz.mt.att.com> Date: Thu, 4 Apr 2013 14:54:53 -0400 From: Jay Borkenhagen To: Darren Pilgrim Subject: Re: mounting failed with error 2 In-Reply-To: <515DB070.1090803@bluerosetech.com> References: <20825.59038.104304.161698@oz.mt.att.com> <515DB070.1090803@bluerosetech.com> X-GPG-Fingerprint: DDDB 542E D988 94D0 82D3 D198 7DED 6648 2308 D3C0 X-RSA-Inspected: yes X-RSA-Classifications: public Cc: fs@freebsd.org, Niclas Zeising X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list Reply-To: Jay Borkenhagen List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 04 Apr 2013 18:55:21 -0000 Darren Pilgrim writes: > Reboot using the install disk, go to the Live CD. Import the pool using > an altroot and a cachefile: > > zpool import -c /tmp/zpool.cache -o altroot=/mnt zroot > > Copy /tmp/zpool.cache to /mnt/ROOT/boot/zfs/zpool.cache, then reboot > *without* exporting the pool. Thanks, Darren. 
However, when I try that I get this: --------------- root@:/root # zpool import -c /tmp/zpool.cache -o altroot=/mnt zroot failed to open cache file: No such file or directory cannot import 'zroot': no such pool available root@:/root # --------------- I used a Live CD boot again and tried this: --------------- root@:/root # zpool status ZFS filesystem version 5 ZFS storage pool version 28 no pools available root@:/root # --------------- Trying this a different way: Do you see something that Niclas's directions are missing? I.e rather than trying to patch a mis-installed system, what should be done differently to build a system that boots correctly the first time? Thanks for your help! Jay B. From owner-freebsd-fs@FreeBSD.ORG Thu Apr 4 18:56:47 2013 Return-Path: Delivered-To: fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 220BFB92; Thu, 4 Apr 2013 18:56:47 +0000 (UTC) (envelope-from jayb@braeburn.org) Received: from nbfkord-smmo06.seg.att.com (nbfkord-smmo06.seg.att.com [209.65.160.94]) by mx1.freebsd.org (Postfix) with ESMTP id AD4E1831; Thu, 4 Apr 2013 18:56:46 +0000 (UTC) Received: from unknown [144.160.20.145] (EHLO mlpd192.enaf.sfdc.sbc.com) by nbfkord-smmo06.seg.att.com(mxl_mta-6.15.0-1) over TLS secured channel with ESMTP id 8eccd515.0.266831.00-221.746035.nbfkord-smmo06.seg.att.com (envelope-from ); Thu, 04 Apr 2013 18:56:46 +0000 (UTC) X-MXL-Hash: 515dccee00208ab1-f8ea9c7ddb82b0249fdc8d94503f300bf0414b21 Received: from enaf.sfdc.sbc.com (localhost.localdomain [127.0.0.1]) by mlpd192.enaf.sfdc.sbc.com (8.14.5/8.14.5) with ESMTP id r34IuekU008851; Thu, 4 Apr 2013 14:56:40 -0400 Received: from alpi132.aldc.att.com (alpi132.aldc.att.com [130.8.217.2]) by mlpd192.enaf.sfdc.sbc.com (8.14.5/8.14.5) with ESMTP id r34IuT7R008736 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Thu, 4 Apr 2013 14:56:32 -0400 Received: from alpi153.aldc.att.com (alpi153.aldc.att.com [130.8.42.31]) by alpi132.aldc.att.com (RSA Interceptor); Thu, 4 Apr 2013 19:56:15 +0100 Received: from aldc.att.com (localhost [127.0.0.1]) by alpi153.aldc.att.com (8.14.5/8.14.5) with ESMTP id r34IuFXs007383; Thu, 4 Apr 2013 14:56:15 -0400 Received: from oz.mt.att.com (oz.mt.att.com [135.16.165.23]) by alpi153.aldc.att.com (8.14.5/8.14.5) with ESMTP id r34IuCJA007282; Thu, 4 Apr 2013 14:56:12 -0400 Received: by oz.mt.att.com (Postfix, from userid 1000) id 9D79268085A; Thu, 4 Apr 2013 14:56:11 -0400 (EDT) X-Mailer: emacs 23.3.1 (via feedmail 8 I); VM 8.2.0b under 23.3.1 (i686-pc-linux-gnu) Message-ID: <20829.52426.421138.520723@oz.mt.att.com> Date: Thu, 4 Apr 2013 14:56:10 -0400 MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit From: Jay Borkenhagen To: Andriy Gapon Subject: Re: mounting failed with error 2 In-Reply-To: <515DAE70.6090502@FreeBSD.org> References: <20825.59038.104304.161698@oz.mt.att.com> <5159F0C9.9000302@FreeBSD.org> <20825.62329.379765.344231@oz.mt.att.com> <515DA43D.7070805@FreeBSD.org> <20829.44475.910011.453770@oz.mt.att.com> <515DAE70.6090502@FreeBSD.org> X-GPG-Fingerprint: DDDB 542E D988 94D0 82D3 D198 7DED 6648 2308 D3C0 X-RSA-Inspected: yes X-RSA-Classifications: public Cc: fs@FreeBSD.org, Niclas Zeising X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list Reply-To: Jay Borkenhagen List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 04 Apr 2013 18:56:47 -0000 
Andriy Gapon writes: > > > Failing that, you can set vfs.zfs.debug=1 at loader prompt before > > > booting. That could shed some light on what is going wrong. The > > > most likely possibility is that /boot/zfs/zpool.cache in the > > > boot/root filesystem of the boot/root pool does not have an entry > > > for the root pool. > > > > I just tried 'set vfs.zfs.debug=1' then 'boot' at the loader prompt, > > and it seems I wound up at the exact same place with no further debug > > It's not further, it's before that. I could not see any difference in the output to the console by having done 'set vfs.zfs.debug=1' -- not before, not after. > > diagnostics. I believe the important part of that error output is > > this: > > > > =============== > > Trying to mount root from zfs:zroot/ROOT []... > > Mounting from zfs:zroot/ROOT failed with error 2. > > > > Loader variables: > > vfs.root.mountfrom=zfs:zroot/ROOT > > =============== > > From owner-freebsd-fs@FreeBSD.ORG Thu Apr 4 19:47:21 2013 Return-Path: Delivered-To: fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 4F040A02 for ; Thu, 4 Apr 2013 19:47:21 +0000 (UTC) (envelope-from ken@kdm.org) Received: from nargothrond.kdm.org (nargothrond.kdm.org [70.56.43.81]) by mx1.freebsd.org (Postfix) with ESMTP id 10F1EA01 for ; Thu, 4 Apr 2013 19:47:19 +0000 (UTC) Received: from nargothrond.kdm.org (localhost [127.0.0.1]) by nargothrond.kdm.org (8.14.2/8.14.2) with ESMTP id r34JlJhO079533 for ; Thu, 4 Apr 2013 13:47:19 -0600 (MDT) (envelope-from ken@nargothrond.kdm.org) Received: (from ken@localhost) by nargothrond.kdm.org (8.14.2/8.14.2/Submit) id r34JlJaO079532 for fs@FreeBSD.org; Thu, 4 Apr 2013 13:47:19 -0600 (MDT) (envelope-from ken) Date: Thu, 4 Apr 2013 13:47:19 -0600 From: "Kenneth D. Merry" To: fs@FreeBSD.org Subject: NFS File Handle Affinity ported to new NFS server Message-ID: <20130404194719.GA79482@nargothrond.kdm.org> Mime-Version: 1.0 Content-Type: multipart/mixed; boundary="LQksG6bCIzRHxTLp" Content-Disposition: inline User-Agent: Mutt/1.4.2i X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 04 Apr 2013 19:47:21 -0000 --LQksG6bCIzRHxTLp Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Hi folks, I have ported the old NFS server's File Handle Affinity (FHA) code so that it works with both the old and new NFS servers. This sped up sequential reads from ZFS very significantly in my test scenarios. e.g. a single stream read off of an 8-drive RAIDZ2 went from about 75MB/sec to over 200MB/sec. And with 7 read streams from 7 Linux clients coming off of a 36-drive RAID-10, I went from about 700-800MB/sec to 1.7GB/sec. (This is over 10Gb ethernet, with 2 aggregated 10Gb ports on the server end.) The reason for the speedup is that Linux is doing a lot of prefetching, and those read requests all wound up going to separate threads in the NFS server. That confused the ZFS read prefetch code, and caused it to throw away a lot of data. The write speed into ZFS with this version of the FHA code is similar before and after. One change I made was to allow multithreading writes, since ZFS can take advantage of that. Rick has already reviewed the patch, but any comments or testing would be welcome. I have attached my internal commit messages and the patches against FreeBSD/head as of March 28th. Thanks! 
Thanks!

Ken
--
Kenneth Merry
ken@FreeBSD.ORG

--LQksG6bCIzRHxTLp
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="nfs_fha_commitmsg.20130329.txt"

Change 662969 by kenm@ken.spectrabsd on 2013/03/28 16:34:44

Bring in change 662512 from //SpectraBSD/stable:

Revamp the old NFS server's File Handle Affinity (FHA) code so that it
will work with either the old or new server.

The FHA code keeps a cache of currently active file handles for NFSv2 and
v3 requests, so that read and write requests for the same file are
directed to the same group of threads (reads) or thread (writes). It does
not currently work for NFSv4 requests. They are more complex, and will
take more work to support.

This improves read-ahead performance, especially with ZFS, if the FHA
tuning parameters are configured appropriately. Without the FHA code,
concurrent reads that are part of a sequential read from a file will be
directed to separate NFS threads. This has the effect of confusing the ZFS
zfetch (prefetch) code and makes sequential reads significantly slower.

This also improves sequential write performance in ZFS, because writes to
a file are single-threaded. Since NFS writes (generally less than 64K) are
smaller than the typical ZFS record size (usually 128K), out-of-order NFS
writes to the same block can trigger a read in ZFS. Sending them down the
same thread increases the odds of their being in order.

I have changed the default tuning parameters to a 22-bit (4MB) window size
(from 256K) and unlimited commands per thread as a result of my
benchmarking with ZFS. We may need to tweak this further with more
testing.

The FHA code has been updated to allow configuring the tuning parameters
from loader tunable variables in addition to sysctl variables. The read
offset window calculation has been slightly modified as well. Instead of
having separate bins, each file handle has a rolling window of bin_shift
size. This minimizes glitches in throughput when shifting from one bin to
another.

sys/conf/files:
    Add nfs_fha_new.c and nfs_fha_old.c. Compile nfs_fha.c when either
    the old or the new NFS server is built.

sys/fs/nfs/nfsport.h,
sys/fs/nfs/nfs_commonport.c:
    Bring in changes from Rick Macklem to newnfs_realign that allow it
    to operate in blocking (M_WAITOK) or non-blocking (M_DONTWAIT) mode.

sys/fs/nfs/nfs_commonsubs.c,
sys/fs/nfs/nfs_var.h:
    Bring in a change from Rick Macklem to allow telling nfsm_dissect()
    whether or not to wait for mallocs.

sys/fs/nfs/nfsm_subs.h:
    Bring in changes from Rick Macklem to create a new
    nfsm_dissect_nonblock() inline function and NFSM_DISSECT_NONBLOCK()
    macro.

sys/fs/nfs/nfs_commonkrpc.c,
sys/fs/nfsclient/nfs_clkrpc.c:
    Add the malloc wait flag to a newnfs_realign() call.

sys/fs/nfsserver/nfs_nfsdkrpc.c:
    Set up the new NFS server's RPC thread pool so that it will call the
    FHA code. Add the malloc flag argument to newnfs_realign().
    Unstaticize newnfs_nfsv3_procid[] so that we can use it in the FHA
    code.

sys/nfsserver/nfs_fha.c:
    Remove all code that is specific to the NFS server implementation.
    Anything that is server-specific is now accessed through a callback
    supplied by that server's FHA shim in the new softc.

    There are now separate sysctls and tunables for the FHA
    implementations for the old and new NFS servers. The new NFS server
    has its tunables under vfs.nfsd.fha; the old NFS server's tunables
    are under vfs.nfsrv.fha as before.

    In fha_extract_info(), use callouts for all server-specific code.
    Getting file handles and offsets is now done in the individual
    server's shim module.

    In fha_hash_entry_choose_thread(), change the way we decide whether
    two reads are in proximity to each other. Previously, the
    calculation was a simple shift operation to see whether the offsets
    were in the same power-of-2 bucket. The issue was that there would
    be a bucket (and therefore thread) transition even if the reads were
    in close proximity. When there is a thread transition, reads wind up
    going somewhat out of order, and ZFS gets confused. The new
    calculation simply checks whether the offsets are within
    1 << bin_shift of each other. If they are, the reads will be sent to
    the same thread.

    The effect of this change is that for sequential reads, if the
    client doesn't exceed the max_reqs_per_nfsd parameter and bin_shift
    is set to a reasonable value (22, or 4MB, works well in my tests),
    the reads in any sequential stream will largely be confined to a
    single thread.

    Change fha_assign() so that it takes a softc argument. It is now
    called from the individual server's shim code, which will pass in
    the softc.

    Change fhe_stats_sysctl() so that it takes a softc parameter. It is
    now called from the individual server's shim code. Add the current
    offset to the list of things printed out about each active thread.

    Add an enable sysctl and tunable that allows the user to disable the
    FHA code (when vfs.XXX.fha.enable = 0). This is useful for
    before/after performance comparisons.

nfs_fha.h:
    Move most structure definitions out of nfs_fha.c and into the header
    file, so that the individual server shims can see them.

    Change the default bin_shift to 22 (4MB) instead of 18 (256K).

    Allow unlimited commands per thread.

sys/nfsserver/nfs_fha_old.c,
sys/nfsserver/nfs_fha_old.h,
sys/fs/nfsserver/nfs_fha_new.c,
sys/fs/nfsserver/nfs_fha_new.h:
    Add shims for the old and new NFS servers to interface with the FHA
    code. The shims contain all of the code and definitions that are
    specific to the NFS servers. They set up the server-specific
    callbacks and set the server name for the sysctl and loader tunable
    variables.

sys/nfsserver/nfs_srvkrpc.c:
    Configure the RPC code to call fhaold_assign() instead of
    fha_assign().

sys/modules/nfsd/Makefile:
    Add nfs_fha.c and nfs_fha_new.c.

sys/modules/nfsserver/Makefile:
    Add nfs_fha_old.c.

Affected files ...

... //depot/users/kenm/FreeBSD-test5/sys/conf/files#2 integrate
... //depot/users/kenm/FreeBSD-test5/sys/fs/nfs/nfs_commonkrpc.c#2 integrate
... //depot/users/kenm/FreeBSD-test5/sys/fs/nfs/nfs_commonport.c#2 integrate
... //depot/users/kenm/FreeBSD-test5/sys/fs/nfs/nfs_commonsubs.c#2 integrate
... //depot/users/kenm/FreeBSD-test5/sys/fs/nfs/nfs_var.h#2 integrate
... //depot/users/kenm/FreeBSD-test5/sys/fs/nfs/nfsm_subs.h#2 integrate
... //depot/users/kenm/FreeBSD-test5/sys/fs/nfs/nfsport.h#2 integrate
... //depot/users/kenm/FreeBSD-test5/sys/fs/nfsclient/nfs_clkrpc.c#2 integrate
... //depot/users/kenm/FreeBSD-test5/sys/fs/nfsserver/nfs_fha_new.c#1 branch
... //depot/users/kenm/FreeBSD-test5/sys/fs/nfsserver/nfs_fha_new.h#1 branch
... //depot/users/kenm/FreeBSD-test5/sys/fs/nfsserver/nfs_nfsdkrpc.c#2 integrate
... //depot/users/kenm/FreeBSD-test5/sys/modules/nfsd/Makefile#2 integrate
... //depot/users/kenm/FreeBSD-test5/sys/modules/nfsserver/Makefile#2 integrate
... //depot/users/kenm/FreeBSD-test5/sys/nfsserver/nfs_fha.c#2 integrate
... //depot/users/kenm/FreeBSD-test5/sys/nfsserver/nfs_fha.h#2 integrate
... //depot/users/kenm/FreeBSD-test5/sys/nfsserver/nfs_fha_old.c#1 branch
... //depot/users/kenm/FreeBSD-test5/sys/nfsserver/nfs_fha_old.h#1 branch
... //depot/users/kenm/FreeBSD-test5/sys/nfsserver/nfs_srvkrpc.c#2 integrate

Change 662971 by kenm@ken.spectrabsd on 2013/03/28 16:36:05

Merge change 662667 from //SpectraBSD/stable:

Allow parallel writes in the FHA code and fix a bug in the way requests
were decoded.

The FHA (File Handle Affinity) code previously would execute only one
write request at a time per file handle, but could execute multiple reads
at one time for a particular file handle. This was done originally because
of the performance characteristics of Isilon's filesystem. ZFS can
efficiently handle parallel read and write requests (within limits), so
that limitation doesn't apply.

With a single thread per file for writes, our write performance for 7
clients writing sequentially with FHA enabled was about half of what it
was with FHA disabled. This change brings the two cases (with and without
FHA) to the same level of performance (approximately 590MB/sec with a
36-drive mirror). There is still more performance investigation and tuning
to be done, since write performance in my test configuration is
significantly lower than read performance (which is in the 1.5-1.9GB/sec
range).

nfs_fha.h:
    Add a new callback, 'is_write()', which the FHA shim layers need to
    use. Change the num_reads and num_writes counters in the
    fha_hash_entry structure to 32-bit values, and rename them num_rw
    and num_exclusive, respectively, to reflect their changed usage.

nfs_fha.c:
    In fha_extract_info(), get the offset for reads as well as writes.
    (We determine a write with the new is_write() callback.) Rename
    num_reads -> num_rw and num_writes -> num_exclusive.

nfs_fha_old.c:
    Add an is_write() routine, and make writes a shared operation, not
    an exclusive operation.

nfs_fha_new.c:
    Add an is_write() routine, and make writes a shared operation, not
    an exclusive operation.

    Fix the way we handle the mbuf pointer and the data cursor in
    fhanew_get_fh() and fhanew_get_offset(). They weren't properly
    handling the case where the mbuf chain gets reallocated. That wasn't
    happening when we tried to decode reads, but the way write requests
    were laid out in mbufs led to the mbuf getting reallocated, and
    exposed the bug. We also weren't properly handling data cursor
    updates. So, in both functions, rely on the dissect routine to
    update the mbuf pointer (nd->nd_md) and data cursor (nd->nd_dpos),
    and update the passed-in md and dpos pointers with the results of
    the dissection.

Affected files ...

... //depot/users/kenm/FreeBSD-test5/sys/fs/nfsserver/nfs_fha_new.c#2 integrate
... //depot/users/kenm/FreeBSD-test5/sys/nfsserver/nfs_fha.c#3 integrate
... //depot/users/kenm/FreeBSD-test5/sys/nfsserver/nfs_fha.h#3 integrate
//depot/users/kenm/FreeBSD-test5/sys/nfsserver/nfs_fha_old.c#2 integrate --LQksG6bCIzRHxTLp Content-Type: text/plain; charset=us-ascii Content-Disposition: attachment; filename="nfs_fha.20130329.2.txt" *** src/sys/conf/files.orig --- src/sys/conf/files *************** *** 2409,2414 **** --- 2409,2415 ---- fs/nfsclient/nfs_clport.c optional nfscl fs/nfsclient/nfs_clbio.c optional nfscl fs/nfsclient/nfs_clnfsiod.c optional nfscl + fs/nfsserver/nfs_fha_new.c optional nfsd inet fs/nfsserver/nfs_nfsdsocket.c optional nfsd inet fs/nfsserver/nfs_nfsdsubs.c optional nfsd inet fs/nfsserver/nfs_nfsdstate.c optional nfsd inet *************** *** 3213,3219 **** nfsclient/nfs_nfsiod.c optional nfsclient nfsclient/nfs_vfsops.c optional nfsclient nfsclient/nfs_vnops.c optional nfsclient ! nfsserver/nfs_fha.c optional nfsserver nfsserver/nfs_serv.c optional nfsserver nfsserver/nfs_srvkrpc.c optional nfsserver nfsserver/nfs_srvsubs.c optional nfsserver --- 3214,3221 ---- nfsclient/nfs_nfsiod.c optional nfsclient nfsclient/nfs_vfsops.c optional nfsclient nfsclient/nfs_vnops.c optional nfsclient ! nfsserver/nfs_fha.c optional nfsserver | nfsd ! nfsserver/nfs_fha_old.c optional nfsserver nfsserver/nfs_serv.c optional nfsserver nfsserver/nfs_srvkrpc.c optional nfsserver nfsserver/nfs_srvsubs.c optional nfsserver *** src/sys/fs/nfs/nfs_commonkrpc.c.orig --- src/sys/fs/nfs/nfs_commonkrpc.c *************** *** 797,803 **** * These could cause pointer alignment problems, so copy them to * well aligned mbufs. */ ! newnfs_realign(&nd->nd_mrep); nd->nd_md = nd->nd_mrep; nd->nd_dpos = NFSMTOD(nd->nd_md, caddr_t); nd->nd_repstat = 0; --- 797,803 ---- * These could cause pointer alignment problems, so copy them to * well aligned mbufs. */ ! newnfs_realign(&nd->nd_mrep, M_WAITOK); nd->nd_md = nd->nd_mrep; nd->nd_dpos = NFSMTOD(nd->nd_md, caddr_t); nd->nd_repstat = 0; *** src/sys/fs/nfs/nfs_commonport.c.orig --- src/sys/fs/nfs/nfs_commonport.c *************** *** 132,142 **** /* * These architectures don't need re-alignment, so just return. */ ! void ! newnfs_realign(struct mbuf **pm) { ! return; } #else /* !__NO_STRICT_ALIGNMENT */ /* --- 132,142 ---- /* * These architectures don't need re-alignment, so just return. */ ! int ! newnfs_realign(struct mbuf **pm, int how) { ! return (0); } #else /* !__NO_STRICT_ALIGNMENT */ /* *************** *** 155,162 **** * with TCP. Use vfs.nfs.realign_count and realign_test to check this. * */ ! void ! newnfs_realign(struct mbuf **pm) { struct mbuf *m, *n; int off, space; --- 155,162 ---- * with TCP. Use vfs.nfs.realign_count and realign_test to check this. * */ ! int ! newnfs_realign(struct mbuf **pm, int how) { struct mbuf *m, *n; int off, space; *************** *** 173,183 **** space = m_length(m, NULL); if (space >= MINCLSIZE) { /* NB: m_copyback handles space > MCLBYTES */ ! n = m_getcl(M_WAITOK, MT_DATA, 0); } else ! n = m_get(M_WAITOK, MT_DATA); if (n == NULL) ! return; /* * Align the remainder of the mbuf chain. */ --- 173,183 ---- space = m_length(m, NULL); if (space >= MINCLSIZE) { /* NB: m_copyback handles space > MCLBYTES */ ! n = m_getcl(how, MT_DATA, 0); } else ! n = m_get(how, MT_DATA); if (n == NULL) ! return (ENOMEM); /* * Align the remainder of the mbuf chain. */ *************** *** 195,200 **** --- 195,202 ---- } pm = &m->m_next; } + + return (0); } #endif /* __NO_STRICT_ALIGNMENT */ *** src/sys/fs/nfs/nfs_commonsubs.c.orig --- src/sys/fs/nfs/nfs_commonsubs.c *************** *** 271,277 **** * cases. */ APPLESTATIC void * ! 
nfsm_dissct(struct nfsrv_descript *nd, int siz) { mbuf_t mp2; int siz2, xfer; --- 271,277 ---- * cases. */ APPLESTATIC void * ! nfsm_dissct(struct nfsrv_descript *nd, int siz, int how) { mbuf_t mp2; int siz2, xfer; *************** *** 296,302 **** } else if (siz > ncl_mbuf_mhlen) { panic("nfs S too big"); } else { ! NFSMGET(mp2); mbuf_setnext(mp2, mbuf_next(nd->nd_md)); mbuf_setnext(nd->nd_md, mp2); mbuf_setlen(nd->nd_md, mbuf_len(nd->nd_md) - left); --- 296,304 ---- } else if (siz > ncl_mbuf_mhlen) { panic("nfs S too big"); } else { ! MGET(mp2, MT_DATA, how); ! if (mp2 == NULL) ! return (NULL); mbuf_setnext(mp2, mbuf_next(nd->nd_md)); mbuf_setnext(nd->nd_md, mp2); mbuf_setlen(nd->nd_md, mbuf_len(nd->nd_md) - left); *** src/sys/fs/nfs/nfs_var.h.orig --- src/sys/fs/nfs/nfs_var.h *************** *** 235,241 **** int nfsm_mbufuio(struct nfsrv_descript *, struct uio *, int); int nfsm_fhtom(struct nfsrv_descript *, u_int8_t *, int, int); int nfsm_advance(struct nfsrv_descript *, int, int); ! void *nfsm_dissct(struct nfsrv_descript *, int); void newnfs_trimleading(struct nfsrv_descript *); void newnfs_trimtrailing(struct nfsrv_descript *, mbuf_t, caddr_t); --- 235,241 ---- int nfsm_mbufuio(struct nfsrv_descript *, struct uio *, int); int nfsm_fhtom(struct nfsrv_descript *, u_int8_t *, int, int); int nfsm_advance(struct nfsrv_descript *, int, int); ! void *nfsm_dissct(struct nfsrv_descript *, int, int); void newnfs_trimleading(struct nfsrv_descript *); void newnfs_trimtrailing(struct nfsrv_descript *, mbuf_t, caddr_t); *** src/sys/fs/nfs/nfsm_subs.h.orig --- src/sys/fs/nfs/nfsm_subs.h *************** *** 100,106 **** retp = (void *)nd->nd_dpos; nd->nd_dpos += siz; } else { ! retp = nfsm_dissct(nd, siz); } return (retp); } --- 100,122 ---- retp = (void *)nd->nd_dpos; nd->nd_dpos += siz; } else { ! retp = nfsm_dissct(nd, siz, M_WAITOK); ! } ! return (retp); ! } ! ! static __inline void * ! nfsm_dissect_nonblock(struct nfsrv_descript *nd, int siz) ! { ! int tt1; ! void *retp; ! ! tt1 = NFSMTOD(nd->nd_md, caddr_t) + nd->nd_md->m_len - nd->nd_dpos; ! if (tt1 >= siz) { ! retp = (void *)nd->nd_dpos; ! nd->nd_dpos += siz; ! } else { ! retp = nfsm_dissct(nd, siz, M_NOWAIT); } return (retp); } *************** *** 113,118 **** --- 129,143 ---- goto nfsmout; \ } \ } while (0) + + #define NFSM_DISSECT_NONBLOCK(a, c, s) \ + do { \ + (a) = (c)nfsm_dissect_nonblock(nd, (s)); \ + if ((a) == NULL) { \ + error = EBADRPC; \ + goto nfsmout; \ + } \ + } while (0) #endif /* !APPLE */ #define NFSM_STRSIZ(s, m) \ *** src/sys/fs/nfs/nfsport.h.orig --- src/sys/fs/nfs/nfsport.h *************** *** 806,812 **** */ int nfscl_loadattrcache(struct vnode **, struct nfsvattr *, void *, void *, int, int); ! void newnfs_realign(struct mbuf **); /* * If the port runs on an SMP box that can enforce Atomic ops with low --- 806,812 ---- */ int nfscl_loadattrcache(struct vnode **, struct nfsvattr *, void *, void *, int, int); ! int newnfs_realign(struct mbuf **, int); /* * If the port runs on an SMP box that can enforce Atomic ops with low *** src/sys/fs/nfsclient/nfs_clkrpc.c.orig --- src/sys/fs/nfsclient/nfs_clkrpc.c *************** *** 83,89 **** */ nd.nd_mrep = rqst->rq_args; rqst->rq_args = NULL; ! newnfs_realign(&nd.nd_mrep); nd.nd_md = nd.nd_mrep; nd.nd_dpos = mtod(nd.nd_md, caddr_t); nd.nd_nam = svc_getrpccaller(rqst); --- 83,89 ---- */ nd.nd_mrep = rqst->rq_args; rqst->rq_args = NULL; ! 
newnfs_realign(&nd.nd_mrep, M_WAITOK); nd.nd_md = nd.nd_mrep; nd.nd_dpos = mtod(nd.nd_md, caddr_t); nd.nd_nam = svc_getrpccaller(rqst); *** /dev/null Fri Mar 29 13:26:00 2013 --- src/sys/fs/nfsserver/nfs_fha_new.c Fri Mar 29 13:26:45 2013 *************** *** 0 **** --- 1,272 ---- + /*- + * Copyright (c) 2008 Isilon Inc http://www.isilon.com/ + * Copyright (c) 2013 Spectra Logic Corporation + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * 1. Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * 2. Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in the + * documentation and/or other materials provided with the distribution. + * + * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND + * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE + * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE + * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF + * SUCH DAMAGE. + */ + + #include + __FBSDID("$FreeBSD$"); + + #include + + #include + #include + #include + #include + #include + #include + #include + + static void fhanew_init(void *foo); + static void fhanew_uninit(void *foo); + rpcproc_t fhanew_get_procnum(rpcproc_t procnum); + int fhanew_realign(struct mbuf **mb, int malloc_flags); + int fhanew_get_fh(fhandle_t *fh, int v3, struct mbuf **md, caddr_t *dpos); + int fhanew_is_read(rpcproc_t procnum); + int fhanew_is_write(rpcproc_t procnum); + int fhanew_get_offset(struct mbuf **md, caddr_t *dpos, int v3, + struct fha_info *info); + int fhanew_no_offset(rpcproc_t procnum); + void fhanew_set_locktype(rpcproc_t procnum, struct fha_info *info); + static int fhenew_stats_sysctl(SYSCTL_HANDLER_ARGS); + + static struct fha_params fhanew_softc; + + SYSCTL_DECL(_vfs_nfsd); + + extern int newnfs_nfsv3_procid[]; + extern SVCPOOL *nfsrvd_pool; + + SYSINIT(nfs_fhanew, SI_SUB_ROOT_CONF, SI_ORDER_ANY, fhanew_init, NULL); + SYSUNINIT(nfs_fhanew, SI_SUB_ROOT_CONF, SI_ORDER_ANY, fhanew_uninit, NULL); + + static void + fhanew_init(void *foo) + { + struct fha_params *softc; + + softc = &fhanew_softc; + + bzero(softc, sizeof(*softc)); + + /* + * Setup the callbacks for this FHA personality. + */ + softc->callbacks.get_procnum = fhanew_get_procnum; + softc->callbacks.realign = fhanew_realign; + softc->callbacks.get_fh = fhanew_get_fh; + softc->callbacks.is_read = fhanew_is_read; + softc->callbacks.is_write = fhanew_is_write; + softc->callbacks.get_offset = fhanew_get_offset; + softc->callbacks.no_offset = fhanew_no_offset; + softc->callbacks.set_locktype = fhanew_set_locktype; + softc->callbacks.fhe_stats_sysctl = fhenew_stats_sysctl; + + snprintf(softc->server_name, sizeof(softc->server_name), + FHANEW_SERVER_NAME); + + softc->pool = &nfsrvd_pool; + + /* + * Initialize the sysctl context list for the fha module. 
+ */ + sysctl_ctx_init(&softc->sysctl_ctx); + softc->sysctl_tree = SYSCTL_ADD_NODE(&softc->sysctl_ctx, + SYSCTL_STATIC_CHILDREN(_vfs_nfsd), OID_AUTO, "fha", CTLFLAG_RD, + 0, "fha node"); + if (softc->sysctl_tree == NULL) { + printf("%s: unable to allocate sysctl tree\n", __func__); + return; + } + + fha_init(softc); + } + + static void + fhanew_uninit(void *foo) + { + struct fha_params *softc; + + softc = &fhanew_softc; + + fha_uninit(softc); + } + + rpcproc_t + fhanew_get_procnum(rpcproc_t procnum) + { + if (procnum > NFSV2PROC_STATFS) + return (-1); + + return (newnfs_nfsv3_procid[procnum]); + } + + int + fhanew_realign(struct mbuf **mb, int malloc_flags) + { + return (newnfs_realign(mb, malloc_flags)); + } + + int + fhanew_get_fh(fhandle_t *fh, int v3, struct mbuf **md, caddr_t *dpos) + { + struct nfsrv_descript lnd, *nd; + uint32_t *tl; + int error, len; + + error = 0; + len = 0; + nd = &lnd; + + nd->nd_md = *md; + nd->nd_dpos = *dpos; + + if (v3) { + NFSM_DISSECT_NONBLOCK(tl, uint32_t *, NFSX_UNSIGNED); + if ((len = fxdr_unsigned(int, *tl)) <= 0 || len > NFSX_FHMAX) { + error = EBADRPC; + goto nfsmout; + } + } else { + len = NFSX_V2FH; + } + + if (len != 0) { + NFSM_DISSECT_NONBLOCK(tl, uint32_t *, len); + bcopy(tl, fh, len); + } else + bzero(fh, sizeof(*fh)); + + nfsmout: + *md = nd->nd_md; + *dpos = nd->nd_dpos; + + return (error); + } + + int + fhanew_is_read(rpcproc_t procnum) + { + if (procnum == NFSPROC_READ) + return (1); + else + return (0); + } + + int + fhanew_is_write(rpcproc_t procnum) + { + if (procnum == NFSPROC_WRITE) + return (1); + else + return (0); + } + + int + fhanew_get_offset(struct mbuf **md, caddr_t *dpos, int v3, + struct fha_info *info) + { + struct nfsrv_descript lnd, *nd; + uint32_t *tl; + int error; + + error = 0; + + nd = &lnd; + nd->nd_md = *md; + nd->nd_dpos = *dpos; + + if (v3) { + NFSM_DISSECT_NONBLOCK(tl, uint32_t *, 2 * NFSX_UNSIGNED); + info->offset = fxdr_hyper(tl); + } else { + NFSM_DISSECT_NONBLOCK(tl, uint32_t *, NFSX_UNSIGNED); + info->offset = fxdr_unsigned(uint32_t, *tl); + } + + nfsmout: + *md = nd->nd_md; + *dpos = nd->nd_dpos; + + return (error); + } + + int + fhanew_no_offset(rpcproc_t procnum) + { + if (procnum == NFSPROC_FSSTAT || + procnum == NFSPROC_FSINFO || + procnum == NFSPROC_PATHCONF || + procnum == NFSPROC_NOOP || + procnum == NFSPROC_NULL) + return (1); + else + return (0); + } + + void + fhanew_set_locktype(rpcproc_t procnum, struct fha_info *info) + { + switch (procnum) { + case NFSPROC_NULL: + case NFSPROC_GETATTR: + case NFSPROC_LOOKUP: + case NFSPROC_ACCESS: + case NFSPROC_READLINK: + case NFSPROC_READ: + case NFSPROC_READDIR: + case NFSPROC_READDIRPLUS: + case NFSPROC_WRITE: + info->locktype = LK_SHARED; + break; + case NFSPROC_SETATTR: + case NFSPROC_CREATE: + case NFSPROC_MKDIR: + case NFSPROC_SYMLINK: + case NFSPROC_MKNOD: + case NFSPROC_REMOVE: + case NFSPROC_RMDIR: + case NFSPROC_RENAME: + case NFSPROC_LINK: + case NFSPROC_FSSTAT: + case NFSPROC_FSINFO: + case NFSPROC_PATHCONF: + case NFSPROC_COMMIT: + case NFSPROC_NOOP: + info->locktype = LK_EXCLUSIVE; + break; + } + } + + static int + fhenew_stats_sysctl(SYSCTL_HANDLER_ARGS) + { + return (fhe_stats_sysctl(oidp, arg1, arg2, req, &fhanew_softc)); + } + + + SVCTHREAD * + fhanew_assign(SVCTHREAD *this_thread, struct svc_req *req) + { + return (fha_assign(this_thread, req, &fhanew_softc)); + } ==== - //depot/users/kenm/FreeBSD-test5/sys/fs/nfsserver/nfs_fha_new.c#2 ==== *** /dev/null Fri Mar 29 13:26:00 2013 --- src/sys/fs/nfsserver/nfs_fha_new.h Fri Mar 29 13:26:45 
2013 *************** *** 0 **** --- 1,39 ---- + /*- + * Copyright (c) 2008 Isilon Inc http://www.isilon.com/ + * Copyright (c) 2013 Spectra Logic Corporation + * + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * 1. Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * 2. Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in the + * documentation and/or other materials provided with the distribution. + * + * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND + * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE + * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE + * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF + * SUCH DAMAGE. + */ + /* $FreeBSD$ */ + + #ifndef _NFS_FHA_NEW_H + #define _NFS_FHA_NEW_H 1 + + #ifdef _KERNEL + + #define FHANEW_SERVER_NAME "nfsd" + + SVCTHREAD *fhanew_assign(SVCTHREAD *this_thread, struct svc_req *req); + #endif /* _KERNEL */ + + #endif /* _NFS_FHA_NEW_H */ ==== - //depot/users/kenm/FreeBSD-test5/sys/fs/nfsserver/nfs_fha_new.h#1 ==== *** src/sys/fs/nfsserver/nfs_nfsdkrpc.c.orig --- src/sys/fs/nfsserver/nfs_nfsdkrpc.c *************** *** 42,47 **** --- 42,50 ---- #include #include + #include + #include + #include NFSDLOCKMUTEX; *************** *** 51,57 **** /* * Mapping of old NFS Version 2 RPC numbers to generic numbers. */ ! static int newnfs_nfsv3_procid[NFS_V3NPROCS] = { NFSPROC_NULL, NFSPROC_GETATTR, NFSPROC_SETATTR, --- 54,60 ---- /* * Mapping of old NFS Version 2 RPC numbers to generic numbers. */ ! int newnfs_nfsv3_procid[NFS_V3NPROCS] = { NFSPROC_NULL, NFSPROC_GETATTR, NFSPROC_SETATTR, *************** *** 147,153 **** */ nd.nd_mrep = rqst->rq_args; rqst->rq_args = NULL; ! newnfs_realign(&nd.nd_mrep); nd.nd_md = nd.nd_mrep; nd.nd_dpos = mtod(nd.nd_md, caddr_t); nd.nd_nam = svc_getrpccaller(rqst); --- 150,156 ---- */ nd.nd_mrep = rqst->rq_args; rqst->rq_args = NULL; ! newnfs_realign(&nd.nd_mrep, M_WAITOK); nd.nd_md = nd.nd_mrep; nd.nd_dpos = mtod(nd.nd_md, caddr_t); nd.nd_nam = svc_getrpccaller(rqst); *************** *** 491,498 **** nfsrvd_pool = svcpool_create("nfsd", SYSCTL_STATIC_CHILDREN(_vfs_nfsd)); nfsrvd_pool->sp_rcache = NULL; ! nfsrvd_pool->sp_assign = NULL; ! nfsrvd_pool->sp_done = NULL; NFSD_LOCK(); } --- 494,501 ---- nfsrvd_pool = svcpool_create("nfsd", SYSCTL_STATIC_CHILDREN(_vfs_nfsd)); nfsrvd_pool->sp_rcache = NULL; ! nfsrvd_pool->sp_assign = fhanew_assign; ! nfsrvd_pool->sp_done = fha_nd_complete; NFSD_LOCK(); } *** src/sys/modules/nfsd/Makefile.orig --- src/sys/modules/nfsd/Makefile *************** *** 1,8 **** # $FreeBSD: head/sys/modules/nfsd/Makefile 192991 2009-05-28 19:45:11Z rmacklem $ ! 
.PATH: ${.CURDIR}/../../fs/nfsserver KMOD= nfsd SRCS= vnode_if.h \ nfs_nfsdserv.c \ nfs_nfsdcache.c \ nfs_nfsdkrpc.c \ --- 1,10 ---- # $FreeBSD: head/sys/modules/nfsd/Makefile 192991 2009-05-28 19:45:11Z rmacklem $ ! .PATH: ${.CURDIR}/../../fs/nfsserver ${.CURDIR}/../../nfsserver KMOD= nfsd SRCS= vnode_if.h \ + nfs_fha.c \ + nfs_fha_new.c \ nfs_nfsdserv.c \ nfs_nfsdcache.c \ nfs_nfsdkrpc.c \ *** src/sys/modules/nfsserver/Makefile.orig --- src/sys/modules/nfsserver/Makefile *************** *** 3,9 **** .PATH: ${.CURDIR}/../../nfsserver KMOD= nfsserver SRCS= vnode_if.h \ ! nfs_fha.c nfs_serv.c nfs_srvkrpc.c nfs_srvsubs.c \ opt_mac.h \ opt_kgssapi.h \ opt_nfs.h --- 3,9 ---- .PATH: ${.CURDIR}/../../nfsserver KMOD= nfsserver SRCS= vnode_if.h \ ! nfs_fha.c nfs_fha_old.c nfs_serv.c nfs_srvkrpc.c nfs_srvsubs.c \ opt_mac.h \ opt_kgssapi.h \ opt_nfs.h *** src/sys/nfsserver/nfs_fha.c.orig --- src/sys/nfsserver/nfs_fha.c *************** *** 38,171 **** #include #include - #include - #include - #include - #include #include static MALLOC_DEFINE(M_NFS_FHA, "NFS FHA", "NFS FHA"); - /* Sysctl defaults. */ - #define DEF_BIN_SHIFT 18 /* 256k */ - #define DEF_MAX_NFSDS_PER_FH 8 - #define DEF_MAX_REQS_PER_NFSD 4 - - struct fha_ctls { - u_int32_t bin_shift; - u_int32_t max_nfsds_per_fh; - u_int32_t max_reqs_per_nfsd; - } fha_ctls; - - struct sysctl_ctx_list fha_clist; - - SYSCTL_DECL(_vfs_nfsrv); - SYSCTL_DECL(_vfs_nfsrv_fha); - - /* Static sysctl node for the fha from the top-level vfs_nfsrv node. */ - SYSCTL_NODE(_vfs_nfsrv, OID_AUTO, fha, CTLFLAG_RD, 0, "fha node"); - - /* This is the global structure that represents the state of the fha system. */ - static struct fha_global { - struct fha_hash_entry_list *hashtable; - u_long hashmask; - } g_fha; - /* ! * These are the entries in the filehandle hash. They talk about a specific ! * file, requests against which are being handled by one or more nfsds. We ! * keep a chain of nfsds against the file. We only have more than one if reads ! * are ongoing, and then only if the reads affect disparate regions of the ! * file. ! * ! * In general, we want to assign a new request to an existing nfsd if it is ! * going to contend with work happening already on that nfsd, or if the ! * operation is a read and the nfsd is already handling a proximate read. We ! * do this to avoid jumping around in the read stream unnecessarily, and to ! * avoid contention between threads over single files. */ ! struct fha_hash_entry { ! LIST_ENTRY(fha_hash_entry) link; ! u_int64_t fh; ! u_int16_t num_reads; ! u_int16_t num_writes; ! u_int8_t num_threads; ! struct svcthread_list threads; ! }; ! LIST_HEAD(fha_hash_entry_list, fha_hash_entry); ! /* A structure used for passing around data internally. */ ! struct fha_info { ! u_int64_t fh; ! off_t offset; ! int locktype; ! }; ! ! static int fhe_stats_sysctl(SYSCTL_HANDLER_ARGS); ! ! static void ! nfs_fha_init(void *foo) { /* * A small hash table to map filehandles to fha_hash_entry * structures. */ ! g_fha.hashtable = hashinit(256, M_NFS_FHA, &g_fha.hashmask); /* ! * Initialize the sysctl context list for the fha module. */ ! sysctl_ctx_init(&fha_clist); ! fha_ctls.bin_shift = DEF_BIN_SHIFT; ! fha_ctls.max_nfsds_per_fh = DEF_MAX_NFSDS_PER_FH; ! fha_ctls.max_reqs_per_nfsd = DEF_MAX_REQS_PER_NFSD; ! SYSCTL_ADD_UINT(&fha_clist, SYSCTL_STATIC_CHILDREN(_vfs_nfsrv_fha), OID_AUTO, "bin_shift", CTLFLAG_RW, ! &fha_ctls.bin_shift, 0, "For FHA reads, no two requests will " "contend if they're 2^(bin_shift) bytes apart"); ! 
SYSCTL_ADD_UINT(&fha_clist, SYSCTL_STATIC_CHILDREN(_vfs_nfsrv_fha), OID_AUTO, "max_nfsds_per_fh", CTLFLAG_RW, ! &fha_ctls.max_nfsds_per_fh, 0, "Maximum nfsd threads that " "should be working on requests for the same file handle"); ! SYSCTL_ADD_UINT(&fha_clist, SYSCTL_STATIC_CHILDREN(_vfs_nfsrv_fha), OID_AUTO, "max_reqs_per_nfsd", CTLFLAG_RW, ! &fha_ctls.max_reqs_per_nfsd, 0, "Maximum requests that " "single nfsd thread should be working on at any time"); ! SYSCTL_ADD_OID(&fha_clist, SYSCTL_STATIC_CHILDREN(_vfs_nfsrv_fha), OID_AUTO, "fhe_stats", CTLTYPE_STRING | CTLFLAG_RD, 0, 0, ! fhe_stats_sysctl, "A", ""); } ! static void ! nfs_fha_uninit(void *foo) { ! ! hashdestroy(g_fha.hashtable, M_NFS_FHA, g_fha.hashmask); } - SYSINIT(nfs_fha, SI_SUB_ROOT_CONF, SI_ORDER_ANY, nfs_fha_init, NULL); - SYSUNINIT(nfs_fha, SI_SUB_ROOT_CONF, SI_ORDER_ANY, nfs_fha_uninit, NULL); - /* * This just specifies that offsets should obey affinity when within * the same 1Mbyte (1<<20) chunk for the file (reads only for now). */ static void ! fha_extract_info(struct svc_req *req, struct fha_info *i) { struct mbuf *md; ! nfsfh_t fh; caddr_t dpos; static u_int64_t random_fh = 0; int error; int v3 = (req->rq_vers == 3); - u_int32_t *tl; rpcproc_t procnum; /* --- 38,140 ---- #include #include #include static MALLOC_DEFINE(M_NFS_FHA, "NFS FHA", "NFS FHA"); /* ! * XXX need to commonize definitions between old and new NFS code. Define ! * this here so we don't include one nfsproto.h over the other. */ ! #define NFS_PROG 100003 ! void ! fha_init(struct fha_params *softc) { + char tmpstr[128]; /* * A small hash table to map filehandles to fha_hash_entry * structures. */ ! softc->g_fha.hashtable = hashinit(256, M_NFS_FHA, ! &softc->g_fha.hashmask); ! ! /* ! * Set the default tuning parameters. ! */ ! softc->ctls.enable = FHA_DEF_ENABLE; ! softc->ctls.bin_shift = FHA_DEF_BIN_SHIFT; ! softc->ctls.max_nfsds_per_fh = FHA_DEF_MAX_NFSDS_PER_FH; ! softc->ctls.max_reqs_per_nfsd = FHA_DEF_MAX_REQS_PER_NFSD; /* ! * Allow the user to override the defaults at boot time with ! * tunables. */ ! snprintf(tmpstr, sizeof(tmpstr), "vfs.%s.fha.enable", ! softc->server_name); ! TUNABLE_INT_FETCH(tmpstr, &softc->ctls.enable); ! snprintf(tmpstr, sizeof(tmpstr), "vfs.%s.fha.bin_shift", ! softc->server_name); ! TUNABLE_INT_FETCH(tmpstr, &softc->ctls.bin_shift); ! snprintf(tmpstr, sizeof(tmpstr), "vfs.%s.fha.max_nfsds_per_fh", ! softc->server_name); ! TUNABLE_INT_FETCH(tmpstr, &softc->ctls.max_nfsds_per_fh); ! snprintf(tmpstr, sizeof(tmpstr), "vfs.%s.fha.max_reqs_per_nfsd", ! softc->server_name); ! TUNABLE_INT_FETCH(tmpstr, &softc->ctls.max_reqs_per_nfsd); ! /* ! * Add sysctls so the user can change the tuning parameters at ! * runtime. ! */ ! SYSCTL_ADD_UINT(&softc->sysctl_ctx, SYSCTL_CHILDREN(softc->sysctl_tree), ! OID_AUTO, "enable", CTLFLAG_RW, ! &softc->ctls.enable, 0, "Enable NFS File Handle Affinity (FHA)"); ! SYSCTL_ADD_UINT(&softc->sysctl_ctx, SYSCTL_CHILDREN(softc->sysctl_tree), OID_AUTO, "bin_shift", CTLFLAG_RW, ! &softc->ctls.bin_shift, 0, "For FHA reads, no two requests will " "contend if they're 2^(bin_shift) bytes apart"); ! SYSCTL_ADD_UINT(&softc->sysctl_ctx, SYSCTL_CHILDREN(softc->sysctl_tree), OID_AUTO, "max_nfsds_per_fh", CTLFLAG_RW, ! &softc->ctls.max_nfsds_per_fh, 0, "Maximum nfsd threads that " "should be working on requests for the same file handle"); ! SYSCTL_ADD_UINT(&softc->sysctl_ctx, SYSCTL_CHILDREN(softc->sysctl_tree), OID_AUTO, "max_reqs_per_nfsd", CTLFLAG_RW, ! 
&softc->ctls.max_reqs_per_nfsd, 0, "Maximum requests that " "single nfsd thread should be working on at any time"); ! SYSCTL_ADD_OID(&softc->sysctl_ctx, SYSCTL_CHILDREN(softc->sysctl_tree), OID_AUTO, "fhe_stats", CTLTYPE_STRING | CTLFLAG_RD, 0, 0, ! softc->callbacks.fhe_stats_sysctl, "A", ""); ! } ! void ! fha_uninit(struct fha_params *softc) { ! sysctl_ctx_free(&softc->sysctl_ctx); ! hashdestroy(softc->g_fha.hashtable, M_NFS_FHA, softc->g_fha.hashmask); } /* * This just specifies that offsets should obey affinity when within * the same 1Mbyte (1<<20) chunk for the file (reads only for now). */ static void ! fha_extract_info(struct svc_req *req, struct fha_info *i, ! struct fha_callbacks *cb) { struct mbuf *md; ! fhandle_t fh; caddr_t dpos; static u_int64_t random_fh = 0; int error; int v3 = (req->rq_vers == 3); rpcproc_t procnum; /* *************** *** 184,192 **** */ procnum = req->rq_proc; if (!v3) { ! if (procnum > NFSV2PROC_STATFS) goto out; ! procnum = nfsrv_nfsv3_procid[procnum]; } /* --- 153,164 ---- */ procnum = req->rq_proc; if (!v3) { ! rpcproc_t tmp_procnum; ! ! tmp_procnum = cb->get_procnum(procnum); ! if (tmp_procnum == -1) goto out; ! procnum = tmp_procnum; } /* *************** *** 195,265 **** * only do this for reads today, but this may change when IFS supports * efficient concurrent writes. */ ! if (procnum == NFSPROC_FSSTAT || ! procnum == NFSPROC_FSINFO || ! procnum == NFSPROC_PATHCONF || ! procnum == NFSPROC_NOOP || ! procnum == NFSPROC_NULL) goto out; ! error = nfs_realign(&req->rq_args, M_NOWAIT); if (error) goto out; md = req->rq_args; dpos = mtod(md, caddr_t); /* Grab the filehandle. */ ! error = nfsm_srvmtofh_xx(&fh.fh_generic, v3, &md, &dpos); if (error) goto out; ! bcopy(fh.fh_generic.fh_fid.fid_data, &i->fh, sizeof(i->fh)); /* Content ourselves with zero offset for all but reads. */ ! if (procnum != NFSPROC_READ) ! goto out; ! if (v3) { ! tl = nfsm_dissect_xx_nonblock(2 * NFSX_UNSIGNED, &md, &dpos); ! if (tl == NULL) ! goto out; ! i->offset = fxdr_hyper(tl); ! } else { ! tl = nfsm_dissect_xx_nonblock(NFSX_UNSIGNED, &md, &dpos); ! if (tl == NULL) ! goto out; ! i->offset = fxdr_unsigned(u_int32_t, *tl); ! } ! out: ! switch (procnum) { ! case NFSPROC_NULL: ! case NFSPROC_GETATTR: ! case NFSPROC_LOOKUP: ! case NFSPROC_ACCESS: ! case NFSPROC_READLINK: ! case NFSPROC_READ: ! case NFSPROC_READDIR: ! case NFSPROC_READDIRPLUS: ! i->locktype = LK_SHARED; ! break; ! case NFSPROC_SETATTR: ! case NFSPROC_WRITE: ! case NFSPROC_CREATE: ! case NFSPROC_MKDIR: ! case NFSPROC_SYMLINK: ! case NFSPROC_MKNOD: ! case NFSPROC_REMOVE: ! case NFSPROC_RMDIR: ! case NFSPROC_RENAME: ! case NFSPROC_LINK: ! case NFSPROC_FSSTAT: ! case NFSPROC_FSINFO: ! case NFSPROC_PATHCONF: ! case NFSPROC_COMMIT: ! case NFSPROC_NOOP: ! i->locktype = LK_EXCLUSIVE; ! break; ! } } static struct fha_hash_entry * --- 167,194 ---- * only do this for reads today, but this may change when IFS supports * efficient concurrent writes. */ ! if (cb->no_offset(procnum)) goto out; ! error = cb->realign(&req->rq_args, M_NOWAIT); if (error) goto out; md = req->rq_args; dpos = mtod(md, caddr_t); /* Grab the filehandle. */ ! error = cb->get_fh(&fh, v3, &md, &dpos); if (error) goto out; ! bcopy(fh.fh_fid.fid_data, &i->fh, sizeof(i->fh)); /* Content ourselves with zero offset for all but reads. */ ! if (cb->is_read(procnum) || cb->is_write(procnum)) ! cb->get_offset(&md, &dpos, v3, i); ! out: ! 
cb->set_locktype(procnum, i); } static struct fha_hash_entry * *************** *** 269,276 **** e = malloc(sizeof(*e), M_NFS_FHA, M_WAITOK); e->fh = fh; ! e->num_reads = 0; ! e->num_writes = 0; e->num_threads = 0; LIST_INIT(&e->threads); --- 198,205 ---- e = malloc(sizeof(*e), M_NFS_FHA, M_WAITOK); e->fh = fh; ! e->num_rw = 0; ! e->num_exclusive = 0; e->num_threads = 0; LIST_INIT(&e->threads); *************** *** 281,287 **** fha_hash_entry_destroy(struct fha_hash_entry *e) { ! if (e->num_reads + e->num_writes) panic("nonempty fhe"); free(e, M_NFS_FHA); } --- 210,216 ---- fha_hash_entry_destroy(struct fha_hash_entry *e) { ! if (e->num_rw + e->num_exclusive) panic("nonempty fhe"); free(e, M_NFS_FHA); } *************** *** 295,305 **** } static struct fha_hash_entry * ! fha_hash_entry_lookup(SVCPOOL *pool, u_int64_t fh) { struct fha_hash_entry *fhe, *new_fhe; ! LIST_FOREACH(fhe, &g_fha.hashtable[fh % g_fha.hashmask], link) if (fhe->fh == fh) break; --- 224,239 ---- } static struct fha_hash_entry * ! fha_hash_entry_lookup(struct fha_params *softc, u_int64_t fh) { + SVCPOOL *pool; + + pool = *softc->pool; + struct fha_hash_entry *fhe, *new_fhe; ! LIST_FOREACH(fhe, &softc->g_fha.hashtable[fh % softc->g_fha.hashmask], ! link) if (fhe->fh == fh) break; *************** *** 310,321 **** mtx_lock(&pool->sp_lock); /* Double-check to make sure we still need the new entry. */ ! LIST_FOREACH(fhe, &g_fha.hashtable[fh % g_fha.hashmask], link) if (fhe->fh == fh) break; if (!fhe) { fhe = new_fhe; ! LIST_INSERT_HEAD(&g_fha.hashtable[fh % g_fha.hashmask], fhe, link); } else fha_hash_entry_destroy(new_fhe); --- 244,257 ---- mtx_lock(&pool->sp_lock); /* Double-check to make sure we still need the new entry. */ ! LIST_FOREACH(fhe, ! &softc->g_fha.hashtable[fh % softc->g_fha.hashmask], link) if (fhe->fh == fh) break; if (!fhe) { fhe = new_fhe; ! LIST_INSERT_HEAD( ! &softc->g_fha.hashtable[fh % softc->g_fha.hashmask], fhe, link); } else fha_hash_entry_destroy(new_fhe); *************** *** 348,356 **** { if (LK_EXCLUSIVE == locktype) ! fhe->num_writes += count; else ! fhe->num_reads += count; } static SVCTHREAD * --- 284,292 ---- { if (LK_EXCLUSIVE == locktype) ! fhe->num_exclusive += count; else ! fhe->num_rw += count; } static SVCTHREAD * *************** *** 371,392 **** * appropriate to handle this operation. */ SVCTHREAD * ! fha_hash_entry_choose_thread(SVCPOOL *pool, struct fha_hash_entry *fhe, ! struct fha_info *i, SVCTHREAD *this_thread); SVCTHREAD * ! fha_hash_entry_choose_thread(SVCPOOL *pool, struct fha_hash_entry *fhe, ! struct fha_info *i, SVCTHREAD *this_thread) { SVCTHREAD *thread, *min_thread = NULL; int req_count, min_count = 0; off_t offset1, offset2; LIST_FOREACH(thread, &fhe->threads, st_alink) { req_count = thread->st_reqcount; /* If there are any writes in progress, use the first thread. */ ! if (fhe->num_writes) { #if 0 ITRACE_CURPROC(ITRACE_NFS, ITRACE_INFO, "fha: %p(%d)w", thread, req_count); --- 307,331 ---- * appropriate to handle this operation. */ SVCTHREAD * ! fha_hash_entry_choose_thread(struct fha_params *softc, ! struct fha_hash_entry *fhe, struct fha_info *i, SVCTHREAD *this_thread); SVCTHREAD * ! fha_hash_entry_choose_thread(struct fha_params *softc, ! 
struct fha_hash_entry *fhe, struct fha_info *i, SVCTHREAD *this_thread) { SVCTHREAD *thread, *min_thread = NULL; + SVCPOOL *pool; int req_count, min_count = 0; off_t offset1, offset2; + pool = *softc->pool; + LIST_FOREACH(thread, &fhe->threads, st_alink) { req_count = thread->st_reqcount; /* If there are any writes in progress, use the first thread. */ ! if (fhe->num_exclusive) { #if 0 ITRACE_CURPROC(ITRACE_NFS, ITRACE_INFO, "fha: %p(%d)w", thread, req_count); *************** *** 398,409 **** * Check for read locality, making sure that we won't * exceed our per-thread load limit in the process. */ ! offset1 = i->offset >> fha_ctls.bin_shift; ! offset2 = STAILQ_FIRST(&thread->st_reqs)->rq_p3 ! >> fha_ctls.bin_shift; ! if (offset1 == offset2) { ! if ((fha_ctls.max_reqs_per_nfsd == 0) || ! (req_count < fha_ctls.max_reqs_per_nfsd)) { #if 0 ITRACE_CURPROC(ITRACE_NFS, ITRACE_INFO, "fha: %p(%d)r", thread, req_count); --- 337,351 ---- * Check for read locality, making sure that we won't * exceed our per-thread load limit in the process. */ ! offset1 = i->offset; ! offset2 = STAILQ_FIRST(&thread->st_reqs)->rq_p3; ! ! if (((offset1 >= offset2) ! && ((offset1 - offset2) < (1 << softc->ctls.bin_shift))) ! || ((offset2 > offset1) ! && ((offset2 - offset1) < (1 << softc->ctls.bin_shift)))) { ! if ((softc->ctls.max_reqs_per_nfsd == 0) || ! (req_count < softc->ctls.max_reqs_per_nfsd)) { #if 0 ITRACE_CURPROC(ITRACE_NFS, ITRACE_INFO, "fha: %p(%d)r", thread, req_count); *************** *** 432,439 **** * We didn't find a good match yet. See if we can add * a new thread to this file handle entry's thread list. */ ! if ((fha_ctls.max_nfsds_per_fh == 0) || ! (fhe->num_threads < fha_ctls.max_nfsds_per_fh)) { /* * We can add a new thread, so try for an idle thread * first, and fall back to this_thread if none are idle. --- 374,381 ---- * We didn't find a good match yet. See if we can add * a new thread to this file handle entry's thread list. */ ! if ((softc->ctls.max_nfsds_per_fh == 0) || ! (fhe->num_threads < softc->ctls.max_nfsds_per_fh)) { /* * We can add a new thread, so try for an idle thread * first, and fall back to this_thread if none are idle. *************** *** 473,485 **** * handle it ourselves. */ SVCTHREAD * ! fha_assign(SVCTHREAD *this_thread, struct svc_req *req) { SVCPOOL *pool; SVCTHREAD *thread; struct fha_info i; struct fha_hash_entry *fhe; /* * Only do placement if this is an NFS request. */ --- 415,435 ---- * handle it ourselves. */ SVCTHREAD * ! fha_assign(SVCTHREAD *this_thread, struct svc_req *req, ! struct fha_params *softc) { SVCPOOL *pool; SVCTHREAD *thread; struct fha_info i; struct fha_hash_entry *fhe; + struct fha_callbacks *cb; + + cb = &softc->callbacks; + /* Check to see whether we're enabled. */ + if (softc->ctls.enable == 0) + return (this_thread); + /* * Only do placement if this is an NFS request. */ *************** *** 490,502 **** return (this_thread); pool = req->rq_xprt->xp_pool; ! fha_extract_info(req, &i); /* * We save the offset associated with this request for later * nfsd matching. */ ! fhe = fha_hash_entry_lookup(pool, i.fh); req->rq_p1 = fhe; req->rq_p2 = i.locktype; req->rq_p3 = i.offset; --- 440,452 ---- return (this_thread); pool = req->rq_xprt->xp_pool; ! fha_extract_info(req, &i, cb); /* * We save the offset associated with this request for later * nfsd matching. */ ! 
fhe = fha_hash_entry_lookup(softc, i.fh); req->rq_p1 = fhe; req->rq_p2 = i.locktype; req->rq_p3 = i.offset; *************** *** 505,511 **** * Choose a thread, taking into consideration locality, thread load, * and the number of threads already working on this file. */ ! thread = fha_hash_entry_choose_thread(pool, fhe, &i, this_thread); KASSERT(thread, ("fha_assign: NULL thread!")); fha_hash_entry_add_op(fhe, i.locktype, 1); --- 455,461 ---- * Choose a thread, taking into consideration locality, thread load, * and the number of threads already working on this file. */ ! thread = fha_hash_entry_choose_thread(softc, fhe, &i, this_thread); KASSERT(thread, ("fha_assign: NULL thread!")); fha_hash_entry_add_op(fhe, i.locktype, 1); *************** *** 532,564 **** if (thread->st_reqcount == 0) { fha_hash_entry_remove_thread(fhe, thread); ! if (0 == fhe->num_reads + fhe->num_writes) fha_hash_entry_remove(fhe); } } ! extern SVCPOOL *nfsrv_pool; ! ! static int ! fhe_stats_sysctl(SYSCTL_HANDLER_ARGS) { int error, count, i; struct sbuf sb; struct fha_hash_entry *fhe; bool_t first = TRUE; SVCTHREAD *thread; sbuf_new(&sb, NULL, 4096, SBUF_FIXEDLEN); ! if (!nfsrv_pool) { sbuf_printf(&sb, "NFSD not running\n"); goto out; } ! mtx_lock(&nfsrv_pool->sp_lock); count = 0; ! for (i = 0; i <= g_fha.hashmask; i++) ! if (!LIST_EMPTY(&g_fha.hashtable[i])) count++; if (count == 0) { --- 482,516 ---- if (thread->st_reqcount == 0) { fha_hash_entry_remove_thread(fhe, thread); ! if (0 == fhe->num_rw + fhe->num_exclusive) fha_hash_entry_remove(fhe); } } ! int ! fhe_stats_sysctl(SYSCTL_HANDLER_ARGS, struct fha_params *softc) { int error, count, i; struct sbuf sb; struct fha_hash_entry *fhe; bool_t first = TRUE; SVCTHREAD *thread; + SVCPOOL *pool; sbuf_new(&sb, NULL, 4096, SBUF_FIXEDLEN); ! pool = NULL; ! ! if (!*softc->pool) { sbuf_printf(&sb, "NFSD not running\n"); goto out; } + pool = *softc->pool; ! mtx_lock(&pool->sp_lock); count = 0; ! for (i = 0; i <= softc->g_fha.hashmask; i++) ! if (!LIST_EMPTY(&softc->g_fha.hashtable[i])) count++; if (count == 0) { *************** *** 566,583 **** goto out; } ! for (i = 0; i <= g_fha.hashmask; i++) { ! LIST_FOREACH(fhe, &g_fha.hashtable[i], link) { sbuf_printf(&sb, "%sfhe %p: {\n", first ? "" : ", ", fhe); sbuf_printf(&sb, " fh: %ju\n", (uintmax_t) fhe->fh); ! sbuf_printf(&sb, " num_reads: %d\n", fhe->num_reads); ! sbuf_printf(&sb, " num_writes: %d\n", fhe->num_writes); sbuf_printf(&sb, " num_threads: %d\n", fhe->num_threads); LIST_FOREACH(thread, &fhe->threads, st_alink) { ! sbuf_printf(&sb, " thread %p (count %d)\n", ! thread, thread->st_reqcount); } sbuf_printf(&sb, "}"); --- 518,537 ---- goto out; } ! for (i = 0; i <= softc->g_fha.hashmask; i++) { ! LIST_FOREACH(fhe, &softc->g_fha.hashtable[i], link) { sbuf_printf(&sb, "%sfhe %p: {\n", first ? "" : ", ", fhe); sbuf_printf(&sb, " fh: %ju\n", (uintmax_t) fhe->fh); ! sbuf_printf(&sb, " num_rw: %d\n", fhe->num_rw); ! sbuf_printf(&sb, " num_exclusive: %d\n", fhe->num_exclusive); sbuf_printf(&sb, " num_threads: %d\n", fhe->num_threads); LIST_FOREACH(thread, &fhe->threads, st_alink) { ! sbuf_printf(&sb, " thread %p offset %ju " ! "(count %d)\n", thread, ! STAILQ_FIRST(&thread->st_reqs)->rq_p3, ! thread->st_reqcount); } sbuf_printf(&sb, "}"); *************** *** 592,599 **** } out: ! if (nfsrv_pool) ! mtx_unlock(&nfsrv_pool->sp_lock); sbuf_trim(&sb); sbuf_finish(&sb); error = sysctl_handle_string(oidp, sbuf_data(&sb), sbuf_len(&sb), req); --- 546,553 ---- } out: ! if (pool) ! 
mtx_unlock(&pool->sp_lock); sbuf_trim(&sb); sbuf_finish(&sb); error = sysctl_handle_string(oidp, sbuf_data(&sb), sbuf_len(&sb), req); *** src/sys/nfsserver/nfs_fha.h.orig --- src/sys/nfsserver/nfs_fha.h *************** *** 24,28 **** */ /* $FreeBSD: head/sys/nfsserver/nfs_fha.h 184588 2008-11-03 10:38:00Z dfr $ */ void fha_nd_complete(SVCTHREAD *, struct svc_req *); ! SVCTHREAD *fha_assign(SVCTHREAD *, struct svc_req *); --- 24,112 ---- */ /* $FreeBSD: head/sys/nfsserver/nfs_fha.h 184588 2008-11-03 10:38:00Z dfr $ */ + #ifndef _NFS_FHA_H + #define _NFS_FHA_H 1 + + #ifdef _KERNEL + + /* Sysctl defaults. */ + #define FHA_DEF_ENABLE 1 + #define FHA_DEF_BIN_SHIFT 22 /* 4MB */ + #define FHA_DEF_MAX_NFSDS_PER_FH 8 + #define FHA_DEF_MAX_REQS_PER_NFSD 0 /* Unlimited */ + + /* This is the global structure that represents the state of the fha system. */ + struct fha_global { + struct fha_hash_entry_list *hashtable; + u_long hashmask; + }; + + struct fha_ctls { + int enable; + uint32_t bin_shift; + uint32_t max_nfsds_per_fh; + uint32_t max_reqs_per_nfsd; + }; + + /* + * These are the entries in the filehandle hash. They talk about a specific + * file, requests against which are being handled by one or more nfsds. We + * keep a chain of nfsds against the file. We only have more than one if reads + * are ongoing, and then only if the reads affect disparate regions of the + * file. + * + * In general, we want to assign a new request to an existing nfsd if it is + * going to contend with work happening already on that nfsd, or if the + * operation is a read and the nfsd is already handling a proximate read. We + * do this to avoid jumping around in the read stream unnecessarily, and to + * avoid contention between threads over single files. + */ + struct fha_hash_entry { + LIST_ENTRY(fha_hash_entry) link; + u_int64_t fh; + u_int32_t num_rw; + u_int32_t num_exclusive; + u_int8_t num_threads; + struct svcthread_list threads; + }; + + LIST_HEAD(fha_hash_entry_list, fha_hash_entry); + + /* A structure used for passing around data internally. */ + struct fha_info { + u_int64_t fh; + off_t offset; + int locktype; + }; + + struct fha_callbacks { + rpcproc_t (*get_procnum)(rpcproc_t procnum); + int (*realign)(struct mbuf **mb, int malloc_flags); + int (*get_fh)(fhandle_t *fh, int v3, struct mbuf **md, caddr_t *dpos); + int (*is_read)(rpcproc_t procnum); + int (*is_write)(rpcproc_t procnum); + int (*get_offset)(struct mbuf **md, caddr_t *dpos, int v3, struct + fha_info *info); + int (*no_offset)(rpcproc_t procnum); + void (*set_locktype)(rpcproc_t procnum, struct fha_info *info); + int (*fhe_stats_sysctl)(SYSCTL_HANDLER_ARGS); + }; + + struct fha_params { + struct fha_global g_fha; + struct sysctl_ctx_list sysctl_ctx; + struct sysctl_oid *sysctl_tree; + struct fha_ctls ctls; + struct fha_callbacks callbacks; + char server_name[32]; + SVCPOOL **pool; + }; + void fha_nd_complete(SVCTHREAD *, struct svc_req *); ! SVCTHREAD *fha_assign(SVCTHREAD *, struct svc_req *, struct fha_params *); ! void fha_init(struct fha_params *softc); ! void fha_uninit(struct fha_params *softc); ! int fhe_stats_sysctl(SYSCTL_HANDLER_ARGS, struct fha_params *softc); ! ! #endif /* _KERNEL */ ! 
#endif /* _NFS_FHA_H_ */ *** /dev/null Fri Mar 29 13:26:00 2013 --- src/sys/nfsserver/nfs_fha_old.c Fri Mar 29 13:26:45 2013 *************** *** 0 **** --- 1,241 ---- + /*- + * Copyright (c) 2008 Isilon Inc http://www.isilon.com/ + * Copyright (c) 2013 Spectra Logic Corporation + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * 1. Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * 2. Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in the + * documentation and/or other materials provided with the distribution. + * + * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND + * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE + * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE + * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF + * SUCH DAMAGE. + */ + + #include + __FBSDID("$FreeBSD$"); + + #include + #include + #include + #include + #include + #include + #include + #include + #include + + #include + #include + #include + #include + #include + #include + #include + + static void fhaold_init(void *foo); + static void fhaold_uninit(void *foo); + rpcproc_t fhaold_get_procnum(rpcproc_t procnum); + int fhaold_realign(struct mbuf **mb, int malloc_flags); + int fhaold_get_fh(fhandle_t *fh, int v3, struct mbuf **md, caddr_t *dpos); + int fhaold_is_read(rpcproc_t procnum); + int fhaold_is_write(rpcproc_t procnum); + int fhaold_get_offset(struct mbuf **md, caddr_t *dpos, int v3, + struct fha_info *info); + int fhaold_no_offset(rpcproc_t procnum); + void fhaold_set_locktype(rpcproc_t procnum, struct fha_info *info); + static int fheold_stats_sysctl(SYSCTL_HANDLER_ARGS); + + static struct fha_params fhaold_softc; + + SYSCTL_DECL(_vfs_nfsrv); + + extern SVCPOOL *nfsrv_pool; + + SYSINIT(nfs_fhaold, SI_SUB_ROOT_CONF, SI_ORDER_ANY, fhaold_init, NULL); + SYSUNINIT(nfs_fhaold, SI_SUB_ROOT_CONF, SI_ORDER_ANY, fhaold_uninit, NULL); + + static void + fhaold_init(void *foo) + { + struct fha_params *softc; + + softc = &fhaold_softc; + + bzero(softc, sizeof(*softc)); + + /* + * Setup the callbacks for this FHA personality. + */ + softc->callbacks.get_procnum = fhaold_get_procnum; + softc->callbacks.realign = fhaold_realign; + softc->callbacks.get_fh = fhaold_get_fh; + softc->callbacks.is_read = fhaold_is_read; + softc->callbacks.is_write = fhaold_is_write; + softc->callbacks.get_offset = fhaold_get_offset; + softc->callbacks.no_offset = fhaold_no_offset; + softc->callbacks.set_locktype = fhaold_set_locktype; + softc->callbacks.fhe_stats_sysctl = fheold_stats_sysctl; + + snprintf(softc->server_name, sizeof(softc->server_name), + FHAOLD_SERVER_NAME); + + softc->pool = &nfsrv_pool; + + /* + * Initialize the sysctl context list for the fha module. 
+ */ + sysctl_ctx_init(&softc->sysctl_ctx); + softc->sysctl_tree = SYSCTL_ADD_NODE(&softc->sysctl_ctx, + SYSCTL_STATIC_CHILDREN(_vfs_nfsrv), OID_AUTO, "fha", CTLFLAG_RD, + 0, "fha node"); + if (softc->sysctl_tree == NULL) { + printf("%s: unable to allocate sysctl tree\n", __func__); + return; + } + fha_init(softc); + } + + static void + fhaold_uninit(void *foo) + { + struct fha_params *softc; + + softc = &fhaold_softc; + + fha_uninit(softc); + } + + + rpcproc_t + fhaold_get_procnum(rpcproc_t procnum) + { + if (procnum > NFSV2PROC_STATFS) + return (-1); + + return (nfsrv_nfsv3_procid[procnum]); + } + + int + fhaold_realign(struct mbuf **mb, int malloc_flags) + { + return (nfs_realign(mb, malloc_flags)); + } + + int + fhaold_get_fh(fhandle_t *fh, int v3, struct mbuf **md, caddr_t *dpos) + { + return (nfsm_srvmtofh_xx(fh, v3, md, dpos)); + } + + int + fhaold_is_read(rpcproc_t procnum) + { + if (procnum == NFSPROC_READ) + return (1); + else + return (0); + } + + int + fhaold_is_write(rpcproc_t procnum) + { + if (procnum == NFSPROC_WRITE) + return (1); + else + return (0); + } + + int + fhaold_get_offset(struct mbuf **md, caddr_t *dpos, int v3, + struct fha_info *info) + { + uint32_t *tl; + + if (v3) { + tl = nfsm_dissect_xx_nonblock(2 * NFSX_UNSIGNED, md, dpos); + if (tl == NULL) + goto out; + info->offset = fxdr_hyper(tl); + } else { + tl = nfsm_dissect_xx_nonblock(NFSX_UNSIGNED, md, dpos); + if (tl == NULL) + goto out; + info->offset = fxdr_unsigned(uint32_t, *tl); + } + + return (0); + out: + return (-1); + } + + int + fhaold_no_offset(rpcproc_t procnum) + { + if (procnum == NFSPROC_FSSTAT || + procnum == NFSPROC_FSINFO || + procnum == NFSPROC_PATHCONF || + procnum == NFSPROC_NOOP || + procnum == NFSPROC_NULL) + return (1); + else + return (0); + } + + void + fhaold_set_locktype(rpcproc_t procnum, struct fha_info *info) + { + switch (procnum) { + case NFSPROC_NULL: + case NFSPROC_GETATTR: + case NFSPROC_LOOKUP: + case NFSPROC_ACCESS: + case NFSPROC_READLINK: + case NFSPROC_READ: + case NFSPROC_READDIR: + case NFSPROC_READDIRPLUS: + case NFSPROC_WRITE: + info->locktype = LK_SHARED; + break; + case NFSPROC_SETATTR: + case NFSPROC_CREATE: + case NFSPROC_MKDIR: + case NFSPROC_SYMLINK: + case NFSPROC_MKNOD: + case NFSPROC_REMOVE: + case NFSPROC_RMDIR: + case NFSPROC_RENAME: + case NFSPROC_LINK: + case NFSPROC_FSSTAT: + case NFSPROC_FSINFO: + case NFSPROC_PATHCONF: + case NFSPROC_COMMIT: + case NFSPROC_NOOP: + info->locktype = LK_EXCLUSIVE; + break; + } + } + + static int + fheold_stats_sysctl(SYSCTL_HANDLER_ARGS) + { + return (fhe_stats_sysctl(oidp, arg1, arg2, req, &fhaold_softc)); + } + + SVCTHREAD * + fhaold_assign(SVCTHREAD *this_thread, struct svc_req *req) + { + return (fha_assign(this_thread, req, &fhaold_softc)); + } ==== - //depot/users/kenm/FreeBSD-test5/sys/nfsserver/nfs_fha_old.c#2 ==== *** /dev/null Fri Mar 29 13:26:00 2013 --- src/sys/nfsserver/nfs_fha_old.h Fri Mar 29 13:26:45 2013 *************** *** 0 **** --- 1,38 ---- + /*- + * Copyright (c) 2008 Isilon Inc http://www.isilon.com/ + * Copyright (c) 2013 Spectra Logic Corporation + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * 1. Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * 2. 
Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in the + * documentation and/or other materials provided with the distribution. + * + * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND + * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE + * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE + * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF + * SUCH DAMAGE. + */ + /* $FreeBSD$ */ + + #ifndef _NFS_FHA_OLD_H + #define _NFS_FHA_OLD_H 1 + + #ifdef _KERNEL + + #define FHAOLD_SERVER_NAME "nfsrv" + + SVCTHREAD *fhaold_assign(SVCTHREAD *this_thread, struct svc_req *req); + #endif /* _KERNEL */ + + #endif /* _NFS_FHA_OLD_H */ ==== - //depot/users/kenm/FreeBSD-test5/sys/nfsserver/nfs_fha_old.h#1 ==== *** src/sys/nfsserver/nfs_srvkrpc.c.orig --- src/sys/nfsserver/nfs_srvkrpc.c *************** *** 81,86 **** --- 81,87 ---- #include #include #include + #include #include *************** *** 532,538 **** nfsrv_pool = svcpool_create("nfsd", SYSCTL_STATIC_CHILDREN(_vfs_nfsrv)); nfsrv_pool->sp_rcache = replay_newcache(nfsrv_replay_size()); ! nfsrv_pool->sp_assign = fha_assign; nfsrv_pool->sp_done = fha_nd_complete; nfsrv_nmbclusters_tag = EVENTHANDLER_REGISTER(nmbclusters_change, nfsrv_nmbclusters_change, NULL, EVENTHANDLER_PRI_FIRST); --- 533,539 ---- nfsrv_pool = svcpool_create("nfsd", SYSCTL_STATIC_CHILDREN(_vfs_nfsrv)); nfsrv_pool->sp_rcache = replay_newcache(nfsrv_replay_size()); ! 
nfsrv_pool->sp_assign = fhaold_assign; nfsrv_pool->sp_done = fha_nd_complete; nfsrv_nmbclusters_tag = EVENTHANDLER_REGISTER(nmbclusters_change, nfsrv_nmbclusters_change, NULL, EVENTHANDLER_PRI_FIRST); --LQksG6bCIzRHxTLp-- From owner-freebsd-fs@FreeBSD.ORG Thu Apr 4 22:21:16 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 4E10F135 for ; Thu, 4 Apr 2013 22:21:16 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) Received: from esa-annu.net.uoguelph.ca (esa-annu.mail.uoguelph.ca [131.104.91.36]) by mx1.freebsd.org (Postfix) with ESMTP id 1908BF0 for ; Thu, 4 Apr 2013 22:21:15 +0000 (UTC) X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: AqIEAD38XVGDaFvO/2dsb2JhbABDDgiDJoMovWGBG3SCHwEBBAEjVgUWGAICDRkCWQaIIQYMrxeSSIEjjEGBAzQHgi2BEwOWboEgj22CTFsggS89 X-IronPort-AV: E=Sophos;i="4.87,411,1363147200"; d="scan'208";a="22509868" Received: from erie.cs.uoguelph.ca (HELO zcs3.mail.uoguelph.ca) ([131.104.91.206]) by esa-annu.net.uoguelph.ca with ESMTP; 04 Apr 2013 18:21:14 -0400 Received: from zcs3.mail.uoguelph.ca (localhost.localdomain [127.0.0.1]) by zcs3.mail.uoguelph.ca (Postfix) with ESMTP id 5EE03B4041; Thu, 4 Apr 2013 18:21:14 -0400 (EDT) Date: Thu, 4 Apr 2013 18:21:14 -0400 (EDT) From: Rick Macklem To: Graham Allan Message-ID: <338176325.529222.1365114074330.JavaMail.root@erie.cs.uoguelph.ca> In-Reply-To: <515D0287.2060704@physics.umn.edu> Subject: Re: zfs home directories best practice MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [172.17.91.202] X-Mailer: Zimbra 6.0.10_GA_2692 (ZimbraWebClient - IE8 (Win)/6.0.10_GA_2692) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 04 Apr 2013 22:21:16 -0000 Graham Allan wrote: > On 4/3/2013 6:56 PM, Rick Macklem wrote: > >> > > Well, there isn't any limit to the # of exported file systems afaik, > > but updating a large /etc/exports file takes quite a bit of time and > > when you use mountd (the default) for this, you can have problems. > > (You either have a period of time when no client can get response > > from the server or a period of time when I/O fails because the > > file system isn't re-exported yet.) > > > > If you choose this approach, you should look seriously at using > > nfse (on sourceforge) instead of mountd. > > That's an interesting-looking project though I'm beginning to think > that > unless there's some serious downside to the "one big filesystem", I > should just defer the per-user filesystems for the system after this > one. As you remind me below, I'll probably have other issues to chase > down besides that one (performance as well as making the jump to > NFSv4...) > > > You might also want to contact Garrett Wollman w.r.t. the NFS > > server patch(es) and setup he is using, since he has been > > working through performance issues (relatively successfully > > now, as I understand) for a fairly large NFS/ZFS server. > > You should be able to find a thread discussing this on > > freebsd-fs or freebsd-current. > > I found the thread "NFS server bottlenecks" on freebsd-hackers, which > has a lot of interesting reading, and then also "NFS DRC size" on > freebsd-fs. 
We might dig into some of that material (eg DRC-related > patches) though I probably need to spend more time on basics first > (kernel parameters, number of nfsd threads, etc). > Hopefully I can get to-gether with ivoras@ in May and come up with a patch for head/current. If you want to try something before then, this patch is roughly what Garrett is using: http://people.freebsd.org/~rmacklem/drc4.patch rick > Thanks for the pointers, > > Graham > -- From owner-freebsd-fs@FreeBSD.ORG Thu Apr 4 23:02:34 2013 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 9105D6B0 for ; Thu, 4 Apr 2013 23:02:34 +0000 (UTC) (envelope-from list_freebsd@bluerosetech.com) Received: from yoshi.bluerosetech.com (yoshi.bluerosetech.com [IPv6:2607:f2f8:a450::66]) by mx1.freebsd.org (Postfix) with ESMTP id 7DDA321C for ; Thu, 4 Apr 2013 23:02:34 +0000 (UTC) Received: from chombo.houseloki.net (montesse-2-pt.tunnel.tserv3.fmt2.ipv6.he.net [IPv6:2001:470:1f04:19b9::2]) by yoshi.bluerosetech.com (Postfix) with ESMTPSA id 3F596E629E; Thu, 4 Apr 2013 16:02:33 -0700 (PDT) Received: from [10.26.25.70] (c-50-137-47-22.hsd1.or.comcast.net [50.137.47.22]) by chombo.houseloki.net (Postfix) with ESMTPSA id 587AB829; Thu, 4 Apr 2013 16:02:32 -0700 (PDT) Message-ID: <515E0696.2020901@bluerosetech.com> Date: Thu, 04 Apr 2013 16:02:46 -0700 From: Darren Pilgrim User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:17.0) Gecko/20130307 Thunderbird/17.0.4 MIME-Version: 1.0 To: Jay Borkenhagen Subject: Re: mounting failed with error 2 References: <20825.59038.104304.161698@oz.mt.att.com> <515DB070.1090803@bluerosetech.com> <20829.52349.314652.424391@oz.mt.att.com> In-Reply-To: <20829.52349.314652.424391@oz.mt.att.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Cc: fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 04 Apr 2013 23:02:34 -0000 On 4/4/2013 11:54 AM, Jay Borkenhagen wrote: > Darren Pilgrim writes: > > Reboot using the install disk, go to the Live CD. Import the pool using > > an altroot and a cachefile: > > > > zpool import -c /tmp/zpool.cache -o altroot=/mnt zroot > > > > Copy /tmp/zpool.cache to /mnt/ROOT/boot/zfs/zpool.cache, then reboot > > *without* exporting the pool. > > Thanks, Darren. 
> > However, when I try that I get this: > > --------------- > root@:/root # zpool import -c /tmp/zpool.cache -o altroot=/mnt zroot > failed to open cache file: No such file or directory > cannot import 'zroot': no such pool available > root@:/root # > --------------- Sorry, wrong cachefile option, try: zpool import -o cachefile=/tmp/zpool.cache -o altroot=/mnt zroot From owner-freebsd-fs@FreeBSD.ORG Fri Apr 5 10:17:29 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 94DDE54C for ; Fri, 5 Apr 2013 10:17:29 +0000 (UTC) (envelope-from joar.jegleim@gmail.com) Received: from mail-wg0-x229.google.com (mail-wg0-x229.google.com [IPv6:2a00:1450:400c:c00::229]) by mx1.freebsd.org (Postfix) with ESMTP id 1CE8CD27 for ; Fri, 5 Apr 2013 10:17:28 +0000 (UTC) Received: by mail-wg0-f41.google.com with SMTP id y10so1579674wgg.4 for ; Fri, 05 Apr 2013 03:17:28 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:x-received:date:message-id:subject:from:to :content-type; bh=FIaHdCygKnUxnqlZp/Lim7gIHyA/vKXHlyZM5AJq8T4=; b=DAnc0828ViI2EgghY/wtGxZ08NNKynOyCBxVLRPzMK7Yo9GFwtf3TFWn3fZhHNcxWp NCtb/hKw431rJWtN5Foxjtawy7lipzKThRsMsgLEj1v8gDFnxZk4eGAwJgFx1LwtuRXM Nmdz7y1WRiAX2s6+IuxBKvYnR9Hkps4n2nhxcSf6+cw2SUVlQh9LCVfhcHniniME966M HvTfN4qGzyXordxZmQTbHNvOz65ADSzcQv3Z1rPIWUuZyMU3ftwXrldOtBuD5KCsMfz6 8Mg36fJIIL8avrdIawkBOdFAHIQH9QNLG1D76aaHaM6YKJJREDy2pkTA1T2NVXyMxJJg ODOQ== MIME-Version: 1.0 X-Received: by 10.194.82.104 with SMTP id h8mr15367296wjy.3.1365157047452; Fri, 05 Apr 2013 03:17:27 -0700 (PDT) Received: by 10.216.34.9 with HTTP; Fri, 5 Apr 2013 03:17:27 -0700 (PDT) Date: Fri, 5 Apr 2013 12:17:27 +0200 Message-ID: Subject: Regarding regular zfs From: Joar Jegleim To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.14 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 05 Apr 2013 10:17:29 -0000 Hi FreeBSD ! I've already sent this one to questions@freebsd.org, but realised this list would be a better option. So I've got this setup where we have a storage server delivering about 2 million jpeg's as a backend for a website ( it's ~1TB of data) The storage server is running zfs and every 15 minutes it does a zfs send to a 'slave', and our proxy will fail over to the slave if the main storage server goes down . I've got this script that initially zfs send's a whole zfs volume, and for every send after that only sends the diff . So after the initial zfs send, the diff's usually take less than a minute to send over. I've had increasing problems on the 'slave', it seem to grind to a halt for anything between 5-20 seconds after every zfs receive . Everything on the server halts / hangs completely. I've had a couple go's on trying to solve / figure out what's happening without luck, and this 3rd time I've invested even more time on the problem . To sum it up: -Server was initially on 8.2-RELEASE -I've set some sysctl variables such as: # 16GB arc_max ( server got 30GB of ram, but had a couple 'freeze' situations, suspect zfs.arc ate too much memory) vfs.zfs.arc_max=17179869184 # 8.2 default to 30 here, setting it to 5 which is default from 8.3 and onwards vfs.zfs.txg.timeout="5" # Set TXG write limit to a lower threshold. 
This helps "level out" # the throughput rate (see "zpool iostat"). A value of 256MB works well # for systems with 4 GB of RAM, while 1 GB works well for us w/ 8 GB on # disks which have 64 MB cache. <
> # NOTE: in Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 4446E850 for ; Fri, 5 Apr 2013 10:24:30 +0000 (UTC) (envelope-from zeus@ibs.dn.ua) Received: from relay.ibs.dn.ua (relay.ibs.dn.ua [91.216.196.25]) by mx1.freebsd.org (Postfix) with ESMTP id CC97AD8A for ; Fri, 5 Apr 2013 10:24:29 +0000 (UTC) Received: from ibs.dn.ua (relay.ibs.dn.ua [91.216.196.25]) by relay.ibs.dn.ua with ESMTP id r35AJsrs060185 for ; Fri, 5 Apr 2013 13:19:54 +0300 (EEST) Message-ID: <20130405131954.60183@relay.ibs.dn.ua> Date: Fri, 05 Apr 2013 13:19:54 +0300 From: Zeus Panchenko To: Subject: Error ZFS-8000-8A on 9.1 under VMware Organization: I.B.S. LLC X-Mailer: MH-E 8.3.1; GNU Mailutils 2.99.98; GNU Emacs 24.0.93 X-Face: &sReWXo3Iwtqql1[My(t1Gkx; y?KF@KF`4X+'9Cs@PtK^y%}^.>Mtbpyz6U=,Op:KPOT.uG )Nvx`=er!l?WASh7KeaGhga"1[&yz$_7ir'cVp7o%CGbJ/V)j/=]vzvvcqcZkf; JDurQG6wTg+?/xA go`}1.Ze//K; Fk&/&OoHd'[b7iGt2UO>o(YskCT[_D)kh4!yY'<&:yt+zM=A`@`~9U+P[qS:f; #9z~ Or/Bo#N-'S'!'[3Wog'ADkyMqmGDvga?WW)qd=?)`Y&k=o}>!ST\ MIME-Version: 1.0 Content-Type: text/plain Content-Transfer-Encoding: quoted-printable X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list Reply-To: Zeus Panchenko List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 05 Apr 2013 10:24:30 -0000 =2D----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 hi, I face weird situation ... may somebody advice, please? Under VMware ESX 4.1.2 two FreeBSD-9.1-amd64-on-ZFS boxes were set. on both of them open-vm-tools are installed after some time of correct functionality, I'm getting error ZFS-8000-8A on both of the boxes ... after corrupted files removal and `zpool clean' no errors reported by `zpool scrub' than I was trying to svn stable and re-build/install world to be sure all OS stuff is not corrupted, but while buildworld, build/installkernel went well, installworld fails with different for both boxes errors=20 "install: ...: Input/output error" and in dmesg output is nothing related to the issue the same was happening when I was installing FreeBSD on UFS before ... is there some special configuration issues for FreeBSD on ZFS under VM? =2D ---[ box1 ]----------------------------------------------------------- box1 > zpool status -v pool: zroot state: ONLINE status: One or more devices has experienced an error resulting in data corruption. Applications may be affected. action: Restore the file in question if possible. Otherwise restore the entire pool from backup. 
see: http://illumos.org/msg/ZFS-8000-8A
 scan: scrub repaired 2.50K in 0h4m with 6 errors on Tue Mar 26 20:40:26 2013
config:

	NAME         STATE     READ WRITE CKSUM
	zroot        ONLINE       0     0     6
	  gpt/disk0  ONLINE       0     0    13

errors: Permanent errors have been detected in the following files:

        /usr/obj/usr/src/tmp/legacy/usr/bin/indxbib
        /usr/obj/usr/src/sys/HELP/modules/usr/src/sys/modules/drm/radeon/radeon.ko.debug
        /usr/obj/usr/src/kerberos5/lib/libasn1/.depend
        /usr/obj/lib32/usr/src/lib/libsmb/libsmb_p.a
        /usr/local/lib/libsicudata.a
        /usr/src/.svn/pristine/0f/0fc73fd8e2874afad8bdbf2d921dac0ef73ca859.svn-base

box1 > uname -a
FreeBSD box1.foo 9.1-RC3 FreeBSD 9.1-RC3 #0 r242324: GENERIC amd64

- ---[ box2 ]-----------------------------------------------------------
box2 > zpool status -v
  pool: sharenfs
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	sharenfs    ONLINE       0     0     0
	  da1       ONLINE       0     0     0

errors: No known data errors

  pool: zroot
 state: ONLINE
status: One or more devices has experienced an error resulting in data
	corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
	entire pool from backup.
   see: http://illumos.org/msg/ZFS-8000-8A
  scan: scrub repaired 0 in 0h2m with 11 errors on Wed Apr 3 15:44:28 2013
config:

	NAME        STATE     READ WRITE CKSUM
	zroot       ONLINE       0     0    16
	  da0p3     ONLINE       0     0    27

errors: Permanent errors have been detected in the following files:

        /usr/obj/usr/src/lib/clang/libllvmx86codegen/X86FastISel.o
        /usr/obj/usr/src/sys/NETBOOT/geom_slice.o
        /usr/obj/usr/src/tmp/usr/src/gnu/usr.bin/cc/cc_int/tree.o
        /usr/ports/graphics/xfig-devel/pkg-plist
        /usr/obj/lib32/usr/src/gnu/lib/libstdc++/locale-inst.o
        /usr/local/share/emacs/24.1/etc/charsets/IBM905.map
        /usr/obj/usr/src/gnu/usr.bin/cc/cc_int/libbackend.a
        /usr/ports/INDEX-9.db
        /usr/src/sys/dev/drm2/drm_pciids.h
        /usr/home/wzooff/pxe/initrd
        /tftpboot/pxe.old/ubcd/ubcd.iso

box2 > uname -a
FreeBSD box2.foo 9.1-PRERELEASE FreeBSD 9.1-PRERELEASE #0: BOX2 amd64

-- 
Zeus V. Panchenko          jid:zeus@im.ibs.dn.ua
IT Dpt., I.B.S.
LLC GMT+2 (EET) =2D----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.19 (FreeBSD) iEYEARECAAYFAlFepUoACgkQr3jpPg/3oyo8fgCfXgjspk4FlscPMQ6H2uq7gEFf gm0AoL7VuVtkIe3bflGFobHto5lyJLI+ =3DYCCs =2D----END PGP SIGNATURE----- From owner-freebsd-fs@FreeBSD.ORG Fri Apr 5 11:00:45 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 9FAD157D for ; Fri, 5 Apr 2013 11:00:45 +0000 (UTC) (envelope-from db@nipsi.de) Received: from fop.bsdsystems.de (mx.bsdsystems.de [88.198.57.43]) by mx1.freebsd.org (Postfix) with ESMTP id 2D493EEA for ; Fri, 5 Apr 2013 11:00:44 +0000 (UTC) Received: from [192.168.24.177] (mail.gift-company.com [85.183.131.10]) (using TLSv1 with cipher AES128-SHA (128/128 bits)) (No client certificate requested) by fop.bsdsystems.de (Postfix) with ESMTP id D5B5E51E06; Fri, 5 Apr 2013 13:00:42 +0200 (CEST) Subject: Re: ZFS in production enviroments Mime-Version: 1.0 (Apple Message framework v1085) From: dennis berger In-Reply-To: Date: Fri, 5 Apr 2013 13:00:42 +0200 Message-Id: <4BC15B7B-4893-4167-ACF0-1CB066DE4EE3@nipsi.de> References: To: Mark Felder X-Mailer: Apple Mail (2.1085) Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.14 Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 05 Apr 2013 11:00:45 -0000 Thanks for the setup information. If you have time may you describe your head units a little bit? How do you configure istgt + zfs for iscsi volumes. On our system I = often see a lot of IOPS when I write to an exported zvol.=20 Maybe this is due to wrong blocksize config in istgt. I don't see those high IOPS in NFS exported volumes for example. Best, -dennis Am 04.04.2013 um 14:46 schrieb Mark Felder: > Our setup: >=20 > * FreeBSD 9-STABLE (from before 9.0-RELEASE) > * HP DL360 servers acting as "head units" > * LSI SAS 9201-16e controllers > * Intel NICs > * DataOn Storage DNS-1630 JBODs with dual controllers (LSI based) > * 2TB 7200RPM Hitachi SATA HDs with SAS interposers (LSISS9252) > * Intel SSDs for cache/log devices > * gmultipath is handling the active/active data paths to the drives. = ex: ZFS uses multipath/disk01 in the pool > * istgt serving iSCSI to Xen and ESXi from zvols >=20 > Built these just before the hard drive prices spiked from the floods. = I need to jam more RAM in there and it would be nice to be running = FreeBSD 10 with some of the newer ZFS code and having access to TRIM. = Uptime on these servers is over a year. 
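As a rough illustration of the blocksize question above (pool, dataset and
target names here are hypothetical, not taken from this setup): a zvol's
volblocksize is fixed at creation time, and when the initiator's typical
write is smaller than that block, each write can turn into a read-modify-write
of the whole block, which shows up as inflated IOPS on the exported zvol.
A minimal sketch, assuming istgt's stock configuration layout:

# create the backing zvol with an explicit block size (the zfs default is 8K)
zfs create -V 500G -o volblocksize=64K tank/istgt/vol0
zfs get volblocksize tank/istgt/vol0

# minimal istgt.conf logical unit pointing at that zvol:
# [LogicalUnit1]
#   TargetName   disk1
#   Mapping      PortalGroup1 InitiatorGroup1
#   UnitType     Disk
#   LUN0 Storage /dev/zvol/tank/istgt/vol0 Auto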
> _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-fs@FreeBSD.ORG Fri Apr 5 11:08:15 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 5D735660 for ; Fri, 5 Apr 2013 11:08:15 +0000 (UTC) (envelope-from ml@my.gd) Received: from mail-wg0-f49.google.com (mail-wg0-f49.google.com [74.125.82.49]) by mx1.freebsd.org (Postfix) with ESMTP id EB714F25 for ; Fri, 5 Apr 2013 11:08:14 +0000 (UTC) Received: by mail-wg0-f49.google.com with SMTP id e11so3569416wgh.28 for ; Fri, 05 Apr 2013 04:08:13 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20120113; h=x-received:references:mime-version:in-reply-to:content-type :content-transfer-encoding:message-id:cc:x-mailer:from:subject:date :to:x-gm-message-state; bh=yYrq//B5M4a/o/QH/paWTv2qIzcMza4aAOwK7voYw/M=; b=a24pMTUlyLB+hCzPOtNmKJvZMu8Ymj9oEPgdZY6eIhtb20wdqRYu9PLtuD5+N1XFeu W5VfqQIDDnswhazOCaVat5MeX7ayICOuz52aPBw86mesg6BieG61RjSbmyvOSBvrWz6f a7r7L1xyLjbWAxiXKMIwqpujKKEBXZsrNP99fzhmTMF+BpjFnHlWD+wh0luwsCg2AFkB YZcf4ZPtzNvZOR2L4iGpRXfRIZWaw3uCeTC5c9WKiUf3E/O0JgeICQhCwvzWt/MDKhkd v7punoD25jAZ5VFFsHVqx1Bg4NRRxRAwWFCzsCL+qNsds7jnq8U1ffWD/WLGC5TDez9Z 2flw== X-Received: by 10.180.103.40 with SMTP id ft8mr3358230wib.28.1365160093564; Fri, 05 Apr 2013 04:08:13 -0700 (PDT) Received: from [100.79.118.101] (22.26.90.92.rev.sfr.net. [92.90.26.22]) by mx.google.com with ESMTPS id du2sm3037095wib.0.2013.04.05.04.08.11 (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Fri, 05 Apr 2013 04:08:12 -0700 (PDT) References: Mime-Version: 1.0 (1.0) In-Reply-To: Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Message-Id: <8B0FFF01-B8CC-41C0-B0A2-58046EA4E998@my.gd> X-Mailer: iPhone Mail (10B144) From: Damien Fleuriot Subject: Re: Regarding regular zfs Date: Fri, 5 Apr 2013 13:07:39 +0200 To: Joar Jegleim X-Gm-Message-State: ALoCoQnqcBcCFnKWpcoeKlA+guiu+z1Kd/t5IIr1ZMJbQI/b/zb+D8tHwHIc9edPb6uWOKPHgaoF Cc: "freebsd-fs@freebsd.org" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 05 Apr 2013 11:08:15 -0000 On 5 Apr 2013, at 12:17, Joar Jegleim wrote: > Hi FreeBSD ! > > I've already sent this one to questions@freebsd.org, but realised this list > would be a better option. > > So I've got this setup where we have a storage server delivering about > 2 million jpeg's as a backend for a website ( it's ~1TB of data) > The storage server is running zfs and every 15 minutes it does a zfs > send to a 'slave', and our proxy will fail over to the slave if the > main storage server goes down . > I've got this script that initially zfs send's a whole zfs volume, and > for every send after that only sends the diff . So after the initial zfs > send, the diff's usually take less than a minute to send over. > > I've had increasing problems on the 'slave', it seem to grind to a > halt for anything between 5-20 seconds after every zfs receive . Everything > on the server halts / hangs completely. > > I've had a couple go's on trying to solve / figure out what's > happening without luck, and this 3rd time I've invested even more time > on the problem . 
> > To sum it up: > -Server was initially on 8.2-RELEASE > -I've set some sysctl variables such as: > > # 16GB arc_max ( server got 30GB of ram, but had a couple 'freeze' > situations, suspect zfs.arc ate too much memory) > vfs.zfs.arc_max=17179869184 > > # 8.2 default to 30 here, setting it to 5 which is default from 8.3 and > onwards > vfs.zfs.txg.timeout="5" > > # Set TXG write limit to a lower threshold. This helps "level out" > # the throughput rate (see "zpool iostat"). A value of 256MB works well > # for systems with 4 GB of RAM, while 1 GB works well for us w/ 8 GB on > # disks which have 64 MB cache. <
> > # NOTE: in #vfs.zfs.txg.write_limit_override=1073741824 # for 8.2 > vfs.zfs.write_limit_override=1073741824 # for 8.3 and above > > -I've implemented mbuffer for the zfs send / receive operations. With > mbuffer the sync went a lot faster, but still got the same symptoms > when the zfs receive is done, the hang / unresponsiveness returns for > 5-20 seconds > -I've upgraded to 8.3-RELEASE ( + zpool upgrade and zfs upgrade to > V28), same symptoms > -I've upgraded to 9.1-RELEASE, still same symptoms > > The period where the server is unresponsive after a zfs receive, I > suspected it would correlate with the amount of data being sent, but > even if there is only a couple MB's data the hang / unresponsiveness > is still substantial . > > I suspect it may have something to do with the zfs volume being sent > is mount'ed on the slave, and I'm also doing the backups from the > slave, which means a lot of the time the backup server is rsyncing the > zfs volume being updated. > I've noticed that the unresponsiveness / hang situations occur while > the backupserver is rsync'ing from the zfs volume being updated, when > the backupserver is 'done' and nothing is working with files in the > zfs volume being updated i hardly notice any of the symptoms (mabye > just a minor lag for much less than a second, hardly noticeable) . > > So my question(s) to the list would be: > In my setup have I taken the use case for zfs send / receive too far > (?) as in, it's not meant for this kind of syncing and this often, so > there's actually nothing 'wrong'. > > -- > ---------------------- > Joar Jegleim > Quick and dirty reply, what's your pool usage % ? >75-80% an performance takes a dive. Let's just make sure you're not there yet. From owner-freebsd-fs@FreeBSD.ORG Fri Apr 5 11:14:40 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id A628A866 for ; Fri, 5 Apr 2013 11:14:40 +0000 (UTC) (envelope-from freebsd-listen@fabiankeil.de) Received: from smtprelay04.ispgateway.de (smtprelay04.ispgateway.de [80.67.18.16]) by mx1.freebsd.org (Postfix) with ESMTP id 6C074F78 for ; Fri, 5 Apr 2013 11:14:40 +0000 (UTC) Received: from [87.79.197.81] (helo=fabiankeil.de) by smtprelay04.ispgateway.de with esmtpsa (SSLv3:AES128-SHA:128) (Exim 4.68) (envelope-from ) id 1UO4K0-0005X0-I8; Fri, 05 Apr 2013 12:56:40 +0200 Date: Fri, 5 Apr 2013 12:56:40 +0200 From: Fabian Keil To: freebsd-fs@freebsd.org Subject: Re: Error ZFS-8000-8A on 9.1 under VMware Message-ID: <20130405125635.41478c64@fabiankeil.de> In-Reply-To: <20130405131954.60183@relay.ibs.dn.ua> References: <20130405131954.60183@relay.ibs.dn.ua> Mime-Version: 1.0 Content-Type: multipart/signed; micalg=PGP-SHA1; boundary="Sig_/n3vvZPfS.kyonTgUyN.Rc1X"; protocol="application/pgp-signature" X-Df-Sender: Nzc1MDY3 X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 05 Apr 2013 11:14:40 -0000 --Sig_/n3vvZPfS.kyonTgUyN.Rc1X Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: quoted-printable Zeus Panchenko wrote: > I face weird situation ... may somebody advice, please? >=20 > Under VMware ESX 4.1.2 two FreeBSD-9.1-amd64-on-ZFS boxes were set. > on both of them open-vm-tools are installed >=20 > after some time of correct functionality, I'm getting error ZFS-8000-8A > on both of the boxes ... 
>=20 > after corrupted files removal and `zpool clean' no errors reported by > `zpool scrub' Are you sure the "Permanent errors" are (or were) actually permanent? If they are, you should get an error message when cat'ing the supposedly affected files to /dev/null and the files should show up again when scrubbing the pool without deleting any files first. Depending on the error, reading the files again can also trigger kernel messages that could be useful to analyse the problem. I'm not using VMware, but in my experience ZFS treats some temporary errors in non-redundant configurations as permanent until the pool is scrubbed. It happens rarely, so I haven't properly looked into this yet. Fabian --Sig_/n3vvZPfS.kyonTgUyN.Rc1X Content-Type: application/pgp-signature; name=signature.asc Content-Disposition: attachment; filename=signature.asc -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.19 (FreeBSD) iEYEARECAAYFAlFeregACgkQBYqIVf93VJ3IRACgji6sjP+71hkOXTdzBMZveuNo X/AAoLiMGNdRDOcdecJ0b41jopZlX4kf =hdi+ -----END PGP SIGNATURE----- --Sig_/n3vvZPfS.kyonTgUyN.Rc1X-- From owner-freebsd-fs@FreeBSD.ORG Fri Apr 5 11:36:47 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 87E17BA1 for ; Fri, 5 Apr 2013 11:36:47 +0000 (UTC) (envelope-from peter.maloney@brockmann-consult.de) Received: from moutng.kundenserver.de (moutng.kundenserver.de [212.227.126.187]) by mx1.freebsd.org (Postfix) with ESMTP id 1C7AAC1 for ; Fri, 5 Apr 2013 11:36:46 +0000 (UTC) Received: from [10.3.0.26] ([141.4.215.32]) by mrelayeu.kundenserver.de (node=mrbap1) with ESMTP (Nemesis) id 0LjuTB-1UvDqA2GM5-00bk8h; Fri, 05 Apr 2013 13:36:39 +0200 Message-ID: <515EB744.5000607@brockmann-consult.de> Date: Fri, 05 Apr 2013 13:36:36 +0200 From: Peter Maloney User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/17.0 Thunderbird/17.0 MIME-Version: 1.0 To: Damien Fleuriot Subject: Re: Regarding regular zfs References: <8B0FFF01-B8CC-41C0-B0A2-58046EA4E998@my.gd> In-Reply-To: <8B0FFF01-B8CC-41C0-B0A2-58046EA4E998@my.gd> X-Enigmail-Version: 1.5.1 X-Provags-ID: V02:K0:Ww6NopP+qrPq1rVl65Iybul4SVwhphcMbVJSyIbNPR4 GzFaIakGklutqFuK2l6HGCkMC4ge/5V1z/wzneZUcO05bM55Jh uXgXxYXVMbCJfGL/MSoPUpj7PFMi8y8+BIRYzEG3Djrdl1kASJ lnRI0BzCQesTKFdnx5x/Fx3y0WW2dj8Q2EtWrsexoPcSyEcBCF lWnJTZG8wzjk4lun6GzSO7yam2rwUpCNtwfKQIbmeI84ueEb5x u0xRtBe960YgZoRrI904ksYmhqrXSNDY8La9A+Q6MWaLjeCBbm jrfRIGSFAu1W+Xd9JsTkw3+1MtsFyAeGtE1NmUOc9Ff5UG1Y9l Ct/hcD3pyNdJszr1lS/+4Dc/Qj1hDsJA8vXPXD0St Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Content-Filtered-By: Mailman/MimeDel 2.1.14 Cc: "freebsd-fs@freebsd.org" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 05 Apr 2013 11:36:47 -0000 On 2013-04-05 13:07, Damien Fleuriot wrote: > -I've implemented mbuffer for the zfs send / receive operations. With > mbuffer the sync went a lot faster, but still got the same symptoms > when the zfs receive is done, the hang / unresponsiveness returns for > 5-20 seconds > -I've upgraded to 8.3-RELEASE ( + zpool upgrade and zfs upgrade to > V28), same symptoms > -I've upgraded to 9.1-RELEASE, still same symptoms > So my question(s) to the list would be: > In my setup have I taken the use case for zfs send / receive too far > (?) 
as in, it's not meant for this kind of syncing and this often, so > there's actually nothing 'wrong'. I do the same thing on an 8.3-STABLE system, with replication every 20 minutes (compared to your 15 minutes), and it has worked flawlessly for over a year. Before that point, it was hanging often, until I realized that all hangs were from when there was more than 1 writing "zfs" command running at the same time (snapshot, send, destroy, rename, etc. but not list, get, etc.). So now *all my scripts have a common lock between them* (just a pid file like in /var/run; cured the hangs), and I don't run manual zfs commands without stopping my cronjobs. If the hang was caused by a destroy or smething during a send, I think it would usually unhang when the send is done, do the destroy or whatever else was blocking, then be unhung completely, smoothly working. In other cases, I think it would be deadlocked. NAME USED REFER USEDCHILD USEDDS USEDSNAP AVAIL MOUNTPOINT tank 38.5T 487G 37.4T 487G 635G 9.54T /tank tank/backup 7.55T 1.01T 5.08T 1.01T 1.46T 9.54T /tank/backup ... Sends are still quick with 38 T to send. The last replication run started 2013-04-05 13:20:00 +0200 and finished 2013-04-05 13:22:18 +0200. I have 234 snapshots at the moment (one per 20 min today + one daily for a few months). From owner-freebsd-fs@FreeBSD.ORG Fri Apr 5 12:02:50 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 7E3B4E11 for ; Fri, 5 Apr 2013 12:02:50 +0000 (UTC) (envelope-from zeus@ibs.dn.ua) Received: from relay.ibs.dn.ua (relay.ibs.dn.ua [91.216.196.25]) by mx1.freebsd.org (Postfix) with ESMTP id 1208923F for ; Fri, 5 Apr 2013 12:02:49 +0000 (UTC) Received: from ibs.dn.ua (relay.ibs.dn.ua [91.216.196.25]) by relay.ibs.dn.ua with ESMTP id r35C2kaX072448; Fri, 5 Apr 2013 15:02:47 +0300 (EEST) Message-ID: <20130405150246.72446@relay.ibs.dn.ua> Date: Fri, 05 Apr 2013 15:02:46 +0300 From: Zeus Panchenko To: Subject: Re: Error ZFS-8000-8A on 9.1 under VMware In-reply-to: Your message of Fri, 5 Apr 2013 12:56:40 +0200 <20130405125635.41478c64@fabiankeil.de> References: <20130405131954.60183@relay.ibs.dn.ua> <20130405125635.41478c64@fabiankeil.de> Organization: I.B.S. LLC X-Mailer: MH-E 8.3.1; GNU Mailutils 2.99.98; GNU Emacs 24.0.93 X-Face: &sReWXo3Iwtqql1[My(t1Gkx; y?KF@KF`4X+'9Cs@PtK^y%}^.>Mtbpyz6U=,Op:KPOT.uG )Nvx`=er!l?WASh7KeaGhga"1[&yz$_7ir'cVp7o%CGbJ/V)j/=]vzvvcqcZkf; JDurQG6wTg+?/xA go`}1.Ze//K; Fk&/&OoHd'[b7iGt2UO>o(YskCT[_D)kh4!yY'<&:yt+zM=A`@`~9U+P[qS:f; #9z~ Or/Bo#N-'S'!'[3Wog'ADkyMqmGDvga?WW)qd=?)`Y&k=o}>!ST\ MIME-Version: 1.0 Content-Type: text/plain Content-Transfer-Encoding: quoted-printable X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list Reply-To: Zeus Panchenko List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 05 Apr 2013 12:02:50 -0000 =2D----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Fabian Keil wrote: > Are you sure the "Permanent errors" are (or were) actually permanent? I am not ... here I rely only on `zpool status -v' output where the message appears until I delete the files, say `zpool clear' and scrub the pool ... > If they are, you should get an error message when cat'ing the > supposedly affected files to /dev/null and the files should show > up again when scrubbing the pool without deleting any files first. yes, they are ... 
here is an example with one file:

# cat /usr/obj/lib32/usr/src/lib/libsmb/libsmb_p.a > /dev/null
cat: /usr/obj/lib32/usr/src/lib/libsmb/libsmb_p.a: Input/output error

but the dmesg output shows nothing related to the issue

if I run a scrub right after one has just finished, the CKSUM values increment:

# zpool status -v
  pool: zroot
 state: ONLINE
status: One or more devices has experienced an error resulting in data
	corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
	entire pool from backup.
   see: http://illumos.org/msg/ZFS-8000-8A
  scan: scrub repaired 2.50K in 0h4m with 6 errors on Tue Mar 26 20:40:26 2013
config:

	NAME         STATE     READ WRITE CKSUM
	zroot        ONLINE       0     0     7
	  gpt/disk0  ONLINE       0     0    15

errors: Permanent errors have been detected in the following files:

        /usr/obj/usr/src/tmp/legacy/usr/bin/indxbib
        /usr/obj/usr/src/sys/HELP/modules/usr/src/sys/modules/drm/radeon/radeon.ko.debug
        /usr/obj/usr/src/kerberos5/lib/libasn1/.depend
        /usr/obj/lib32/usr/src/lib/libsmb/libsmb_p.a
        /usr/local/lib/libsicudata.a
        /usr/src/.svn/pristine/0f/0fc73fd8e2874afad8bdbf2d921dac0ef73ca859.svn-base

# zpool scrub zroot
# zpool status -v
  pool: zroot
 state: ONLINE
status: One or more devices has experienced an error resulting in data
	corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
	entire pool from backup.
   see: http://illumos.org/msg/ZFS-8000-8A
  scan: scrub repaired 0 in 0h5m with 6 errors on Fri Apr 5 11:57:14 2013
config:

	NAME         STATE     READ WRITE CKSUM
	zroot        ONLINE       0     0    13
	  gpt/disk0  ONLINE       0     0    27

errors: Permanent errors have been detected in the following files:

        /usr/obj/usr/src/tmp/legacy/usr/bin/indxbib
        /usr/obj/usr/src/sys/HELP/modules/usr/src/sys/modules/drm/radeon/radeon.ko.debug
        /usr/obj/usr/src/kerberos5/lib/libasn1/.depend
        /usr/obj/lib32/usr/src/lib/libsmb/libsmb_p.a
        /usr/local/lib/libsicudata.a
        /usr/src/.svn/pristine/0f/0fc73fd8e2874afad8bdbf2d921dac0ef73ca859.svn-base

-- 
Zeus V. Panchenko          jid:zeus@im.ibs.dn.ua
IT Dpt., I.B.S.
LLC GMT+2 (EET) =2D----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.19 (FreeBSD) iEYEARECAAYFAlFevWYACgkQr3jpPg/3oyp17ACg6UGNhbQt8czCCxFjKiR8DLZ3 NIIAoJA2gN6p3kgfOXF2L//84p25tHkb =3DVf3V =2D----END PGP SIGNATURE----- From owner-freebsd-fs@FreeBSD.ORG Fri Apr 5 12:58:06 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 7ACF44A9 for ; Fri, 5 Apr 2013 12:58:06 +0000 (UTC) (envelope-from joar.jegleim@gmail.com) Received: from mail-we0-x236.google.com (mail-we0-x236.google.com [IPv6:2a00:1450:400c:c03::236]) by mx1.freebsd.org (Postfix) with ESMTP id 15D793FB for ; Fri, 5 Apr 2013 12:58:05 +0000 (UTC) Received: by mail-we0-f182.google.com with SMTP id k14so2889665wer.27 for ; Fri, 05 Apr 2013 05:58:05 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:x-received:in-reply-to:references:date:message-id :subject:from:to:cc:content-type; bh=fw7a1NNpMxNFOK950nQlk4YohmMort5XGSjuTBYOk40=; b=tfJV+Q8G6F+G8XGUOg2AyPYT5uMXeWYY4ujJ7mvSOBsLwOrMPnDxq8MeWcHbU/qBTR 3uGtMmBtNo86ti3kPfMX70CByM7bJFioJzRejMsrkqiY65AzdYsEttwzaqffdM9KbHNX ORtCuk55ubS0Uf3lrmFbNfB7dnsoev/h4qK75xJsZpGhghwSOMemc+LnTwgs0SzJJldD YVZC/kwBW1i/vMYpoQinW5uYc9G8xPj0IhhZDxRtMfpEW4wcNat7AvRX8QISUvGmB0ox srEhlZciAzA1iyl3HghbxHwtREUYgAhCpT1ueGNXphc5m31OEJMWKMh0+SOpvkVU8mXk pVYw== MIME-Version: 1.0 X-Received: by 10.194.82.104 with SMTP id h8mr16421522wjy.3.1365166685008; Fri, 05 Apr 2013 05:58:05 -0700 (PDT) Received: by 10.216.34.9 with HTTP; Fri, 5 Apr 2013 05:58:04 -0700 (PDT) In-Reply-To: <8B0FFF01-B8CC-41C0-B0A2-58046EA4E998@my.gd> References: <8B0FFF01-B8CC-41C0-B0A2-58046EA4E998@my.gd> Date: Fri, 5 Apr 2013 14:58:04 +0200 Message-ID: Subject: Re: Regarding regular zfs From: Joar Jegleim To: Damien Fleuriot Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.14 Cc: "freebsd-fs@freebsd.org" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 05 Apr 2013 12:58:06 -0000 zpool usage is 9% :) -- ---------------------- Joar Jegleim Homepage: http://cosmicb.no Linkedin: http://no.linkedin.com/in/joarjegleim fb: http://www.facebook.com/joar.jegleim AKA: CosmicB @Freenode ---------------------- On 5 April 2013 13:07, Damien Fleuriot wrote: > > On 5 Apr 2013, at 12:17, Joar Jegleim wrote: > > > Hi FreeBSD ! > > > > I've already sent this one to questions@freebsd.org, but realised this > list > > would be a better option. > > > > So I've got this setup where we have a storage server delivering about > > 2 million jpeg's as a backend for a website ( it's ~1TB of data) > > The storage server is running zfs and every 15 minutes it does a zfs > > send to a 'slave', and our proxy will fail over to the slave if the > > main storage server goes down . > > I've got this script that initially zfs send's a whole zfs volume, and > > for every send after that only sends the diff . So after the initial zfs > > send, the diff's usually take less than a minute to send over. > > > > I've had increasing problems on the 'slave', it seem to grind to a > > halt for anything between 5-20 seconds after every zfs receive . > Everything > > on the server halts / hangs completely. 
> > > > I've had a couple go's on trying to solve / figure out what's > > happening without luck, and this 3rd time I've invested even more time > > on the problem . > > > > To sum it up: > > -Server was initially on 8.2-RELEASE > > -I've set some sysctl variables such as: > > > > # 16GB arc_max ( server got 30GB of ram, but had a couple 'freeze' > > situations, suspect zfs.arc ate too much memory) > > vfs.zfs.arc_max=17179869184 > > > > # 8.2 default to 30 here, setting it to 5 which is default from 8.3 and > > onwards > > vfs.zfs.txg.timeout="5" > > > > # Set TXG write limit to a lower threshold. This helps "level out" > > # the throughput rate (see "zpool iostat"). A value of 256MB works well > > # for systems with 4 GB of RAM, while 1 GB works well for us w/ 8 GB on > > # disks which have 64 MB cache. <
> > > # NOTE: in 'vfs.zfs.txg.write_limit_override'. > > #vfs.zfs.txg.write_limit_override=1073741824 # for 8.2 > > vfs.zfs.write_limit_override=1073741824 # for 8.3 and above > > > > -I've implemented mbuffer for the zfs send / receive operations. With > > mbuffer the sync went a lot faster, but still got the same symptoms > > when the zfs receive is done, the hang / unresponsiveness returns for > > 5-20 seconds > > -I've upgraded to 8.3-RELEASE ( + zpool upgrade and zfs upgrade to > > V28), same symptoms > > -I've upgraded to 9.1-RELEASE, still same symptoms > > > > The period where the server is unresponsive after a zfs receive, I > > suspected it would correlate with the amount of data being sent, but > > even if there is only a couple MB's data the hang / unresponsiveness > > is still substantial . > > > > I suspect it may have something to do with the zfs volume being sent > > is mount'ed on the slave, and I'm also doing the backups from the > > slave, which means a lot of the time the backup server is rsyncing the > > zfs volume being updated. > > I've noticed that the unresponsiveness / hang situations occur while > > the backupserver is rsync'ing from the zfs volume being updated, when > > the backupserver is 'done' and nothing is working with files in the > > zfs volume being updated i hardly notice any of the symptoms (mabye > > just a minor lag for much less than a second, hardly noticeable) . > > > > So my question(s) to the list would be: > > In my setup have I taken the use case for zfs send / receive too far > > (?) as in, it's not meant for this kind of syncing and this often, so > > there's actually nothing 'wrong'. > > > > -- > > ---------------------- > > Joar Jegleim > > > > Quick and dirty reply, what's your pool usage % ? > > >75-80% an performance takes a dive. > > Let's just make sure you're not there yet. 
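For reference, the pool occupancy Damien asks about is the CAP column of
zpool list; the output below is illustrative only, not taken from Joar's pool:

# zpool list
NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
tank   2.72T   250G  2.47T     9%  1.00x  ONLINE  -

# zpool iostat -v 5     # per-vdev activity while a zfs receive runs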
> From owner-freebsd-fs@FreeBSD.ORG Fri Apr 5 13:02:14 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 1C9BE594 for ; Fri, 5 Apr 2013 13:02:14 +0000 (UTC) (envelope-from joar.jegleim@gmail.com) Received: from mail-wi0-x229.google.com (mail-wi0-x229.google.com [IPv6:2a00:1450:400c:c05::229]) by mx1.freebsd.org (Postfix) with ESMTP id AB0BD5FB for ; Fri, 5 Apr 2013 13:02:13 +0000 (UTC) Received: by mail-wi0-f169.google.com with SMTP id c10so1763389wiw.2 for ; Fri, 05 Apr 2013 06:02:12 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:x-received:in-reply-to:references:date:message-id :subject:from:to:cc:content-type; bh=eqo+S71j2X/nCcHkciTRMzg+yLNKRi6pml4G7XCb1OQ=; b=J/QbZ0GRk6nhMweo7iCV3VIUBm51xYbIh/dP9n8CvOzIEOYHHfR0NLIw4wV5aytuAR sn3beT0X0G0Hu7YeWzgZsxAvSJX+SgH00MNqKGpgXFvihnzoqErkm1nO1/xc7BHXVy+J 0+gJL1sea+WH1vmVIatWgBPF4f9VtCFcirldL2O6uHNskFvR/iBEkgcAmCdNvaYSafIw XkUPlDXFl65zdRpc8+TKncY00OAB0/MA4AI0Ow7W2c5HvvaMdpwomUIUvDIJyjPI3Qp+ GT6abY4no2o/vVgEgwNKhYMnP4eycVY0l+V2dqADirdZUxjoBqISISp9ZgEUrlp+swCq +pTQ== MIME-Version: 1.0 X-Received: by 10.180.92.229 with SMTP id cp5mr4079652wib.20.1365166932680; Fri, 05 Apr 2013 06:02:12 -0700 (PDT) Received: by 10.216.34.9 with HTTP; Fri, 5 Apr 2013 06:02:12 -0700 (PDT) In-Reply-To: <515EB744.5000607@brockmann-consult.de> References: <8B0FFF01-B8CC-41C0-B0A2-58046EA4E998@my.gd> <515EB744.5000607@brockmann-consult.de> Date: Fri, 5 Apr 2013 15:02:12 +0200 Message-ID: Subject: Re: Regarding regular zfs From: Joar Jegleim To: Peter Maloney Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.14 Cc: "freebsd-fs@freebsd.org" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 05 Apr 2013 13:02:14 -0000 You make some interesting points . I don't _think_ the script 'causes more than 1 zfs write at a time, and I'm sure 'nothing else' is doing that neither . But I'm gonna check that out because it does sound like a logical explanation. I'm wondering if the rsync from the receiving server (that is: the backup server is doing rsync from the zfs receive server) could 'cause the same problem, it's only reading though ... -- ---------------------- Joar Jegleim Homepage: http://cosmicb.no Linkedin: http://no.linkedin.com/in/joarjegleim fb: http://www.facebook.com/joar.jegleim AKA: CosmicB @Freenode ---------------------- On 5 April 2013 13:36, Peter Maloney wrote: > On 2013-04-05 13:07, Damien Fleuriot wrote: > > -I've implemented mbuffer for the zfs send / receive operations. With > mbuffer the sync went a lot faster, but still got the same symptoms > when the zfs receive is done, the hang / unresponsiveness returns for > 5-20 seconds > -I've upgraded to 8.3-RELEASE ( + zpool upgrade and zfs upgrade to > V28), same symptoms > -I've upgraded to 9.1-RELEASE, still same symptoms > > > So my question(s) to the list would be: > In my setup have I taken the use case for zfs send / receive too far > (?) as in, it's not meant for this kind of syncing and this often, so > there's actually nothing 'wrong'. > > > I do the same thing on an 8.3-STABLE system, with replication every 20 > minutes (compared to your 15 minutes), and it has worked flawlessly for > over a year. 
Before that point, it was hanging often, until I realized that > all hangs were from when there was more than 1 writing "zfs" command > running at the same time (snapshot, send, destroy, rename, etc. but not > list, get, etc.). So now *all my scripts have a common lock between them*(just a pid file like in /var/run; cured the hangs), and I don't run manual > zfs commands without stopping my cronjobs. If the hang was caused by a > destroy or smething during a send, I think it would usually unhang when the > send is done, do the destroy or whatever else was blocking, then be unhung > completely, smoothly working. In other cases, I think it would be > deadlocked. > > > NAME USED REFER USEDCHILD USEDDS USEDSNAP > AVAIL MOUNTPOINT > tank 38.5T 487G 37.4T 487G 635G > 9.54T /tank > tank/backup 7.55T 1.01T 5.08T 1.01T 1.46T > 9.54T /tank/backup > ... > > Sends are still quick with 38 T to send. The last replication run started > 2013-04-05 13:20:00 +0200 and finished 2013-04-05 13:22:18 +0200. I have > 234 snapshots at the moment (one per 20 min today + one daily for a few > months). > > From owner-freebsd-fs@FreeBSD.ORG Fri Apr 5 14:07:37 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id BB0F7271 for ; Fri, 5 Apr 2013 14:07:37 +0000 (UTC) (envelope-from ronald-freebsd8@klop.yi.org) Received: from smarthost1.greenhost.nl (smarthost1.greenhost.nl [195.190.28.78]) by mx1.freebsd.org (Postfix) with ESMTP id 7DFA289F for ; Fri, 5 Apr 2013 14:07:37 +0000 (UTC) Received: from smtp.greenhost.nl ([213.108.104.138]) by smarthost1.greenhost.nl with esmtps (TLS1.0:RSA_AES_256_CBC_SHA1:32) (Exim 4.69) (envelope-from ) id 1UO7Ie-0003Gk-7i; Fri, 05 Apr 2013 16:07:29 +0200 Received: from [81.21.138.17] (helo=ronaldradial.versatec.local) by smtp.greenhost.nl with esmtpsa (TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.72) (envelope-from ) id 1UO7Ie-0007tK-0b; Fri, 05 Apr 2013 16:07:28 +0200 Content-Type: text/plain; charset=us-ascii; format=flowed; delsp=yes To: "Peter Maloney" , "Joar Jegleim" Subject: Re: Regarding regular zfs References: <8B0FFF01-B8CC-41C0-B0A2-58046EA4E998@my.gd> <515EB744.5000607@brockmann-consult.de> Date: Fri, 05 Apr 2013 16:07:26 +0200 MIME-Version: 1.0 Content-Transfer-Encoding: 8bit From: "Ronald Klop" Message-ID: In-Reply-To: User-Agent: Opera Mail/12.15 (Win32) X-Virus-Scanned: by clamav at smarthost1.samage.net X-Spam-Level: / X-Spam-Score: -0.5 X-Spam-Status: No, score=-0.5 required=5.0 tests=BAYES_05 autolearn=disabled version=3.3.1 X-Scan-Signature: 71684ae416b12bc74806129cb02de027 Cc: "freebsd-fs@freebsd.org" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 05 Apr 2013 14:07:37 -0000 On Fri, 05 Apr 2013 15:02:12 +0200, Joar Jegleim wrote: > You make some interesting points . > I don't _think_ the script 'causes more than 1 zfs write at a time, and > I'm > sure 'nothing else' is doing that neither . But I'm gonna check that out > because it does sound like a logical explanation. > I'm wondering if the rsync from the receiving server (that is: the backup > server is doing rsync from the zfs receive server) could 'cause the same > problem, it's only reading though ... > > > Do you run the rsync from a snapshot or from the 'live' filesystem? The live one changes during zfs receive. 
I don't know if that has anything to do with your problem, but rsync from a snapshot gives a consistent backup anyway. BTW: It is probably more simple for you to test if the rsync is related to the problem, than for other people to theorize about it here. Ronald. From owner-freebsd-fs@FreeBSD.ORG Fri Apr 5 14:42:41 2013 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 4040694F; Fri, 5 Apr 2013 14:42:41 +0000 (UTC) (envelope-from jayb@braeburn.org) Received: from nbfkord-smmo06.seg.att.com (nbfkord-smmo06.seg.att.com [209.65.160.94]) by mx1.freebsd.org (Postfix) with ESMTP id DA657A72; Fri, 5 Apr 2013 14:42:40 +0000 (UTC) Received: from unknown [144.160.20.145] (EHLO nbfkord-smmo06.seg.att.com) by nbfkord-smmo06.seg.att.com(mxl_mta-6.15.0-1) with ESMTP id 0e2ee515.2aaaf9a28940.122692.00-547.338321.nbfkord-smmo06.seg.att.com (envelope-from ); Fri, 05 Apr 2013 14:42:40 +0000 (UTC) X-MXL-Hash: 515ee2e05b3c0712-c82793b61271cd8ccf89d2f2cbc647dc9a3f93b2 Received: from unknown [144.160.20.145] (EHLO mlpd192.enaf.sfdc.sbc.com) by nbfkord-smmo06.seg.att.com(mxl_mta-6.15.0-1) over TLS secured channel with ESMTP id dc2ee515.0.122525.00-425.337802.nbfkord-smmo06.seg.att.com (envelope-from ); Fri, 05 Apr 2013 14:42:22 +0000 (UTC) X-MXL-Hash: 515ee2ce72c8190a-f84b9012b400db2181a2180e0377929a3e178123 Received: from enaf.sfdc.sbc.com (localhost.localdomain [127.0.0.1]) by mlpd192.enaf.sfdc.sbc.com (8.14.5/8.14.5) with ESMTP id r35EgLDx025777; Fri, 5 Apr 2013 10:42:21 -0400 Received: from alpi133.aldc.att.com (alpi133.aldc.att.com [130.8.217.3]) by mlpd192.enaf.sfdc.sbc.com (8.14.5/8.14.5) with ESMTP id r35Eg8A6025502 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Fri, 5 Apr 2013 10:42:14 -0400 Received: from alpi153.aldc.att.com (alpi153.aldc.att.com [130.8.42.31]) by alpi133.aldc.att.com (RSA Interceptor); Fri, 5 Apr 2013 15:42:01 +0100 Received: from aldc.att.com (localhost [127.0.0.1]) by alpi153.aldc.att.com (8.14.5/8.14.5) with ESMTP id r35Eg1tT019454; Fri, 5 Apr 2013 10:42:01 -0400 Received: from oz.mt.att.com (oz.mt.att.com [135.16.165.23]) by alpi153.aldc.att.com (8.14.5/8.14.5) with ESMTP id r35Efv1Y019224; Fri, 5 Apr 2013 10:41:58 -0400 Received: by oz.mt.att.com (Postfix, from userid 1000) id 297B7680B94; Fri, 5 Apr 2013 10:41:57 -0400 (EDT) X-Mailer: emacs 23.3.1 (via feedmail 8 I); VM 8.2.0b under 23.3.1 (i686-pc-linux-gnu) Message-ID: <20830.58036.422569.831143@oz.mt.att.com> Date: Fri, 5 Apr 2013 10:41:56 -0400 MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit From: Jay Borkenhagen To: Darren Pilgrim Subject: Re: mounting failed with error 2 In-Reply-To: <515E0696.2020901@bluerosetech.com> References: <20825.59038.104304.161698@oz.mt.att.com> <515DB070.1090803@bluerosetech.com> <20829.52349.314652.424391@oz.mt.att.com> <515E0696.2020901@bluerosetech.com> X-GPG-Fingerprint: DDDB 542E D988 94D0 82D3 D198 7DED 6648 2308 D3C0 X-RSA-Inspected: yes X-RSA-Classifications: public Cc: fs@freebsd.org, Niclas Zeising X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list Reply-To: Jay Borkenhagen List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 05 Apr 2013 14:42:41 -0000 Darren Pilgrim writes: > On 4/4/2013 11:54 AM, Jay Borkenhagen wrote: > > Darren Pilgrim writes: > > > Reboot using the install disk, go to the Live CD. 
Import the pool using > > > an altroot and a cachefile: > > > > > > zpool import -c /tmp/zpool.cache -o altroot=/mnt zroot > > > > > > Copy /tmp/zpool.cache to /mnt/ROOT/boot/zfs/zpool.cache, then reboot > > > *without* exporting the pool. > > > > Thanks, Darren. > > > > However, when I try that I get this: > > > > --------------- > > root@:/root # zpool import -c /tmp/zpool.cache -o altroot=/mnt zroot > > failed to open cache file: No such file or directory > > cannot import 'zroot': no such pool available > > root@:/root # > > --------------- > > Sorry, wrong cachefile option, try: > > zpool import -o cachefile=/tmp/zpool.cache -o altroot=/mnt zroot OK: --------------- root@:/root # zpool status ZFS filesystem version 5 ZFS storage pool version 28 no pools available root@:/root # zpool import -o cachefile=/tmp/zpool.cache -o altroot=/mnt zroot root@:/root # root@:/root # cp /tmp/zpool.cache /mnt/boot/zfs/zpool.cache root@:/root # root@:/root # shutdown -r now --------------- ... and now the system boots from the GPT/ZFS, and everything looks great. Thanks!! I see that the previous version of Niclas's instructions (not sure if those are still available on the web, but I have hardcopy) included exactly that 'zpool import' command immediately following 'zpool export zroot' which is now the final pre-reboot step. I guess that statement should not have been removed from the procedure. Thanks again! Jay B. From owner-freebsd-fs@FreeBSD.ORG Fri Apr 5 14:49:18 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 73E2FAC9; Fri, 5 Apr 2013 14:49:18 +0000 (UTC) (envelope-from ler@lerctr.org) Received: from thebighonker.lerctr.org (lrosenman-1-pt.tunnel.tserv8.dal1.ipv6.he.net [IPv6:2001:470:1f0e:3ad::2]) by mx1.freebsd.org (Postfix) with ESMTP id 473A3AD7; Fri, 5 Apr 2013 14:49:18 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lerctr.org; s=lerami; h=Message-ID:References:In-Reply-To:Subject:Cc:To:From:Date:Content-Transfer-Encoding:Content-Type:MIME-Version; bh=iiKsWe6Jzdq0S/1BI7w/ICasfJkuVF8QqyD+Y8DPfKA=; b=io3Tm6ROfPdAPJpWdonH6NjyObnaL0jlzXxyrdsocoxWMzRqBgutjCn1rFLjrYUPflwti38cwFFLNvXC7C7RSkPrn9xPnMgdOdXPoCLAhbkfg8C8ZSkWQ8pip1D2PXcfKSgLlwW9vlrCP4m7DKrtce6DyhbiN8oIpdKXrCH+VcA=; Received: from localhost.lerctr.org ([127.0.0.1]:55262 helo=webmail.lerctr.org) by thebighonker.lerctr.org with esmtpa (Exim 4.80.1 (FreeBSD)) (envelope-from ) id 1UO7x7-0001NS-8k; Fri, 05 Apr 2013 09:49:17 -0500 Received: from [32.97.110.60] by webmail.lerctr.org with HTTP (HTTP/1.1 POST); Fri, 05 Apr 2013 09:49:17 -0500 MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 7bit Date: Fri, 05 Apr 2013 09:49:17 -0500 From: Larry Rosenman To: Martin Matuska Subject: Re: [CRASH] ZFS recv (fwd)/CURRENT In-Reply-To: <515B4CFA.9080706@FreeBSD.org> References: <5159EF29.6000503@FreeBSD.org> <515B4CFA.9080706@FreeBSD.org> Message-ID: <9bc083a21a73839a0932514ea4f48d0d@webmail.lerctr.org> X-Sender: ler@lerctr.org User-Agent: Roundcube Webmail/0.8.5 X-Spam-Score: -5.3 (-----) X-LERCTR-Spam-Score: -5.3 (-----) X-Spam-Report: SpamScore (-5.3/5.0) ALL_TRUSTED=-1, BAYES_00=-1.9, RP_MATCHES_RCVD=-2.373 X-LERCTR-Spam-Report: SpamScore (-5.3/5.0) ALL_TRUSTED=-1, BAYES_00=-1.9, RP_MATCHES_RCVD=-2.373 Cc: freebsd-fs@freebsd.org, freebsd-current@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems 
List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 05 Apr 2013 14:49:18 -0000 On 2013-04-02 16:26, Martin Matuska wrote: > On 1. 4. 2013 22:33, Martin Matuska wrote: >> This error seems to be limited to sending deduplicated streams. Does >> sending without "-D" work ok? This might be a vendor error as well. >> >> On 1.4.2013 20:05, Larry Rosenman wrote: >>> Re-Sending. Any ideas, guys/gals? >>> >>> This really gets in my way. >>> > This may be also related to: > http://www.freebsd.org/cgi/query-pr.cgi?pr=176978 Taking off -D does get around the panic. What information can I provide to help fix it? I *CAN* provide access to both sides via SSH. -- Larry Rosenman http://www.lerctr.org/~ler Phone: +1 214-642-9640 (c) E-Mail: ler@lerctr.org US Mail: 430 Valona Loop, Round Rock, TX 78681-3893 From owner-freebsd-fs@FreeBSD.ORG Fri Apr 5 14:59:41 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 04A7BE35 for ; Fri, 5 Apr 2013 14:59:41 +0000 (UTC) (envelope-from peter.maloney@brockmann-consult.de) Received: from moutng.kundenserver.de (moutng.kundenserver.de [212.227.126.187]) by mx1.freebsd.org (Postfix) with ESMTP id 7F1DFB55 for ; Fri, 5 Apr 2013 14:59:40 +0000 (UTC) Received: from [10.3.0.26] ([141.4.215.32]) by mrelayeu.kundenserver.de (node=mreu2) with ESMTP (Nemesis) id 0LwVC7-1UgKgv3TAt-017kz2; Fri, 05 Apr 2013 16:59:38 +0200 Message-ID: <515EE6D9.8050605@brockmann-consult.de> Date: Fri, 05 Apr 2013 16:59:37 +0200 From: Peter Maloney User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/17.0 Thunderbird/17.0 MIME-Version: 1.0 To: Ronald Klop Subject: Re: Regarding regular zfs References: <8B0FFF01-B8CC-41C0-B0A2-58046EA4E998@my.gd> <515EB744.5000607@brockmann-consult.de> In-Reply-To: X-Enigmail-Version: 1.5.1 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Provags-ID: V02:K0:DpPcoJf3XUZrn3D4ybLZgywMOVtj+xtLs/Rn2TIoDfW 1Y1KB4Slp+BNsO7MHahIMsBhefmqC+qfoJQIYnBoDseA15qCU+ 5twiEUkGjT9xo0DoXBda3E6rUA8eiL71vw9VobqFWwGZXSrE09 A+abL6PDbc29zqGKmgcqpojiFKUTt6QYnUiIYEwadlKriRMG4f uwybz7UKkefg93AXu7jOKGhUAJQstcJEP9UJfSY6nBVXHOr2qR 5LMbjkjTaOIJi2JxTXzGhk3TONxXp6gvxbMISYmJoR+JV9oCE7 ocuspQvzDIlmnGDBLDQqSpjttTdaO8FUWcmcCH9Wy4EONK3P5h Fa8TqjQGzq+dkt8ndJCa7d3WnAu5bMTUceYwbUZJO Cc: "freebsd-fs@freebsd.org" X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 05 Apr 2013 14:59:41 -0000 On 2013-04-05 16:07, Ronald Klop wrote: > On Fri, 05 Apr 2013 15:02:12 +0200, Joar Jegleim > wrote: > >> You make some interesting points . >> I don't _think_ the script 'causes more than 1 zfs write at a time, >> and I'm >> sure 'nothing else' is doing that neither . But I'm gonna check that out >> because it does sound like a logical explanation. >> I'm wondering if the rsync from the receiving server (that is: the >> backup >> server is doing rsync from the zfs receive server) could 'cause the same >> problem, it's only reading though ... >> >> >> > > Do you run the rsync from a snapshot or from the 'live' filesystem? > The live one changes during zfs receive. I don't know if that has > anything to do with your problem, but rsync from a snapshot gives a > consistent backup anyway. 
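To make the two suggestions concrete, here is a sketch of a snapshot-based
rsync with the writing zfs commands serialized through one lock file, in the
spirit of the common-lock setup Peter describes; the dataset, host, snapshot
and lock names are made up for the example:

lockf -t 60 /var/run/zfs_repl.lock zfs snapshot tank/jpegs@rsync
# the .zfs directory is reachable even when snapdir=hidden
rsync -a /tank/jpegs/.zfs/snapshot/rsync/ backuphost:/backup/jpegs/
lockf -t 60 /var/run/zfs_repl.lock zfs destroy tank/jpegs@rsync

Only the snapshot and destroy take the lock; the rsync reads from the
snapshot path, which stays stable while the live dataset keeps receiving.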
> > BTW: It is probably simpler for you to test if the rsync is > related to the problem, than for other people to theorize about it here. > > Ronald. Also I don't believe using rsync either on the snapshot or the file system (read or write) should be related in any way to the hang I described. I let my cronjob rsync backups run wild without issues. When I say zfs commands, I don't mean random other commands on the zfs file system, but only the "zfs" command with a writing subcommand, such as destroy, recv, or snapshot, which obviously need some locking. The send subcommand also locks some things, such as preventing the snapshot you are sending from being removed while you send it. Next time it hangs, just run something like: ps axl | grep zfs. If you see 2 zfs commands running at once that aren't parent/child of each other, then you may have the same problem I described. If not (such as if you see your send + rsync at the same time), then it is something else. From owner-freebsd-fs@FreeBSD.ORG Fri Apr 5 15:46:09 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 872DBA8F for ; Fri, 5 Apr 2013 15:46:09 +0000 (UTC) (envelope-from mcdouga9@egr.msu.edu) Received: from mail.egr.msu.edu (hill.egr.msu.edu [35.9.37.162]) by mx1.freebsd.org (Postfix) with ESMTP id 428EAF34 for ; Fri, 5 Apr 2013 15:46:08 +0000 (UTC) Received: from hill (localhost [127.0.0.1]) by mail.egr.msu.edu (Postfix) with ESMTP id 1B69942444 for ; Fri, 5 Apr 2013 11:46:02 -0400 (EDT) X-Virus-Scanned: amavisd-new at egr.msu.edu Received: from mail.egr.msu.edu ([127.0.0.1]) by hill (hill.egr.msu.edu [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id gD-h4wyl0t1R for ; Fri, 5 Apr 2013 11:46:01 -0400 (EDT) Received: from EGR authenticated sender Message-ID: <515EF1B9.1010505@egr.msu.edu> Date: Fri, 05 Apr 2013 11:46:01 -0400 From: Adam McDougall User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:17.0) Gecko/20130309 Thunderbird/17.0.4 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: Re: Regarding regular zfs References: In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 05 Apr 2013 15:46:09 -0000 On 04/05/13 06:17, Joar Jegleim wrote: > Hi FreeBSD ! > > I've already sent this one to questions@freebsd.org, but realised this list > would be a better option. > > So I've got this setup where we have a storage server delivering about > 2 million jpeg's as a backend for a website ( it's ~1TB of data) > The storage server is running zfs and every 15 minutes it does a zfs > send to a 'slave', and our proxy will fail over to the slave if the > main storage server goes down . > I've got this script that initially zfs send's a whole zfs volume, and > for every send after that only sends the diff . So after the initial zfs > send, the diff's usually take less than a minute to send over. > > I've had increasing problems on the 'slave', it seem to grind to a > halt for anything between 5-20 seconds after every zfs receive . Everything > on the server halts / hangs completely. > > I've had a couple of goes at trying to solve / figure out what's > happening without luck, and this 3rd time I've invested even more time > on the problem .
> > To sum it up: > -Server was initially on 8.2-RELEASE > -I've set some sysctl variables such as: > > # 16GB arc_max ( server got 30GB of ram, but had a couple 'freeze' > situations, suspect zfs.arc ate too much memory) > vfs.zfs.arc_max=17179869184 What is vm.kmem_size? I suggest setting it to 2x physical ram (60G) but higher won't hurt because recent-ish kernels will properly cap it at 2x. This will allow more space in kmem to reduce ARC fragmentation in memory which can cause stalls or panics. Also, I would set vfs.zfs.arc_max almost as high as your physical memory (or not at all) unless you need to reserve some for non-zfs usage. With ZFS from the last year or two, I generally don't adjust anything other than making sure vm.kmem_size is sufficiently high on all systems, and on some systems I use arc_max. I rarely set arc_min, and then only to encourage ARC use. By avoiding ARC fragmentation, I generally feel more ARC is better than less. Also keep an eye on kstat.zfs.misc.arcstats.size when you run zfs recv to see how it behaves. > > # 8.2 defaults to 30 here, setting it to 5 which is the default from 8.3 and > onwards > vfs.zfs.txg.timeout="5" > > # Set TXG write limit to a lower threshold. This helps "level out" > # the throughput rate (see "zpool iostat"). A value of 256MB works well > # for systems with 4 GB of RAM, while 1 GB works well for us w/ 8 GB on > # disks which have 64 MB cache.
> > # NOTE: in 8.2 this tunable was named vfs.zfs.txg.write_limit_override > #vfs.zfs.txg.write_limit_override=1073741824 # for 8.2 > vfs.zfs.write_limit_override=1073741824 # for 8.3 and above > > -I've implemented mbuffer for the zfs send / receive operations. With > mbuffer the sync went a lot faster, but still got the same symptoms > when the zfs receive is done, the hang / unresponsiveness returns for > 5-20 seconds > -I've upgraded to 8.3-RELEASE ( + zpool upgrade and zfs upgrade to > V28), same symptoms > -I've upgraded to 9.1-RELEASE, still same symptoms > I think it is good to be on 9.x, which gets ZFS fixes before 8.x does. From owner-freebsd-fs@FreeBSD.ORG Fri Apr 5 16:29:18 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id E76F937D; Fri, 5 Apr 2013 16:29:18 +0000 (UTC) (envelope-from mm@FreeBSD.org) Received: from mail.vx.sk (mail.vx.sk [IPv6:2a01:4f8:150:6101::4]) by mx1.freebsd.org (Postfix) with ESMTP id A991E197; Fri, 5 Apr 2013 16:29:18 +0000 (UTC) Received: from core.vx.sk (localhost [127.0.0.2]) by mail.vx.sk (Postfix) with ESMTP id 37F3D36296; Fri, 5 Apr 2013 18:29:17 +0200 (CEST) X-Virus-Scanned: amavisd-new at mail.vx.sk Received: from mail.vx.sk by core.vx.sk (amavisd-new, unix socket) with LMTP id R7eDwxJXQqov; Fri, 5 Apr 2013 18:29:13 +0200 (CEST) Received: from [10.9.8.1] (chello085216226145.chello.sk [85.216.226.145]) by mail.vx.sk (Postfix) with ESMTPSA id 3FA9B36284; Fri, 5 Apr 2013 18:29:11 +0200 (CEST) Message-ID: <515EFBD8.50900@FreeBSD.org> Date: Fri, 05 Apr 2013 18:29:12 +0200 From: Martin Matuska User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:17.0) Gecko/20130328 Thunderbird/17.0.5 MIME-Version: 1.0 To: Larry Rosenman Subject: Re: [CRASH] ZFS recv (fwd)/CURRENT References: <5159EF29.6000503@FreeBSD.org> <515B4CFA.9080706@FreeBSD.org> <9bc083a21a73839a0932514ea4f48d0d@webmail.lerctr.org> In-Reply-To: <9bc083a21a73839a0932514ea4f48d0d@webmail.lerctr.org> X-Enigmail-Version: 1.5.1 Content-Type: multipart/mixed; boundary="------------030409060808020909030202" Cc: freebsd-fs@freebsd.org, freebsd-current@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 05 Apr 2013 16:29:19 -0000 This is a multi-part message in MIME format. --------------030409060808020909030202 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit You can use the attached patch, it should fix the problem. We are still waiting for code review and a final solution by illumos, maybe I will commit this preliminary (or final) fix to head. mm On 5.4.2013 16:49, Larry Rosenman wrote: > On 2013-04-02 16:26, Martin Matuska wrote: >> On 1. 4. 2013 22:33, Martin Matuska wrote: >>> This error seems to be limited to sending deduplicated streams. Does >>> sending without "-D" work ok? This might be a vendor error as well. >>> >>> On 1.4.2013 20:05, Larry Rosenman wrote: >>>> Re-Sending. Any ideas, guys/gals? >>>> >>>> This really gets in my way. >>>> >> This may be also related to: >> http://www.freebsd.org/cgi/query-pr.cgi?pr=176978 > Taking off -D does get around the panic. > > What information can I provide to help fix it? > > I *CAN* provide access to both sides via SSH.
> > -- Martin Matuska FreeBSD committer http://blog.vx.sk --------------030409060808020909030202 Content-Type: text/plain; charset=windows-1250; name="dmu_send.c.patch" Content-Transfer-Encoding: 7bit Content-Disposition: attachment; filename="dmu_send.c.patch" Index: sys/cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_send.c =================================================================== --- sys/cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_send.c (revision 249165) +++ sys/cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_send.c (working copy) @@ -990,6 +990,7 @@ free_guid_map_onexit(void *arg) while ((gmep = avl_destroy_nodes(ca, &cookie)) != NULL) { dsl_dataset_long_rele(gmep->gme_ds, gmep); + dsl_dataset_rele(gmep->gme_ds, FTAG); kmem_free(gmep, sizeof (guid_map_entry_t)); } avl_destroy(ca); @@ -1698,7 +1699,6 @@ add_ds_to_guidmap(const char *name, avl_tree_t *gu gmep->gme_ds = snapds; avl_add(guid_map, gmep); dsl_dataset_long_hold(snapds, gmep); - dsl_dataset_rele(snapds, FTAG); } dsl_pool_rele(dp, FTAG); --------------030409060808020909030202-- From owner-freebsd-fs@FreeBSD.ORG Fri Apr 5 16:31:05 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 59D48520 for ; Fri, 5 Apr 2013 16:31:05 +0000 (UTC) (envelope-from mcdouga9@egr.msu.edu) Received: from mail.egr.msu.edu (dauterive.egr.msu.edu [35.9.37.168]) by mx1.freebsd.org (Postfix) with ESMTP id 34CD51C5 for ; Fri, 5 Apr 2013 16:31:04 +0000 (UTC) Received: from dauterive (localhost [127.0.0.1]) by mail.egr.msu.edu (Postfix) with ESMTP id EB8114016B for ; Fri, 5 Apr 2013 12:22:53 -0400 (EDT) X-Virus-Scanned: amavisd-new at egr.msu.edu Received: from mail.egr.msu.edu ([127.0.0.1]) by dauterive (dauterive.egr.msu.edu [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id ba6FBzmSdb60 for ; Fri, 5 Apr 2013 12:22:53 -0400 (EDT) Received: from EGR authenticated sender Message-ID: <515EFA56.5030008@egr.msu.edu> Date: Fri, 05 Apr 2013 12:22:46 -0400 From: Adam McDougall User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:17.0) Gecko/20130309 Thunderbird/17.0.4 MIME-Version: 1.0 To: freebsd-fs@freebsd.org Subject: Re: mounting failed with error 2 References: <20825.59038.104304.161698@oz.mt.att.com> <515DB070.1090803@bluerosetech.com> <20829.52349.314652.424391@oz.mt.att.com> <515E0696.2020901@bluerosetech.com> <20830.58036.422569.831143@oz.mt.att.com> In-Reply-To: <20830.58036.422569.831143@oz.mt.att.com> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 05 Apr 2013 16:31:05 -0000 On 04/05/13 10:41, Jay Borkenhagen wrote: > Darren Pilgrim writes: > > On 4/4/2013 11:54 AM, Jay Borkenhagen wrote: > > > Darren Pilgrim writes: > > > > Reboot using the install disk, go to the Live CD. Import the pool using > > > > an altroot and a cachefile: > > OK: > > --------------- > root@:/root # zpool status > ZFS filesystem version 5 > ZFS storage pool version 28 > no pools available > root@:/root # zpool import -o cachefile=/tmp/zpool.cache -o altroot=/mnt zroot > root@:/root # > root@:/root # cp /tmp/zpool.cache /mnt/boot/zfs/zpool.cache > root@:/root # > root@:/root # shutdown -r now > > --------------- > > ... and now the system boots from the GPT/ZFS, and everything looks > great. > > Thanks!! 
> > I see that the previous version of Niclas's instructions (not sure if > those are still available on the web, but I have hardcopy) included > exactly that 'zpool import' command immediately following 'zpool > export zroot' which is now the final pre-reboot step. I guess that > statement should not have been removed from the procedure. > > Thanks again! > > Jay B. > This is sage advice. A number of times I even thought I set it up right initially but resorted to this "repair" method to get booting working. It gets even easier in newer releases. In 9.x as of Sat Nov 24 12:37:37 2012 SVN rev 243480, zpool.cache is not required at all unless you have non-boot pools. In 9.x as of Fri Jun 29 10:22:20 2012 SVN rev 237767 you don't need vfs.root.mountfrom in /boot/loader.conf. Both of these changes are in 9.1-release and make booting less complicated, more resilient and multibooting more flexible. Thanks to Andriy Gapon! From owner-freebsd-fs@FreeBSD.ORG Fri Apr 5 16:31:20 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 62955595; Fri, 5 Apr 2013 16:31:20 +0000 (UTC) (envelope-from ler@lerctr.org) Received: from thebighonker.lerctr.org (lrosenman-1-pt.tunnel.tserv8.dal1.ipv6.he.net [IPv6:2001:470:1f0e:3ad::2]) by mx1.freebsd.org (Postfix) with ESMTP id 3539D1CD; Fri, 5 Apr 2013 16:31:20 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lerctr.org; s=lerami; h=Message-ID:References:In-Reply-To:Subject:Cc:To:From:Date:Content-Transfer-Encoding:Content-Type:MIME-Version; bh=fh96BkWZySJA6s26iCZjXoEy0Ket6ofqyiY8OA+SNGo=; b=uye712TN/sJXaDPk5t/hxuLx/lX00hrhZeazk92AVeqwxPQ+S1UitfgKL8FBYsgWwss2FV8V0k7uxTnwXu+72zRxa4mUexDWTTjr36rIgpP8urEa0kwB+OFE/rw7tVg6YGoAU885na+ZsiYfKG4lyV/OTADzSjZpca09Wi3Dey8=; Received: from localhost.lerctr.org ([127.0.0.1]:56915 helo=webmail.lerctr.org) by thebighonker.lerctr.org with esmtpa (Exim 4.80.1 (FreeBSD)) (envelope-from ) id 1UO9Xq-0002S2-Il; Fri, 05 Apr 2013 11:31:19 -0500 Received: from [32.97.110.60] by webmail.lerctr.org with HTTP (HTTP/1.1 POST); Fri, 05 Apr 2013 11:31:18 -0500 MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 7bit Date: Fri, 05 Apr 2013 11:31:18 -0500 From: Larry Rosenman To: Martin Matuska Subject: Re: [CRASH] ZFS recv (fwd)/CURRENT In-Reply-To: <515EFBD8.50900@FreeBSD.org> References: <5159EF29.6000503@FreeBSD.org> <515B4CFA.9080706@FreeBSD.org> <9bc083a21a73839a0932514ea4f48d0d@webmail.lerctr.org> <515EFBD8.50900@FreeBSD.org> Message-ID: <62827019d058b2a905a0d476565b56c0@webmail.lerctr.org> X-Sender: ler@lerctr.org User-Agent: Roundcube Webmail/0.8.5 X-Spam-Score: -5.3 (-----) X-LERCTR-Spam-Score: -5.3 (-----) X-Spam-Report: SpamScore (-5.3/5.0) ALL_TRUSTED=-1, BAYES_00=-1.9, RP_MATCHES_RCVD=-2.373 X-LERCTR-Spam-Report: SpamScore (-5.3/5.0) ALL_TRUSTED=-1, BAYES_00=-1.9, RP_MATCHES_RCVD=-2.373 Cc: freebsd-fs@freebsd.org, freebsd-current@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 05 Apr 2013 16:31:20 -0000 On 2013-04-05 11:29, Martin Matuska wrote: > You can use the attached patch, it should fix the problem. > We are still waiting for code review and a final solution by illumos, > maybe I will commit this preliminary (or final) fix to head. 
Which side does this need to be on? (sending? (since it's dmu_send)? (which in my case is 8-STABLE) > > mm > > On 5.4.2013 16:49, Larry Rosenman wrote: >> On 2013-04-02 16:26, Martin Matuska wrote: >>> On 1. 4. 2013 22:33, Martin Matuska wrote: >>>> This error seems to be limited to sending deduplicated streams. >>>> Does >>>> sending without "-D" work ok? This might be a vendor error as well. >>>> >>>> On 1.4.2013 20:05, Larry Rosenman wrote: >>>>> Re-Sending. Any ideas, guys/gals? >>>>> >>>>> This really gets in my way. >>>>> >>> This may be also related to: >>> http://www.freebsd.org/cgi/query-pr.cgi?pr=176978 >> Taking off -D does get around the panic. >> >> What information can I provide to help fix it? >> >> I *CAN* provide access to both sides via SSH. >> >> -- Larry Rosenman http://www.lerctr.org/~ler Phone: +1 214-642-9640 (c) E-Mail: ler@lerctr.org US Mail: 430 Valona Loop, Round Rock, TX 78681-3893 From owner-freebsd-fs@FreeBSD.ORG Fri Apr 5 16:33:01 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 70E2872E; Fri, 5 Apr 2013 16:33:01 +0000 (UTC) (envelope-from mm@FreeBSD.org) Received: from mail.vx.sk (mail.vx.sk [176.9.45.25]) by mx1.freebsd.org (Postfix) with ESMTP id 339341FB; Fri, 5 Apr 2013 16:33:01 +0000 (UTC) Received: from core.vx.sk (localhost [127.0.0.2]) by mail.vx.sk (Postfix) with ESMTP id 32C9D3643A; Fri, 5 Apr 2013 18:32:54 +0200 (CEST) X-Virus-Scanned: amavisd-new at mail.vx.sk Received: from mail.vx.sk by core.vx.sk (amavisd-new, unix socket) with LMTP id 7EzwkIVhWlDX; Fri, 5 Apr 2013 18:32:52 +0200 (CEST) Received: from [10.9.8.1] (chello085216226145.chello.sk [85.216.226.145]) by mail.vx.sk (Postfix) with ESMTPSA id 026ED36431; Fri, 5 Apr 2013 18:32:51 +0200 (CEST) Message-ID: <515EFCB5.1060703@FreeBSD.org> Date: Fri, 05 Apr 2013 18:32:53 +0200 From: Martin Matuska User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:17.0) Gecko/20130328 Thunderbird/17.0.5 MIME-Version: 1.0 To: Larry Rosenman Subject: Re: [CRASH] ZFS recv (fwd)/CURRENT References: <5159EF29.6000503@FreeBSD.org> <515B4CFA.9080706@FreeBSD.org> <9bc083a21a73839a0932514ea4f48d0d@webmail.lerctr.org> <515EFBD8.50900@FreeBSD.org> <62827019d058b2a905a0d476565b56c0@webmail.lerctr.org> In-Reply-To: <62827019d058b2a905a0d476565b56c0@webmail.lerctr.org> X-Enigmail-Version: 1.5.1 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org, freebsd-current@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 05 Apr 2013 16:33:01 -0000 This is a patch against -CURRENT, so the receiving side in your case. On 5.4.2013 18:31, Larry Rosenman wrote: > On 2013-04-05 11:29, Martin Matuska wrote: >> You can use the attached patch, it should fix the problem. >> We are still waiting for code review and a final solution by illumos, >> maybe I will commit this preliminary (or final) fix to head. > > Which side does this need to be on? > > (sending? (since it's dmu_send)? 
> > (which in my case is 8-STABLE) -- Martin Matuska FreeBSD committer http://blog.vx.sk From owner-freebsd-fs@FreeBSD.ORG Fri Apr 5 19:07:11 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 6E1AD9C8 for ; Fri, 5 Apr 2013 19:07:11 +0000 (UTC) (envelope-from matthew.ahrens@delphix.com) Received: from mail-la0-x232.google.com (mail-la0-x232.google.com [IPv6:2a00:1450:4010:c03::232]) by mx1.freebsd.org (Postfix) with ESMTP id E76598DA for ; Fri, 5 Apr 2013 19:07:10 +0000 (UTC) Received: by mail-la0-f50.google.com with SMTP id ec20so3779698lab.37 for ; Fri, 05 Apr 2013 12:07:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=delphix.com; s=google; h=mime-version:x-received:in-reply-to:references:date:message-id :subject:from:to:cc:content-type; bh=DrXZ67FhYBxCDstvsRgV4GImZWFlTEoJZ1/ZeaaEeRA=; b=BLs75w8atsBALAsGUkEhXtAHBQFoFm2DCcFpQAVEei3X9SoouTF3qny13tAUNlYRUT RndPdzZWVxxWqEoF3BLHYllMuZAPKG5dB2q3PC43CwmD/CdcByLETXxyz/K+PdZKeTOR wWv4ts/mAQuT/xh/ICxqaiLc5+8igactsag5U= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20120113; h=mime-version:x-received:in-reply-to:references:date:message-id :subject:from:to:cc:content-type:x-gm-message-state; bh=DrXZ67FhYBxCDstvsRgV4GImZWFlTEoJZ1/ZeaaEeRA=; b=I8CkRb1JSzE867YwbSHlQ6FwUVOlw9iTPvpvPF5+/0MayOpNl38BJP1aqugocJU50N X+Ejtow1TGsMVzeGegwsg/d88vIRtJM3Z4lGYzF3zigninAEMQ2vmutVwkNmsWXQ5XTj YG5/mLJKIwH0w/bGUE4SEqxmFFe+9bJ/l3AmCUBRDfM2aJi6XvXnz956U0j/U/NH1OOQ oyjKiHrmLnR9sdNbGRjkgw1GOfSFORONR/my11v0aurzMH0KvmPQppe14gxg18xKW1rA kw5oLjBiYNd25prlUVqa+Vx/tCj2v6eOi2FbyQ8wBCAlDsZdSQujg/8pPKOCpI83f34a 4GNw== MIME-Version: 1.0 X-Received: by 10.112.160.66 with SMTP id xi2mr6796996lbb.97.1365188829215; Fri, 05 Apr 2013 12:07:09 -0700 (PDT) Received: by 10.114.26.202 with HTTP; Fri, 5 Apr 2013 12:07:09 -0700 (PDT) In-Reply-To: <515EFBD8.50900@FreeBSD.org> References: <5159EF29.6000503@FreeBSD.org> <515B4CFA.9080706@FreeBSD.org> <9bc083a21a73839a0932514ea4f48d0d@webmail.lerctr.org> <515EFBD8.50900@FreeBSD.org> Date: Fri, 5 Apr 2013 12:07:09 -0700 Message-ID: Subject: Re: [CRASH] ZFS recv (fwd)/CURRENT From: Matthew Ahrens To: Martin Matuska X-Gm-Message-State: ALoCoQk9pz3JtYoWO+BpIln4ivSuN4YQG6n/BHY6P0XyVW+8tA9wInJ4KTPA1HlxkLJ+zdipdD1S Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.14 Cc: freebsd-fs , freebsd-current@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 05 Apr 2013 19:07:11 -0000 I am working on integrating a fix into illumos. FYI, the patch you attached will not quite work, because you are using different tags to hold and release the dataset (FTAG is a macro which has a calling-function-specific value). --matt On Fri, Apr 5, 2013 at 9:29 AM, Martin Matuska wrote: > You can use the attached patch, it should fix the problem. > We are still waiting for code review and a final solution by illumos, > maybe I will commit this preliminary (or final) fix to head. > > mm > > On 5.4.2013 16:49, Larry Rosenman wrote: > > On 2013-04-02 16:26, Martin Matuska wrote: > >> On 1. 4. 2013 22:33, Martin Matuska wrote: > >>> This error seems to be limited to sending deduplicated streams. Does > >>> sending without "-D" work ok? This might be a vendor error as well. 
> >>> > >>> On 1.4.2013 20:05, Larry Rosenman wrote: > >>>> Re-Sending. Any ideas, guys/gals? > >>>> > >>>> This really gets in my way. > >>>> > >> This may be also related to: > >> http://www.freebsd.org/cgi/query-pr.cgi?pr=176978 > > Taking off -D does get around the panic. > > > > What information can I provide to help fix it? > > > > I *CAN* provide access to both sides via SSH. > > > > > > > -- > Martin Matuska > FreeBSD committer > http://blog.vx.sk > > > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > From owner-freebsd-fs@FreeBSD.ORG Fri Apr 5 19:14:08 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 40E37ACF; Fri, 5 Apr 2013 19:14:08 +0000 (UTC) (envelope-from ler@lerctr.org) Received: from thebighonker.lerctr.org (lrosenman-1-pt.tunnel.tserv8.dal1.ipv6.he.net [IPv6:2001:470:1f0e:3ad::2]) by mx1.freebsd.org (Postfix) with ESMTP id 0CD1890E; Fri, 5 Apr 2013 19:14:08 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lerctr.org; s=lerami; h=Message-ID:References:In-Reply-To:Subject:Cc:To:From:Date:Content-Transfer-Encoding:Content-Type:MIME-Version; bh=G/oXQpDNSosTluy7lZPshhLmzL3VrbMN6JBnqbkmN2I=; b=C1weHzuKzc7Pm/F6vibmyDJlV++xyBxBkYKUXY+oi4cMJto0lnMT2YzryDdc3mAOas4dwt8fLYKnafULwIzg4i6nKgl1V/kKnxYTdRLuRmaU62PTo+SOtmgtjxfsuVRLj0935yrgFAmMfYxdDs9qC2WRl9qxSGNfXMqdJ/9LyMY=; Received: from localhost.lerctr.org ([127.0.0.1]:57968 helo=webmail.lerctr.org) by thebighonker.lerctr.org with esmtpa (Exim 4.80.1 (FreeBSD)) (envelope-from ) id 1UOC5P-00040v-2U; Fri, 05 Apr 2013 14:14:07 -0500 Received: from [32.97.110.60] by webmail.lerctr.org with HTTP (HTTP/1.1 POST); Fri, 05 Apr 2013 14:14:07 -0500 MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 8bit Date: Fri, 05 Apr 2013 14:14:07 -0500 From: Larry Rosenman To: Matthew Ahrens Subject: Re: [CRASH] ZFS recv (fwd)/CURRENT In-Reply-To: References: <5159EF29.6000503@FreeBSD.org> <515B4CFA.9080706@FreeBSD.org> <9bc083a21a73839a0932514ea4f48d0d@webmail.lerctr.org> <515EFBD8.50900@FreeBSD.org> Message-ID: <54dd43a8fbd0fbed24c84152f2502471@webmail.lerctr.org> X-Sender: ler@lerctr.org User-Agent: Roundcube Webmail/0.8.5 X-Spam-Score: -5.3 (-----) X-LERCTR-Spam-Score: -5.3 (-----) X-Spam-Report: SpamScore (-5.3/5.0) ALL_TRUSTED=-1, BAYES_00=-1.9, RP_MATCHES_RCVD=-2.373 X-LERCTR-Spam-Report: SpamScore (-5.3/5.0) ALL_TRUSTED=-1, BAYES_00=-1.9, RP_MATCHES_RCVD=-2.373 Cc: freebsd-fs , freebsd-current@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 05 Apr 2013 19:14:08 -0000 On 2013-04-05 14:07, Matthew Ahrens wrote: > I am working on integrating a fix into illumos.  FYI, the patch you > attached will not quite work, because you are using different tags to > hold and release the dataset (FTAG is a macro which has a > calling-function-specific value). > > --matt > Might this also be part of the invalid datastream issue I was seeing? Is there anything I can provide to help Illumos with this issue? Thanks! 
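For anyone wanting to try the dmu_send.c fix in the meantime, the standard source-patch workflow is a reasonable sketch (the patch file path and the GENERIC kernel config are assumptions; the diff's paths are relative to the top of the source tree, hence -p0 from /usr/src):

---------------
# cd /usr/src
# patch -p0 < /path/to/dmu_send.c.patch
# make buildkernel KERNCONF=GENERIC
# make installkernel KERNCONF=GENERIC
# shutdown -r now
---------------

Per the exchange above, this belongs on the receiving (-CURRENT) machine.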
-- Larry Rosenman http://www.lerctr.org/~ler Phone: +1 214-642-9640 (c) E-Mail: ler@lerctr.org US Mail: 430 Valona Loop, Round Rock, TX 78681-3893 From owner-freebsd-fs@FreeBSD.ORG Fri Apr 5 21:13:05 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 5DB04290 for ; Fri, 5 Apr 2013 21:13:05 +0000 (UTC) (envelope-from peter@rulingia.com) Received: from vps.rulingia.com (host-122-100-2-194.octopus.com.au [122.100.2.194]) by mx1.freebsd.org (Postfix) with ESMTP id D3B42DD2 for ; Fri, 5 Apr 2013 21:13:04 +0000 (UTC) Received: from server.rulingia.com (c220-239-237-213.belrs5.nsw.optusnet.com.au [220.239.237.213]) by vps.rulingia.com (8.14.5/8.14.5) with ESMTP id r35LCtmQ046740 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK); Sat, 6 Apr 2013 08:12:56 +1100 (EST) (envelope-from peter@rulingia.com) X-Bogosity: Ham, spamicity=0.000000 Received: from server.rulingia.com (localhost.rulingia.com [127.0.0.1]) by server.rulingia.com (8.14.5/8.14.5) with ESMTP id r35LCnVi049996 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Sat, 6 Apr 2013 08:12:49 +1100 (EST) (envelope-from peter@server.rulingia.com) Received: (from peter@localhost) by server.rulingia.com (8.14.5/8.14.5/Submit) id r35LCnWB049995; Sat, 6 Apr 2013 08:12:49 +1100 (EST) (envelope-from peter) Date: Sat, 6 Apr 2013 08:12:49 +1100 From: Peter Jeremy To: Joar Jegleim Subject: Re: Regarding regular zfs Message-ID: <20130405211249.GB31958@server.rulingia.com> References: MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="82I3+IH0IqGh5yIs" Content-Disposition: inline In-Reply-To: X-PGP-Key: http://www.rulingia.com/keys/peter.pgp User-Agent: Mutt/1.5.21 (2010-09-15) Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 05 Apr 2013 21:13:05 -0000 --82I3+IH0IqGh5yIs Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On 2013-Apr-05 12:17:27 +0200, Joar Jegleim wrote: >I've got this script that initially zfs send's a whole zfs volume, and >for every send after that only sends the diff . So after the initial zfs >send, the diff's usually take less than a minute to send over. Are you deleting old snapshots after the newer snapshots have been sent? >I've had increasing problems on the 'slave', it seem to grind to a >halt for anything between 5-20 seconds after every zfs receive . Everything >on the server halts / hangs completely. Can you clarify which machine you mean by server in the last line above. I presume you mean the slave machine running "zfs recv". If you monitor the "server" with "vmstat -v 1", "gstat -a" and "zfs-mon -a" (the latter is part of ports/sysutils/zfs-stats) during the "freeze", what do you see? Are the disks saturated or idle? Are the "cache" or "free" values close to zero? ># 16GB arc_max ( server got 30GB of ram, but had a couple 'freeze' >situations, suspect zfs.arc ate too much memory) There was a bug in interface between ZFS ARC and FreeBSD VM that resulted in ARC starvation. This was fixed between 8.2 and 8.3/9.0. 
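One concrete way to run the checks suggested above during a 5-20 second stall (a sketch; zfs-mon is installed by the sysutils/zfs-stats port already mentioned, and the interesting signals are whether the disks are saturated or idle, whether free memory collapses, and whether the ARC is shrinking):

---------------
# cd /usr/ports/sysutils/zfs-stats && make install clean
# gstat -a          # only active devices: saturated or idle during the stall?
# vmstat 1          # watch the "fre" column for a collapsing free list
# sysctl kstat.zfs.misc.arcstats.size vfs.zfs.arc_max
# zfs-mon -a        # live ARC/L2ARC hit rates
---------------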
>I suspect it may have something to do with the zfs volume being sent >is mount'ed on the slave, and I'm also doing the backups from the >slave, which means a lot of the time the backup server is rsyncing the >zfs volume being updated. Do you have atime enabled or disabled? What happens when you don't run rsync at the same time? Are you able to break into DDB? >In my setup have I taken the use case for zfs send / receive too far >(?) as in, it's not meant for this kind of syncing and this often, so >there's actually nothing 'wrong'. Apart from the rsync whilst receiving, everything sounds OK. It's possible that the rsync whilst receiving is triggering a bug. -- Peter Jeremy --82I3+IH0IqGh5yIs Content-Type: application/pgp-signature -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.19 (FreeBSD) iEYEARECAAYFAlFfPlEACgkQ/opHv/APuIdhDACdH8TJwA++wALt80XjP5nH0bSl wngAnRFGty1FAplmb4kFndp89nFjTXQK =CMl0 -----END PGP SIGNATURE----- --82I3+IH0IqGh5yIs-- From owner-freebsd-fs@FreeBSD.ORG Fri Apr 5 22:13:10 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id E1FBEF6F for ; Fri, 5 Apr 2013 22:13:10 +0000 (UTC) (envelope-from lkchen@k-state.edu) Received: from ksu-out.merit.edu (ksu-out.merit.edu [207.75.117.132]) by mx1.freebsd.org (Postfix) with ESMTP id AE7A8FD4 for ; Fri, 5 Apr 2013 22:13:09 +0000 (UTC) X-Merit-ExtLoop1: 1 X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: AgMFAIBKX1HPS3TT/2dsb2JhbABIAxaCcDaDKb8IFnSCHwEBBSNWDAINGgINGQIdPAYTiBQMrzCJQ4kRBIEfjDaBWoIcgRMDp3uDJ4FXNQ X-IronPort-AV: E=Sophos;i="4.87,417,1363147200"; d="scan'208";a="212527876" X-MERIT-SOURCE: KSU Received: from ksu-sfpop-mailstore02.merit.edu ([207.75.116.211]) by sfpop-ironport07.merit.edu with ESMTP; 05 Apr 2013 18:09:27 -0400 Date: Fri, 5 Apr 2013 18:09:26 -0400 (EDT) From: "Lawrence K. Chen, P.Eng." To: Quartz Message-ID: <1964862508.3535448.1365199766508.JavaMail.root@k-state.edu> In-Reply-To: <514F5AD5.8000006@sneakertech.com> Subject: Re: ZFS: Failed pool causes system to hang MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [129.130.0.181] X-Mailer: Zimbra 7.2.2_GA_2852 (ZimbraWebClient - GC25 ([unknown])/7.2.2_GA_2852) Cc: FreeBSD FS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 05 Apr 2013 22:13:10 -0000 So, this thread seems to just stop....and I can't see if it was resolved or not. Anyways, my input would be did you wait long enough to see if the system will boot before declaring it hung? I've had my system crash at bad times, which has resulted in the appearance that the boot is hung...but it's busy churning away.... Recently, I was destroying a large dataset on a raidz pool when the system had crashed. It seemed hung at trying to mount root....but it was churning away....and since I had seen it eventually boot after a similar incident, I left the system alone....and after almost 24 hours it did finally finish booting and everything was fine. Supposedly there's been a fix to make the destroy of large datasets faster...hopefully that'll make it in soon. OTOH, I had a system that corrupted the zpool so badly that it would panic when importing it....though I was able to import the pool read-only....so I was able to recover files and then start over again.
The second time, it got so corrupted that it wouldn't import under any condition. So, I had to start over from scratch. Eventually, I tracked it down to a bad DIMM. ----- Original Message ----- > > > Sure, there are bugs that inhibit the ability to reboot from > > command-line. We shouldn't shrug them off, but we should also > > acknowledge that the software is very complicated, and rooting out > > such bugs takes time. Plus, this is a volunteer project. > > No, I'm not trying to say that I expect something as complicated as a > whole OS to be bug free, and I do well recognize that it's no one's > obligation to fix anything anyway.... I'm just saying that starting > an > argument about the remote-ness of the reboot is missing the point. > > > > Worst case, you have IP-addressable PDUs > > That's what we usually go with- guaranteed to work with any hardware. > That plus a vnc capable kvm. > -- Who: Lawrence K. Chen, P.Eng. - W0LKC - Senior Unix Systems Administrator For: Enterprise Server Technologies (EST) -- & SafeZone Ally Snail: Computing and Telecommunications Services (CTS) Kansas State University, 109 East Stadium, Manhattan, KS 66506-3102 Phone: (785) 532-4916 - Fax: (785) 532-3515 - Email: lkchen@ksu.edu Web: http://www-personal.ksu.edu/~lkchen - Where: 11 Hale Library From owner-freebsd-fs@FreeBSD.ORG Fri Apr 5 23:41:23 2013 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id D4EABC5D for ; Fri, 5 Apr 2013 23:41:23 +0000 (UTC) (envelope-from beastie@tardisi.com) Received: from mho-02-ewr.mailhop.org (mho-02-ewr.mailhop.org [204.13.248.72]) by mx1.freebsd.org (Postfix) with ESMTP id B25E1292 for ; Fri, 5 Apr 2013 23:41:23 +0000 (UTC) Received: from ip70-179-144-108.fv.ks.cox.net ([70.179.144.108] helo=zen.lhaven.homeip.net) by mho-02-ewr.mailhop.org with esmtpsa (TLSv1:CAMELLIA256-SHA:256) (Exim 4.72) (envelope-from ) id 1UOGFx-0001OV-7Q; Fri, 05 Apr 2013 23:41:17 +0000 X-Mail-Handler: Dyn Standard SMTP by Dyn X-Originating-IP: 70.179.144.108 X-Report-Abuse-To: abuse@dyndns.com (see http://www.dyndns.com/services/sendlabs/outbound_abuse.html for abuse reporting information) X-MHO-User: U2FsdGVkX18Ue4KAmqjSAHa+6H7RhjbPSCPQ2FIvkSc= Message-ID: <515F611B.30000@tardisi.com> Date: Fri, 05 Apr 2013 18:41:15 -0500 From: The BSD Dreamer User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:17.0) Gecko/20130311 Thunderbird/17.0.4 MIME-Version: 1.0 To: dennis berger Subject: Re: ZFS in production enviroments References: <4BC15B7B-4893-4167-ACF0-1CB066DE4EE3@nipsi.de> In-Reply-To: <4BC15B7B-4893-4167-ACF0-1CB066DE4EE3@nipsi.de> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 05 Apr 2013 23:41:23 -0000 I presume the question is ZFS on FreeBSD in production environments, rather than ZFS in general or ZFS on something else.... ....something else being the native ZFS for Linux work...which I have played with now and then, but so far haven't had enough success with it to consider it for long term use, let alone production.
But, at work we've been using ZFS on Solaris in production ever since it came out. There were some problems in our initial systems, where we did have Sun engineers out to fiddle with things like ZIL....or something in the OS about getting interrupts to spread out to other threads on the core (coolthreads). We like ZFS so much that it opened the door for FreeBSD. So, we have a couple of production systems: a FreeBSD 9.0 system with a raidz2 zroot consisting of 5 2TB drives and a FreeBSD 9.1 system with a raidz1 zroot consisting of 6 2TB drives. There are plans to have more in the future... as we move forward, Solaris systems are for things that have to be on Solaris (like Oracle databases). Though we are also looking at SmartOS and OmniOS. SmartOS for where we want it to run the hardware that we're doing KVMs on, and there are reasons for not using our vSphere environment. Or, OmniOS for where we want to run native Solaris applications, but don't need/want to pay the premium of Oracle hardware and Oracle support.....the application is available for either Solaris or Linux, and we very much want ZFS.... I've thought about playing with these on my own...but don't have any systems that meet the hardware requirements. (EPT) -- Name: Lawrence "The Dreamer" Chen Email: beastie@tardisi.com Snail: 1530 College Ave, A5 Blog: http://lawrencechen.net Manhattan, KS 66502-2768 Phone: 785-789-4132 From owner-freebsd-fs@FreeBSD.ORG Sat Apr 6 14:44:30 2013 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 201EE593; Sat, 6 Apr 2013 14:44:30 +0000 (UTC) (envelope-from bfriesen@simple.dallas.tx.us) Received: from blade.simplesystems.org (blade.simplesystems.org [65.66.246.74]) by mx1.freebsd.org (Postfix) with ESMTP id DDD73FC6; Sat, 6 Apr 2013 14:44:29 +0000 (UTC) Received: from freddy.simplesystems.org (freddy.simplesystems.org [65.66.246.65]) by blade.simplesystems.org (8.14.4+Sun/8.14.4) with ESMTP id r36EiLOG020559; Sat, 6 Apr 2013 09:44:21 -0500 (CDT) Date: Sat, 6 Apr 2013 09:44:21 -0500 (CDT) From: Bob Friesenhahn X-X-Sender: bfriesen@freddy.simplesystems.org To: "Kenneth D. Merry" Subject: Re: NFS File Handle Affinity ported to new NFS server In-Reply-To: <20130404194719.GA79482@nargothrond.kdm.org> Message-ID: References: <20130404194719.GA79482@nargothrond.kdm.org> User-Agent: Alpine 2.01 (GSO 1266 2009-07-14) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.2.2 (blade.simplesystems.org [65.66.246.90]); Sat, 06 Apr 2013 09:44:21 -0500 (CDT) Cc: fs@freebsd.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 06 Apr 2013 14:44:30 -0000 On Thu, 4 Apr 2013, Kenneth D. Merry wrote: > Hi folks, > > I have ported the old NFS server's File Handle Affinity (FHA) code so that > it works with both the old and new NFS servers. > > This sped up sequential reads from ZFS very significantly in my test > scenarios. > > e.g. a single stream read off of an 8-drive RAIDZ2 went from about 75MB/sec > to over 200MB/sec. > > And with 7 read streams from 7 Linux clients coming off of a 36-drive > RAID-10, I went from about 700-800MB/sec to 1.7GB/sec. (This is over > 10Gb ethernet, with 2 aggregated 10Gb ports on the server end.)
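As an aside for readers who want to experiment once this port lands: in the FreeBSD releases that eventually shipped FHA for the new NFS server, the behavior is controlled by sysctl knobs under vfs.nfsd.fha. Those names are an assumption here and may not match this early patch, so check what a given kernel actually exposes first:

---------------
# sysctl -a | grep fha            # list whatever FHA knobs this kernel has
# sysctl vfs.nfsd.fha.enable=1    # hypothetical: turn file handle affinity on
---------------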
> > The reason for the speedup is that Linux is doing a lot of prefetching, and > those read requests all wound up going to separate threads in the NFS > server. That confused the ZFS read prefetch code, and caused it to throw > away a lot of data. This seems like a huge improvement. It also raises questions for me. I have been complaining about single-file zfs prefetch performance (to slow of a prefetch ramp) by user-space applications on the Illumos zfs list. Does the underlying zfs code actually know about requesting "threads" rather than an open file handle? If multiple threads read in turn from the same user-space open file handle, will the prefetch become confused due to use of multiple reading threads? Bob -- Bob Friesenhahn bfriesen@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer, http://www.GraphicsMagick.org/ From owner-freebsd-fs@FreeBSD.ORG Sat Apr 6 21:59:23 2013 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id B31A3C2C; Sat, 6 Apr 2013 21:59:23 +0000 (UTC) (envelope-from john@theusgroup.com) Received: from theusgroup.com (theusgroup.com [64.122.243.222]) by mx1.freebsd.org (Postfix) with ESMTP id 9E1C42AA; Sat, 6 Apr 2013 21:59:23 +0000 (UTC) From: John Theus To: Martin Matuska Subject: Re: [CFT] libzfs_core for 9-STABLE In-reply-to: <514C50D6.9080302@FreeBSD.org> References: <514C50D6.9080302@FreeBSD.org> Comments: In-reply-to Martin Matuska message dated "Fri, 22 Mar 2013 13:38:46 +0100." Date: Sat, 06 Apr 2013 14:50:41 -0700 Message-Id: <20130406215041.77E9EDC1@server.theusgroup.com> Cc: freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 06 Apr 2013 21:59:23 -0000 >Hello all, > >libzfs_core and the rewritten locking code around dsl_sync_dataset >have been commited to -HEAD: >http://svnweb.freebsd.org/changeset/base/248571 > >The scheduled merge date to 9-STABLE is around Apr 21, 2013. > >Early adopters can test new code by applying the following patch >(against stable/9 r248611): >http://people.freebsd.org/~mm/patches/zfs/stable-9-248611-lzc.patch.gz > >Steps to apply to a clean checked-out source: >cd /path/to/src >patch -p0 < /path/to/stable-9-248611-lzc.patch > >Alternatively you can download a pre-compiled amd64 mfsBSD image for testing: >(see http://mfsbsd.vx.sk for more information on mfsBSD) >http://mfsbsd.vx.sk/files/testing/ > >I am primarily interested in the following areas of feedback: >- stability >- backward compatibility (new kernel, old utilities) > >Feedback and suggestions are welcome. > >-- >Martin Matuska >FreeBSD committer >http://blog.vx.sk I upgraded both kernel and world on a machine that was running 9.1-STABLE r248385 to the above: 9.1-STABLE #1 r248619M: Mon Mar 25 14:16:45 PDT 2013 This machine is used for zfs send/recv backups and it has been stable since the upgrade. By a chance typo, I found a problem in zfs release which doesn't exist on an unpatched machine. 
To reproduce just do the following: # zfs hold test filesystem@snap # zfs release test filesystem@snap1 # snap1 does not exist # (no error message produced) # zfs release test filesystem@snap # zfs release test filesystem@snap internal error: No such process [1] 857 abort zfs release test # John Theus TheUsGroup.com From owner-freebsd-fs@FreeBSD.ORG Sat Apr 6 23:06:00 2013 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 9CA087CD; Sat, 6 Apr 2013 23:06:00 +0000 (UTC) (envelope-from linimon@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) by mx1.freebsd.org (Postfix) with ESMTP id 7578B693; Sat, 6 Apr 2013 23:06:00 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.6/8.14.6) with ESMTP id r36N60gR003142; Sat, 6 Apr 2013 23:06:00 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.6/8.14.6/Submit) id r36N60df003141; Sat, 6 Apr 2013 23:06:00 GMT (envelope-from linimon) Date: Sat, 6 Apr 2013 23:06:00 GMT Message-Id: <201304062306.r36N60df003141@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Subject: Re: kern/177658: [ufs] FreeBSD panics after get full filesystem with ufs snapshot X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 06 Apr 2013 23:06:00 -0000 Old Synopsis: FreeBSD panics after get full filesystem with ufs snapshot New Synopsis: [ufs] FreeBSD panics after get full filesystem with ufs snapshot Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Sat Apr 6 23:05:32 UTC 2013 Responsible-Changed-Why: Over to maintainer(s). 
http://www.freebsd.org/cgi/query-pr.cgi?pr=177658 From owner-freebsd-fs@FreeBSD.ORG Sat Apr 6 23:23:58 2013 Return-Path: Delivered-To: freebsd-fs@smarthost.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id D15F4E06; Sat, 6 Apr 2013 23:23:58 +0000 (UTC) (envelope-from linimon@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) by mx1.freebsd.org (Postfix) with ESMTP id AAB6D732; Sat, 6 Apr 2013 23:23:58 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.6/8.14.6) with ESMTP id r36NNwIJ007845; Sat, 6 Apr 2013 23:23:58 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.6/8.14.6/Submit) id r36NNwo1007844; Sat, 6 Apr 2013 23:23:58 GMT (envelope-from linimon) Date: Sat, 6 Apr 2013 23:23:58 GMT Message-Id: <201304062323.r36NNwo1007844@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Subject: Re: kern/172942: [smbfs] Unmounting a smb mount when the server became unavailable causes kernel panic X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 06 Apr 2013 23:23:58 -0000 Old Synopsis: Unmounting a smb mount when the server became unavailable causes kernel panic New Synopsis: [smbfs] Unmounting a smb mount when the server became unavailable causes kernel panic Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Sat Apr 6 23:23:45 UTC 2013 Responsible-Changed-Why: Over to maintainer(s). http://www.freebsd.org/cgi/query-pr.cgi?pr=172942