From owner-freebsd-fs@FreeBSD.ORG Sun Jun 26 21:35:38 2011
Return-Path:
Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id AE94F1065670 for ; Sun, 26 Jun 2011 21:35:38 +0000 (UTC) (envelope-from bsd@vink.pl)
Received: from mail-bw0-f54.google.com (mail-bw0-f54.google.com [209.85.214.54]) by mx1.freebsd.org (Postfix) with ESMTP id 3FF838FC0A for ; Sun, 26 Jun 2011 21:35:37 +0000 (UTC)
Received: by bwa20 with SMTP id 20so1272259bwa.13 for ; Sun, 26 Jun 2011 14:35:37 -0700 (PDT)
Received: by 10.204.16.70 with SMTP id n6mr3616881bka.87.1309124135998; Sun, 26 Jun 2011 14:35:35 -0700 (PDT)
Received: from mail-bw0-f54.google.com (mail-bw0-f54.google.com [209.85.214.54]) by mx.google.com with ESMTPS id af13sm3578590bkc.19.2011.06.26.14.35.35 (version=SSLv3 cipher=OTHER); Sun, 26 Jun 2011 14:35:35 -0700 (PDT)
Received: by bwa20 with SMTP id 20so1272234bwa.13 for ; Sun, 26 Jun 2011 14:35:34 -0700 (PDT)
MIME-Version: 1.0
Received: by 10.204.130.19 with SMTP id q19mr125410bks.0.1309124134869; Sun, 26 Jun 2011 14:35:34 -0700 (PDT)
Received: by 10.204.120.66 with HTTP; Sun, 26 Jun 2011 14:35:34 -0700 (PDT)
In-Reply-To: <4E0435B6.30004@fsn.hu>
References: <4E0435B6.30004@fsn.hu>
Date: Sun, 26 Jun 2011 23:35:34 +0200
Message-ID:
From: Wiktor Niesiobedzki
To: freebsd-fs@freebsd.org
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable
Cc:
Subject: Re: ZFS L2ARC hit ratio
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: Filesystems
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Sun, 26 Jun 2011 21:35:38 -0000

2011/6/24 Attila Nagy :
> On 06/21/11 21:59, Wiktor Niesiobedzki wrote:
>>
>> I've recently migrated my 8.2 box to recent stable:
>> FreeBSD kadlubek.vink.pl 8.2-STABLE FreeBSD 8.2-STABLE #22: Tue Jun  7
>> 03:43:29 CEST 2011     root@kadlubek:/usr/obj/usr/src/sys/KADLUB  i386
>>
>> and upgraded my ZFS/ZPOOL to the newest versions. Through my
>> monitoring, though, I've noticed some decline in the L2ARC hit ratio
>> (the server is not busy, so it doesn't look that suspicious). I've made
>> some tests today and I guess that there might be some problem:
>
> Likely because vfs.zfs.l2arc_noprefetch is now 1.
>

I'm running on i386, so prefetch for me was always disabled. It turns out
that the "bad" behaviour is a result of putting:

vfs.zfs.prefetch_disable=0

in loader.conf. I thought that this was somehow handled better in ZFS v28
than in v15. Anyhow, it looks like prefetching should be left disabled for
now.
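For reference, the settings and counters being discussed can be checked
from the shell; this is only a minimal sketch, assuming the stock arcstats
kstat sysctls exported by the FreeBSD ZFS port (the numbers in the ratio
calculation are made up for illustration):

  # current prefetch-related tunables
  sysctl vfs.zfs.prefetch_disable vfs.zfs.l2arc_noprefetch
  # raw L2ARC hit/miss counters
  sysctl kstat.zfs.misc.arcstats.l2_hits kstat.zfs.misc.arcstats.l2_misses
  # hit ratio = l2_hits / (l2_hits + l2_misses), e.g. for 900 hits / 100 misses:
  echo "scale=2; 900 / (900 + 100)" | bc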
Cheers, Wiktor Niesiobedzki From owner-freebsd-fs@FreeBSD.ORG Mon Jun 27 11:07:01 2011 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 533B5106566B for ; Mon, 27 Jun 2011 11:07:01 +0000 (UTC) (envelope-from owner-bugmaster@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 422E38FC1A for ; Mon, 27 Jun 2011 11:07:01 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id p5RB71Io071816 for ; Mon, 27 Jun 2011 11:07:01 GMT (envelope-from owner-bugmaster@FreeBSD.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id p5RB70OM071814 for freebsd-fs@FreeBSD.org; Mon, 27 Jun 2011 11:07:00 GMT (envelope-from owner-bugmaster@FreeBSD.org) Date: Mon, 27 Jun 2011 11:07:00 GMT Message-Id: <201106271107.p5RB70OM071814@freefall.freebsd.org> X-Authentication-Warning: freefall.freebsd.org: gnats set sender to owner-bugmaster@FreeBSD.org using -f From: FreeBSD bugmaster To: freebsd-fs@FreeBSD.org Cc: Subject: Current problem reports assigned to freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 27 Jun 2011 11:07:01 -0000 Note: to view an individual PR, use: http://www.freebsd.org/cgi/query-pr.cgi?pr=(number). The following is a listing of current problems submitted by FreeBSD users. These represent problem reports covering all versions including experimental development code and obsolete releases. S Tracker Resp. 
Description -------------------------------------------------------------------------------- o kern/158231 fs [nullfs] panic on unmounting nullfs mounted over ufs o o kern/157929 fs [nfs] NFS slow read o kern/157728 fs [zfs] zfs (v28) incremental receive may leave behind t o kern/157722 fs [geli] unable to newfs a geli encrypted partition o kern/157399 fs [zfs] trouble with: mdconfig force delete && zfs strip o kern/157179 fs [zfs] zfs/dbuf.c: panic: solaris assert: arc_buf_remov o kern/156933 fs [zfs] ZFS receive after read on readonly=on filesystem o kern/156797 fs [zfs] [panic] Double panic with FreeBSD 9-CURRENT and o kern/156781 fs [zfs] zfs is losing the snapshot directory, p kern/156545 fs [ufs] mv could break UFS on SMP systems o kern/156193 fs [ufs] [hang] UFS snapshot hangs && deadlocks processes o kern/156168 fs [nfs] [panic] Kernel panic under concurrent access ove o kern/156039 fs [nullfs] [unionfs] nullfs + unionfs do not compose, re o kern/155615 fs [zfs] zfs v28 broken on sparc64 -current o kern/155587 fs [zfs] [panic] kernel panic with zfs o kern/155411 fs [regression] [8.2-release] [tmpfs]: mount: tmpfs : No o kern/155199 fs [ext2fs] ext3fs mounted as ext2fs gives I/O errors o bin/155104 fs [zfs][patch] use /dev prefix by default when importing o kern/154930 fs [zfs] cannot delete/unlink file from full volume -> EN o kern/154828 fs [msdosfs] Unable to create directories on external USB o kern/154491 fs [smbfs] smb_co_lock: recursive lock for object 1 o kern/154447 fs [zfs] [panic] Occasional panics - solaris assert somew p kern/154228 fs [md] md getting stuck in wdrain state o kern/153996 fs [zfs] zfs root mount error while kernel is not located o kern/153847 fs [nfs] [panic] Kernel panic from incorrect m_free in nf o kern/153753 fs [zfs] ZFS v15 - grammatical error when attempting to u o kern/153716 fs [zfs] zpool scrub time remaining is incorrect o kern/153695 fs [patch] [zfs] Booting from zpool created on 4k-sector o kern/153680 fs [xfs] 8.1 failing to mount XFS partitions o kern/153520 fs [zfs] Boot from GPT ZFS root on HP BL460c G1 unstable o kern/153418 fs [zfs] [panic] Kernel Panic occurred writing to zfs vol o kern/153351 fs [zfs] locking directories/files in ZFS o bin/153258 fs [patch][zfs] creating ZVOLs requires `refreservation' s kern/153173 fs [zfs] booting from a gzip-compressed dataset doesn't w o kern/153126 fs [zfs] vdev failure, zpool=peegel type=vdev.too_small p kern/152488 fs [tmpfs] [patch] mtime of file updated when only inode o kern/152022 fs [nfs] nfs service hangs with linux client [regression] o kern/151942 fs [zfs] panic during ls(1) zfs snapshot directory o kern/151905 fs [zfs] page fault under load in /sbin/zfs o kern/151845 fs [smbfs] [patch] smbfs should be upgraded to support Un o bin/151713 fs [patch] Bug in growfs(8) with respect to 32-bit overfl o kern/151648 fs [zfs] disk wait bug o kern/151629 fs [fs] [patch] Skip empty directory entries during name o kern/151330 fs [zfs] will unshare all zfs filesystem after execute a o kern/151326 fs [nfs] nfs exports fail if netgroups contain duplicate o kern/151251 fs [ufs] Can not create files on filesystem with heavy us o kern/151226 fs [zfs] can't delete zfs snapshot o kern/151111 fs [zfs] vnodes leakage during zfs unmount o kern/150503 fs [zfs] ZFS disks are UNAVAIL and corrupted after reboot o kern/150501 fs [zfs] ZFS vdev failure vdev.bad_label on amd64 o kern/150390 fs [zfs] zfs deadlock when arcmsr reports drive faulted o kern/150336 fs [nfs] mountd/nfsd became confused; refused to reload n 
o kern/150207 fs zpool(1): zpool import -d /dev tries to open weird dev o kern/149208 fs mksnap_ffs(8) hang/deadlock o kern/149173 fs [patch] [zfs] make OpenSolaris installa o kern/149015 fs [zfs] [patch] misc fixes for ZFS code to build on Glib o kern/149014 fs [zfs] [patch] declarations in ZFS libraries/utilities o kern/149013 fs [zfs] [patch] make ZFS makefiles use the libraries fro o kern/148504 fs [zfs] ZFS' zpool does not allow replacing drives to be o kern/148490 fs [zfs]: zpool attach - resilver bidirectionally, and re o kern/148368 fs [zfs] ZFS hanging forever on 8.1-PRERELEASE o bin/148296 fs [zfs] [loader] [patch] Very slow probe in /usr/src/sys o kern/148204 fs [nfs] UDP NFS causes overload o kern/148138 fs [zfs] zfs raidz pool commands freeze o kern/147903 fs [zfs] [panic] Kernel panics on faulty zfs device o kern/147881 fs [zfs] [patch] ZFS "sharenfs" doesn't allow different " o kern/147790 fs [zfs] zfs set acl(mode|inherit) fails on existing zfs o kern/147560 fs [zfs] [boot] Booting 8.1-PRERELEASE raidz system take o kern/147420 fs [ufs] [panic] ufs_dirbad, nullfs, jail panic (corrupt o kern/146941 fs [zfs] [panic] Kernel Double Fault - Happens constantly o kern/146786 fs [zfs] zpool import hangs with checksum errors o kern/146708 fs [ufs] [panic] Kernel panic in softdep_disk_write_compl o kern/146528 fs [zfs] Severe memory leak in ZFS on i386 o kern/146502 fs [nfs] FreeBSD 8 NFS Client Connection to Server s kern/145712 fs [zfs] cannot offline two drives in a raidz2 configurat o kern/145411 fs [xfs] [panic] Kernel panics shortly after mounting an o bin/145309 fs bsdlabel: Editing disk label invalidates the whole dev o kern/145272 fs [zfs] [panic] Panic during boot when accessing zfs on o kern/145246 fs [ufs] dirhash in 7.3 gratuitously frees hashes when it o kern/145238 fs [zfs] [panic] kernel panic on zpool clear tank o kern/145229 fs [zfs] Vast differences in ZFS ARC behavior between 8.0 o kern/145189 fs [nfs] nfsd performs abysmally under load o kern/144929 fs [ufs] [lor] vfs_bio.c + ufs_dirhash.c p kern/144447 fs [zfs] sharenfs fsunshare() & fsshare_main() non functi o kern/144416 fs [panic] Kernel panic on online filesystem optimization s kern/144415 fs [zfs] [panic] kernel panics on boot after zfs crash o kern/144234 fs [zfs] Cannot boot machine with recent gptzfsboot code o kern/143825 fs [nfs] [panic] Kernel panic on NFS client o bin/143572 fs [zfs] zpool(1): [patch] The verbose output from iostat o kern/143212 fs [nfs] NFSv4 client strange work ... o kern/143184 fs [zfs] [lor] zfs/bufwait LOR o kern/142914 fs [zfs] ZFS performance degradation over time o kern/142878 fs [zfs] [vfs] lock order reversal o kern/142597 fs [ext2fs] ext2fs does not work on filesystems with real o kern/142489 fs [zfs] [lor] allproc/zfs LOR o kern/142466 fs Update 7.2 -> 8.0 on Raid 1 ends with screwed raid [re o kern/142306 fs [zfs] [panic] ZFS drive (from OSX Leopard) causes two o kern/142068 fs [ufs] BSD labels are got deleted spontaneously o kern/141897 fs [msdosfs] [panic] Kernel panic. 
msdofs: file name leng o kern/141463 fs [nfs] [panic] Frequent kernel panics after upgrade fro o kern/141305 fs [zfs] FreeBSD ZFS+sendfile severe performance issues ( o kern/141091 fs [patch] [nullfs] fix panics with DIAGNOSTIC enabled o kern/141086 fs [nfs] [panic] panic("nfs: bioread, not dir") on FreeBS o kern/141010 fs [zfs] "zfs scrub" fails when backed by files in UFS2 o kern/140888 fs [zfs] boot fail from zfs root while the pool resilveri o kern/140661 fs [zfs] [patch] /boot/loader fails to work on a GPT/ZFS- o kern/140640 fs [zfs] snapshot crash o kern/140134 fs [msdosfs] write and fsck destroy filesystem integrity o kern/140068 fs [smbfs] [patch] smbfs does not allow semicolon in file o kern/139725 fs [zfs] zdb(1) dumps core on i386 when examining zpool c o kern/139715 fs [zfs] vfs.numvnodes leak on busy zfs p bin/139651 fs [nfs] mount(8): read-only remount of NFS volume does n o kern/139597 fs [patch] [tmpfs] tmpfs initializes va_gen but doesn't u o kern/139564 fs [zfs] [panic] 8.0-RC1 - Fatal trap 12 at end of shutdo o kern/139407 fs [smbfs] [panic] smb mount causes system crash if remot o kern/138662 fs [panic] ffs_blkfree: freeing free block o kern/138421 fs [ufs] [patch] remove UFS label limitations o kern/138202 fs mount_msdosfs(1) see only 2Gb o kern/136968 fs [ufs] [lor] ufs/bufwait/ufs (open) o kern/136945 fs [ufs] [lor] filedesc structure/ufs (poll) o kern/136944 fs [ffs] [lor] bufwait/snaplk (fsync) o kern/136873 fs [ntfs] Missing directories/files on NTFS volume o kern/136865 fs [nfs] [patch] NFS exports atomic and on-the-fly atomic p kern/136470 fs [nfs] Cannot mount / in read-only, over NFS o kern/135546 fs [zfs] zfs.ko module doesn't ignore zpool.cache filenam o kern/135469 fs [ufs] [panic] kernel crash on md operation in ufs_dirb o kern/135050 fs [zfs] ZFS clears/hides disk errors on reboot o kern/134491 fs [zfs] Hot spares are rather cold... 
o kern/133676 fs [smbfs] [panic] umount -f'ing a vnode-based memory dis o kern/133174 fs [msdosfs] [patch] msdosfs must support multibyte inter o kern/132960 fs [ufs] [panic] panic:ffs_blkfree: freeing free frag o kern/132397 fs reboot causes filesystem corruption (failure to sync b o kern/132331 fs [ufs] [lor] LOR ufs and syncer o kern/132237 fs [msdosfs] msdosfs has problems to read MSDOS Floppy o kern/132145 fs [panic] File System Hard Crashes o kern/131441 fs [unionfs] [nullfs] unionfs and/or nullfs not combineab o kern/131360 fs [nfs] poor scaling behavior of the NFS server under lo o kern/131342 fs [nfs] mounting/unmounting of disks causes NFS to fail o bin/131341 fs makefs: error "Bad file descriptor" on the mount poin o kern/130920 fs [msdosfs] cp(1) takes 100% CPU time while copying file o kern/130210 fs [nullfs] Error by check nullfs f kern/130133 fs [panic] [zfs] 'kmem_map too small' caused by make clea o kern/129760 fs [nfs] after 'umount -f' of a stale NFS share FreeBSD l o kern/129488 fs [smbfs] Kernel "bug" when using smbfs in smbfs_smb.c: o kern/129231 fs [ufs] [patch] New UFS mount (norandom) option - mostly o kern/129152 fs [panic] non-userfriendly panic when trying to mount(8) o kern/127787 fs [lor] [ufs] Three LORs: vfslock/devfs/vfslock, ufs/vfs f kern/127375 fs [zfs] If vm.kmem_size_max>"1073741823" then write spee o bin/127270 fs fsck_msdosfs(8) may crash if BytesPerSec is zero o kern/127029 fs [panic] mount(8): trying to mount a write protected zi f kern/126703 fs [panic] [zfs] _mtx_lock_sleep: recursed on non-recursi o kern/126287 fs [ufs] [panic] Kernel panics while mounting an UFS file o kern/125895 fs [ffs] [panic] kernel: panic: ffs_blkfree: freeing free s kern/125738 fs [zfs] [request] SHA256 acceleration in ZFS o kern/123939 fs [msdosfs] corrupts new files f sparc/123566 fs [zfs] zpool import issue: EOVERFLOW o kern/122380 fs [ffs] ffs_valloc:dup alloc (Soekris 4801/7.0/USB Flash o bin/122172 fs [fs]: amd(8) automount daemon dies on 6.3-STABLE i386, o bin/121898 fs [nullfs] pwd(1)/getcwd(2) fails with Permission denied o bin/121366 fs [zfs] [patch] Automatic disk scrubbing from periodic(8 o bin/121072 fs [smbfs] mount_smbfs(8) cannot normally convert the cha o kern/120483 fs [ntfs] [patch] NTFS filesystem locking changes o kern/120482 fs [ntfs] [patch] Sync style changes between NetBSD and F f kern/120210 fs [zfs] [panic] reboot after panic: solaris assert: arc_ o kern/118912 fs [2tb] disk sizing/geometry problem with large array o kern/118713 fs [minidump] [patch] Display media size required for a k o bin/118249 fs [ufs] mv(1): moving a directory changes its mtime o kern/118126 fs [nfs] [patch] Poor NFS server write performance o kern/118107 fs [ntfs] [panic] Kernel panic when accessing a file at N o kern/117954 fs [ufs] dirhash on very large directories blocks the mac o bin/117315 fs [smbfs] mount_smbfs(8) and related options can't mount o kern/117314 fs [ntfs] Long-filename only NTFS fs'es cause kernel pani o kern/117158 fs [zfs] zpool scrub causes panic if geli vdevs detach on o bin/116980 fs [msdosfs] [patch] mount_msdosfs(8) resets some flags f o conf/116931 fs lack of fsck_cd9660 prevents mounting iso images with o kern/116583 fs [ffs] [hang] System freezes for short time when using o bin/115361 fs [zfs] mount(8) gets into a state where it won't set/un o kern/114955 fs [cd9660] [patch] [request] support for mask,dirmask,ui o kern/114847 fs [ntfs] [patch] [request] dirmask support for NTFS ala o kern/114676 fs [ufs] snapshot creation panics: 
snapacct_ufs2: bad blo o bin/114468 fs [patch] [request] add -d option to umount(8) to detach o kern/113852 fs [smbfs] smbfs does not properly implement DFS referral o bin/113838 fs [patch] [request] mount(8): add support for relative p o bin/113049 fs [patch] [request] make quot(8) use getopt(3) and show o kern/112658 fs [smbfs] [patch] smbfs and caching problems (resolves b o kern/111843 fs [msdosfs] Long Names of files are incorrectly created o kern/111782 fs [ufs] dump(8) fails horribly for large filesystems s bin/111146 fs [2tb] fsck(8) fails on 6T filesystem o kern/109024 fs [msdosfs] [iconv] mount_msdosfs: msdosfs_iconv: Operat o kern/109010 fs [msdosfs] can't mv directory within fat32 file system o bin/107829 fs [2TB] fdisk(8): invalid boundary checking in fdisk / w o kern/106107 fs [ufs] left-over fsck_snapshot after unfinished backgro o kern/104406 fs [ufs] Processes get stuck in "ufs" state under persist o kern/104133 fs [ext2fs] EXT2FS module corrupts EXT2/3 filesystems o kern/103035 fs [ntfs] Directories in NTFS mounted disc images appear o kern/101324 fs [smbfs] smbfs sometimes not case sensitive when it's s o kern/99290 fs [ntfs] mount_ntfs ignorant of cluster sizes s bin/97498 fs [request] newfs(8) has no option to clear the first 12 o kern/97377 fs [ntfs] [patch] syntax cleanup for ntfs_ihash.c o kern/95222 fs [cd9660] File sections on ISO9660 level 3 CDs ignored o kern/94849 fs [ufs] rename on UFS filesystem is not atomic o bin/94810 fs fsck(8) incorrectly reports 'file system marked clean' o kern/94769 fs [ufs] Multiple file deletions on multi-snapshotted fil o kern/94733 fs [smbfs] smbfs may cause double unlock o kern/93942 fs [vfs] [patch] panic: ufs_dirbad: bad dir (patch from D o kern/92272 fs [ffs] [hang] Filling a filesystem while creating a sna o kern/91134 fs [smbfs] [patch] Preserve access and modification time a kern/90815 fs [smbfs] [patch] SMBFS with character conversions somet o kern/88657 fs [smbfs] windows client hang when browsing a samba shar o kern/88555 fs [panic] ffs_blkfree: freeing free frag on AMD 64 o kern/88266 fs [smbfs] smbfs does not implement UIO_NOCOPY and sendfi o bin/87966 fs [patch] newfs(8): introduce -A flag for newfs to enabl o kern/87859 fs [smbfs] System reboot while umount smbfs. o kern/86587 fs [msdosfs] rm -r /PATH fails with lots of small files o bin/85494 fs fsck_ffs: unchecked use of cg_inosused macro etc. 
o kern/80088 fs [smbfs] Incorrect file time setting on NTFS mounted vi o bin/74779 fs Background-fsck checks one filesystem twice and omits o kern/73484 fs [ntfs] Kernel panic when doing `ls` from the client si o bin/73019 fs [ufs] fsck_ufs(8) cannot alloc 607016868 bytes for ino o kern/71774 fs [ntfs] NTFS cannot "see" files on a WinXP filesystem o bin/70600 fs fsck(8) throws files away when it can't grow lost+foun o kern/68978 fs [panic] [ufs] crashes with failing hard disk, loose po o kern/65920 fs [nwfs] Mounted Netware filesystem behaves strange o kern/65901 fs [smbfs] [patch] smbfs fails fsx write/truncate-down/tr o kern/61503 fs [smbfs] mount_smbfs does not work as non-root o kern/55617 fs [smbfs] Accessing an nsmb-mounted drive via a smb expo o kern/51685 fs [hang] Unbounded inode allocation causes kernel to loc o kern/51583 fs [nullfs] [patch] allow to work with devices and socket o kern/36566 fs [smbfs] System reboot with dead smb mount and umount o kern/33464 fs [ufs] soft update inconsistencies after system crash o bin/27687 fs fsck(8) wrapper is not properly passing options to fsc o kern/18874 fs [2TB] 32bit NFS servers export wrong negative values t 232 problems total. From owner-freebsd-fs@FreeBSD.ORG Mon Jun 27 11:18:28 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 19B381065724 for ; Mon, 27 Jun 2011 11:18:28 +0000 (UTC) (envelope-from inyaoo@gmail.com) Received: from mail-fx0-f44.google.com (mail-fx0-f44.google.com [209.85.161.44]) by mx1.freebsd.org (Postfix) with ESMTP id CE8D58FC2D for ; Mon, 27 Jun 2011 11:18:25 +0000 (UTC) Received: by fxe6 with SMTP id 6so1359023fxe.17 for ; Mon, 27 Jun 2011 04:18:25 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:from:to:subject:date:message-id:user-agent :mime-version:content-type; bh=0XKihE/h8aUNoCRPP4U5quhtTXd16dNDuj+Rg29nJec=; b=KiVY75/+lHMd+LvKIUP1ShiOq2udaSXmIEWpPSJTVCMPA9+pxApwsmpMZEfmxkf1F1 b++9+YpqnoNNSPpUFSEAaYh1bNCwWg39/hfxQpobG9vsPUIFaz6P71JGWyVYap/e2fmk EGME5cMqsUqXWNkx3M8j4IO+pF8/djFQv3b4E= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=from:to:subject:date:message-id:user-agent:mime-version :content-type; b=T5zFB+k6dByfNK5iKUgsHQJSMoh7A+mOQrMtWzQR4Q5B+21PG34tkBfvcK0c/9BL9f QiC5doBfKcggYEWy2L6qwiUJayooP8dTtMRqWc+jAniF6EwhDIaL2J5u4HXaMy1lmmNt tKfriug1LCU8cbZYQLUwREJ2Yx0j2w6d0+500= Received: by 10.223.13.13 with SMTP id z13mr4165794faz.114.1309171823224; Mon, 27 Jun 2011 03:50:23 -0700 (PDT) Received: from localhost (rockhall.torservers.net [77.247.181.163]) by mx.google.com with ESMTPS id p3sm2833360fan.45.2011.06.27.03.50.19 (version=SSLv3 cipher=OTHER); Mon, 27 Jun 2011 03:50:21 -0700 (PDT) From: Pan Tsu To: freebsd-fs@freebsd.org Date: Mon, 27 Jun 2011 14:50:08 +0400 Message-ID: <86y60nlfa7.fsf@gmail.com> User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/24.0.50 (berkeley-unix) MIME-Version: 1.0 Content-Type: text/plain Subject: vfs.zfs.write_limit_override is read-only on v28? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 27 Jun 2011 11:18:30 -0000 After v28 import the sysctl changed from RW to RDTUN while dsl_pool_tempreserve_space() haven't changed since v13 import. Why? 
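(As an aside, an RDTUN OID can still be adjusted, just only as a boot-time
tunable rather than at runtime; a minimal sketch follows, with the value
below purely illustrative:)

  # /boot/loader.conf -- write limit override in bytes; 0 means no override
  vfs.zfs.write_limit_override="1073741824"
  # after reboot, confirm the value took effect:
  sysctl vfs.zfs.write_limit_override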
From owner-freebsd-fs@FreeBSD.ORG Mon Jun 27 18:40:04 2011 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id A556B106564A; Mon, 27 Jun 2011 18:40:04 +0000 (UTC) (envelope-from gad@FreeBSD.org) Received: from smtp7.server.rpi.edu (smtp7.server.rpi.edu [128.113.2.227]) by mx1.freebsd.org (Postfix) with ESMTP id 1B2D88FC0A; Mon, 27 Jun 2011 18:40:03 +0000 (UTC) Received: from gilead.netel.rpi.edu (gilead.netel.rpi.edu [128.113.124.121]) by smtp7.server.rpi.edu (8.13.1/8.13.1) with ESMTP id p5RIdqDH016103; Mon, 27 Jun 2011 14:39:53 -0400 Message-ID: <4E08CE78.1050207@FreeBSD.org> Date: Mon, 27 Jun 2011 14:39:52 -0400 From: Garance A Drosehn User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.1.9) Gecko/20100722 Eudora/3.0.4 MIME-Version: 1.0 To: mdf@FreeBSD.org References: <20101201091203.GA3933@tops> <20110104175558.GR3140@deviant.kiev.zoral.com.ua> <20110120124108.GA32866@tops.skynet.lt> <4E027897.8080700@FreeBSD.org> <20110623064333.GA2823@tops> <20110623081140.GQ48734@deviant.kiev.zoral.com.ua> <4E03B8C4.6040800@FreeBSD.org> <20110623222630.GU48734@deviant.kiev.zoral.com.ua> <4E04FC7F.6090801@FreeBSD.org> In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit X-Bayes-Prob: 0.0001 (Score 0) X-RPI-SA-Score: 1.50 (*) [Hold at 12.00] COMBINED_FROM,RATWARE_GECKO_BUILD X-CanItPRO-Stream: outgoing X-Canit-Stats-ID: Bayes signature not available X-Scanned-By: CanIt (www . roaringpenguin . com) on 128.113.2.227 Cc: freebsd-fs@FreeBSD.org, Robert Watson Subject: Re: [rfc] 64-bit inode numbers X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 27 Jun 2011 18:40:04 -0000 On 6/24/11 6:21 PM, mdf@FreeBSD.org wrote: > On Fri, Jun 24, 2011 at 2:07 PM, Garance A Drosehn wrote: > >> The AFS cell at RPI has approximately 40,000 AFS volumes, and each >> volume should have it's own dev_t (IMO). >> >> Please realize that I do not mind if people felt that there was no >> need to increase the size of dev_t at this time, and that we should >> wait until we see more of a demand for increasing it. But given the >> project to increase the size of inode numbers, I thought this was a >> good time to also ask about dev_t. I ask about it every few years :-) >> > I don't see why 32 bits are anywhere close to becoming tight to > represent 40k unique values. Is there something wrong with how each > new dev_t is computed, that runs out of space quicker than this > implies? > > Thanks, > matthew The 40K values are just for the AFS volumes at RPI. AFS presents the entire world as a single filesystem, with the RPI cell as just one small part of that worldwide filesystem. The public CellServDB.master file lists 200 cells, where all of those cells would be available at the same time to any user sitting on a single machine which has AFS installed. And that's just the official public AFS cells. Organizations can (and do) have private AFS cells which are not part of the official public list. I mentioned the 40K volumes at RPI because someone said "I do not expect to see hundreds of thousands of mounts on a single system". My example was just to show that I can access 40 thousand AFS volumes in a single unix *command*, without even leaving RPI. 
That was not meant to show how many volumes are reachable under all of
/afs. Also, it was really easy for me to come up with the number of AFS
volumes in the RPI cell. I'd be reluctant to try to probe all of the
publicly-reachable AFS cells to come up with a real number for how many
AFS volumes there are in the world.

(Aside: actually there are more like 60K AFS volumes at RPI, but at least
20K of those are not readily accessible via unix commands, so I said 40K.
And most users at RPI couldn't even access 40K of those AFS volumes, but I
suspect I can because I'm an AFS admin.)

One reason RPI has so many AFS volumes is that each user has their own AFS
volume for their home directory. Given the way AFS works, that is a very,
very reasonable thing to do. In fact, it'd almost be stupid to *not* give
every user their own AFS volume.

Now imagine the WWW, where every single http://www.place.tld/~username on
the entire planet were on a different disk volume, and any single user on
a single system could access any combination of those disk volumes within
a single login session. The WWW is a world-wide web. AFS is meant as a
world-wide distributed file system. When working on a world-wide scale,
you hit larger numbers. I think that many people who have not worked with
AFS keep thinking of it the same way they think of NFS, but AFS was
designed with much larger-scale deployment in mind.

Again, I don't mind if we don't wish to tackle a larger dev_t right now,
and I definitely do not want the 64-bit ino_t project to get bogged down
with a debate over a larger dev_t. But I have been working with OpenAFS
for ten years now, and it is definitely true that a larger dev_t would be
helpful for that specific filesystem. And it may be that some other
solution would be even better, so I don't want to push this one too much.
-- Garance Alistair Drosehn = gad@gilead.netel.rpi.edu Senior Systems Programmer or gad@freebsd.org Rensselaer Polytechnic Institute or drosih@rpi.edu From owner-freebsd-fs@FreeBSD.ORG Mon Jun 27 23:44:16 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id A41BA106564A for ; Mon, 27 Jun 2011 23:44:16 +0000 (UTC) (envelope-from gosand1982@yahoo.com) Received: from nm3-vm1.bullet.mail.ne1.yahoo.com (nm3-vm1.bullet.mail.ne1.yahoo.com [98.138.91.53]) by mx1.freebsd.org (Postfix) with SMTP id 6729C8FC1B for ; Mon, 27 Jun 2011 23:44:16 +0000 (UTC) Received: from [98.138.90.52] by nm3.bullet.mail.ne1.yahoo.com with NNFMP; 27 Jun 2011 23:30:50 -0000 Received: from [98.138.87.10] by tm5.bullet.mail.ne1.yahoo.com with NNFMP; 27 Jun 2011 23:30:50 -0000 Received: from [127.0.0.1] by omp1010.mail.ne1.yahoo.com with NNFMP; 27 Jun 2011 23:30:50 -0000 X-Yahoo-Newman-Property: ymail-3 X-Yahoo-Newman-Id: 469577.46038.bm@omp1010.mail.ne1.yahoo.com Received: (qmail 45749 invoked by uid 60001); 27 Jun 2011 23:30:50 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024; t=1309217450; bh=Zju11+sUkKL4/rYQxtbBpq3BC1dPBr/aOO0PgWOf+VQ=; h=X-YMail-OSG:Received:X-Mailer:Message-ID:Date:From:Subject:To:MIME-Version:Content-Type; b=JT3T2Mg+iJ1sSgq8+GsO5RLyUG57fZ5uKy6xXPmkO/ttjCNN7A8Xxj34wjeWkC/bjlU+ZkKBojlc2Sd2dYArQZ7ginQ10px3sxKiTBsB+aPkw8v0dycEHDcR8l9xuJIPCqCIm1UfhvVi5lMx/T+ZpBIgcVLSrt9LJuMkIB8cnWQ= DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com; h=X-YMail-OSG:Received:X-Mailer:Message-ID:Date:From:Subject:To:MIME-Version:Content-Type; b=e/VBA3i8b/g5vjs7B4CLsb02rl762DhVr5uB9yKffFMY/68F9/SQNEookibvsH10rtUBAfubU/eegKlUP1N430kUKRftub9TqU0sMS4aWy3ARUDhFNa7HYZHwH6dgUnSye5O5ZxGM+ECGZDixaq/iyVLknw8c3mLuUIlQ8Ld9ik=; X-YMail-OSG: qcBcqzwVM1lPbZZPNDFr1dDc1hDw0ihWVQd77R9scqkWYNF nU62GG9dOwQ4qi5yIgluf22oe4Jolklcem80fDSircqcrzq9jOWf2catp_xz O3gO8Kjdk4Vj7W.jkEMjoj67CfVuyjvQEaRbKLJ2j36hFeny3u5olJSwh6p3 d3sVjIx0KI5op3NFCHRXaRr1JCPtGI2aWhVJ69OXMUrKtSBJpePYm1NnpGWB ELS3oLurO8N1HKqW.xNH9xXxIol0ALCoMDINQAUvarxyZxBrlSsSqmgjUtSJ u7SmITLrTFW5_NcnaFu3_qFf00Ef7t.bJ6zwTwadsqX6J5Orc8xc- Received: from [173.164.238.34] by web120014.mail.ne1.yahoo.com via HTTP; Mon, 27 Jun 2011 16:30:50 PDT X-Mailer: YahooMailRC/572 YahooMailWebService/0.8.112.307740 Message-ID: <1309217450.43651.YahooMailRC@web120014.mail.ne1.yahoo.com> Date: Mon, 27 Jun 2011 16:30:50 -0700 (PDT) From: George Sanders To: freebsd-fs@freebsd.org MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Subject: Improving old-fashioned UFS2 performance with lots of inodes... X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 27 Jun 2011 23:44:16 -0000 I have a very old-fashioned file server running a 12-disk raid6 array on a 3ware 9650SE. 2TB disks, so the size comes out to 18TB. I newfs the raw device with: newfs -i 65535 /dev/xxx and I would consider jumping to 131072 ... that way my fsck should not take any longer than it would with a smaller disk, since there are not any more total inodes. BUT ... with over 100 million inodes on the filesystem, things go slow. Overall throughput is fine, and I have no complaints there, but doing any kind of operations with the files is quite slow. Building a file list with rsync, or doing a cp, or a ln -s of a big dir tree, etc. 
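As a rough sketch of the arithmetic behind those -i values (using the
~18 TB figure above; the exact inode count also depends on newfs block and
fragment sizes):

  # bytes-per-inode 65536 vs. 131072 over ~18 TB, POSIX sh arithmetic:
  echo $((18000000000000 / 65536))     # ~274 million inodes
  echo $((18000000000000 / 131072))    # ~137 million inodes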
Let's assume that the architecture is not changing ... it's going to be FreeBSD 8.x, using UFS2, and raid6 on actual spinning (7200rpm) disks. What can I do to speed things up ? Right now I have these in my loader.conf: kern.maxdsiz="4096000000"# for fsck vm.kmem_size="1610612736"# for big rsyncs vm.kmem_size_max="1610612736"# for big rsyncs and I also set: vfs.ufs.dirhash_maxmem=64000000 but that's it. What bugs me is, the drives have 64M cache, and the 3ware controller has 224 MB (or so) but the system itself has 64 GB of RAM ... is there no way to use the RAM to increase performance ? I don't see a way to actually throw hardware resources at UFS2, other than faster disks which are uneconomical for this application ... Yes, 3ware write cache is turned on, and storsave is set to "balanced". Is there anything that can be done ? From owner-freebsd-fs@FreeBSD.ORG Tue Jun 28 01:08:31 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 3C6C91065694 for ; Tue, 28 Jun 2011 01:08:31 +0000 (UTC) (envelope-from jdc@koitsu.dyndns.org) Received: from qmta10.westchester.pa.mail.comcast.net (qmta10.westchester.pa.mail.comcast.net [76.96.62.17]) by mx1.freebsd.org (Postfix) with ESMTP id A40768FC14 for ; Tue, 28 Jun 2011 01:08:30 +0000 (UTC) Received: from omta23.westchester.pa.mail.comcast.net ([76.96.62.74]) by qmta10.westchester.pa.mail.comcast.net with comcast id 1Cva1h0051c6gX85AD8WUq; Tue, 28 Jun 2011 01:08:30 +0000 Received: from koitsu.dyndns.org ([67.180.84.87]) by omta23.westchester.pa.mail.comcast.net with comcast id 1D8Q1h00t1t3BNj3jD8T3k; Tue, 28 Jun 2011 01:08:29 +0000 Received: by icarus.home.lan (Postfix, from userid 1000) id 9E729102C19; Mon, 27 Jun 2011 18:08:22 -0700 (PDT) Date: Mon, 27 Jun 2011 18:08:22 -0700 From: Jeremy Chadwick To: George Sanders Message-ID: <20110628010822.GA41399@icarus.home.lan> References: <1309217450.43651.YahooMailRC@web120014.mail.ne1.yahoo.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <1309217450.43651.YahooMailRC@web120014.mail.ne1.yahoo.com> User-Agent: Mutt/1.5.21 (2010-09-15) Cc: freebsd-fs@freebsd.org Subject: Re: Improving old-fashioned UFS2 performance with lots of inodes... X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 28 Jun 2011 01:08:31 -0000 On Mon, Jun 27, 2011 at 04:30:50PM -0700, George Sanders wrote: > I have a very old-fashioned file server running a 12-disk raid6 array on a 3ware > 9650SE. 2TB disks, so the size comes out to 18TB. > > I newfs the raw device with: > > newfs -i 65535 /dev/xxx > > and I would consider jumping to 131072 ... that way my fsck should not take any > longer than it would with a smaller disk, since there are not any more total > inodes. > > BUT ... > > with over 100 million inodes on the filesystem, things go slow. Overall > throughput is fine, and I have no complaints there, but doing any kind of > operations with the files is quite slow. Building a file list with rsync, or > doing a cp, or a ln -s of a big dir tree, etc. > > Let's assume that the architecture is not changing ... it's going to be FreeBSD > 8.x, using UFS2, and raid6 on actual spinning (7200rpm) disks. > > What can I do to speed things up ? 
> > Right now I have these in my loader.conf: > > kern.maxdsiz="4096000000"# for fsck > vm.kmem_size="1610612736"# for big rsyncs > vm.kmem_size_max="1610612736"# for big rsyncs On what exact OS version? Please don't say "8.2", need to know 8.2-RELEASE, -STABLE, or what. You said "8.x" above, which is too vague. If 8.2-STABLE you should not be tuning vm.kmem_size_max at all, and you probably don't need to tune vm.kmem_size either. I also do not understand how vm.kmem_size would affect rsync, since rsync is a userland application. I imagine you'd want to adjust kern.maxdsiz and kern.dfldsiz (default dsiz). > and I also set: > > vfs.ufs.dirhash_maxmem=64000000 This tunable uses memory for a single directorie that has a huge amount of files in it; AFAIK it does not apply to "large directory structures" (as in directories within directories within directories). It's obvious you're just tinkering with random sysctls hoping to gain performance without really understanding what the sysctls do. :-) To see if you even need to increase that, try "sysctl -a | grep vfs.ufs.dirhash" and look at dirhash_mem vs. dirhash_maxmem, as well as dirhash_lowmemcount. > but that's it. > > What bugs me is, the drives have 64M cache, and the 3ware controller has 224 MB > (or so) but the system itself has 64 GB of RAM ... is there no way to use the > RAM to increase performance ? I don't see a way to actually throw hardware > resources at UFS2, other than faster disks which are uneconomical for this > application ... > > Yes, 3ware write cache is turned on, and storsave is set to "balanced". > > Is there anything that can be done ? The only thing I can think of on short notice is to have multiple filesystems (volumes) instead of one large 12TB one. This is pretty common in the commercial filer world. Regarding system RAM and UFS2: I have no idea, Kirk might have to comment on that. You could "make use" of system RAM for cache (ZFS ARC) if you were using ZFS instead of native UFS2. However, if the system has 64GB of RAM, you need to ask yourself why the system has that amount of RAM in the first place. For example, if the machine runs mysqld and is tuned to use a large amount of memory, you really don't ""have"" 64GB of RAM to play with, and thus wouldn't want mysqld and some filesystem caching model fighting over memory (e.g. paging/swapping). Overall my opinion is that you're making absolutely humongous filesystems and expecting the performance, fsck, etc. to be just like it would be for a 16MB filesystem. That isn't the case at all. ZFS may be more what you're looking for, especially since you're wanting to use system memory as a large filesystem/content cache. -- | Jeremy Chadwick jdc at parodius.com | | Parodius Networking http://www.parodius.com/ | | UNIX Systems Administrator Mountain View, CA, US | | Making life hard for others since 1977. 
PGP 4BD6C0CB | From owner-freebsd-fs@FreeBSD.ORG Tue Jun 28 20:42:33 2011 Return-Path: Delivered-To: fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 51508106564A for ; Tue, 28 Jun 2011 20:42:33 +0000 (UTC) (envelope-from gjb@onyx.glenbarber.us) Received: from glenbarber.us (onyx.glenbarber.us [199.48.134.227]) by mx1.freebsd.org (Postfix) with SMTP id 0313A8FC0C for ; Tue, 28 Jun 2011 20:42:32 +0000 (UTC) Received: (qmail 5003 invoked by uid 1001); 28 Jun 2011 16:32:28 -0400 Date: Tue, 28 Jun 2011 16:32:28 -0400 From: Glen Barber To: fs@FreeBSD.org Message-ID: <20110628203228.GA4957@onyx.glenbarber.us> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="qMm9M+Fa2AknHoGS" Content-Disposition: inline User-Agent: Mutt/1.5.21 (2010-09-15) Cc: Subject: [RFC] [patch] periodic status-zfs: list pools in daily emails X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 28 Jun 2011 20:42:33 -0000 --qMm9M+Fa2AknHoGS Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Hi, I'd like to get some feedback on a change I made to 404.status-zfs. I added a default behavior to list the pools on the system, in addition to checking if the pool is healthy. I think it might be useful for others to have this as the default behavior, for example on systems where dedup is enabled to track the dedup statistics over time. The output of the the script after my changes follows: Checking status of zfs pools: NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT zroot 456G 147G 309G 32% 1.00x ONLINE - zstore 928G 258G 670G 27% 1.00x ONLINE - all pools are healthy Feedback would be appreciated. A diff is attached. Regards, -- Glen Barber | gjb@FreeBSD.org FreeBSD Documentation Project --qMm9M+Fa2AknHoGS Content-Type: text/plain; charset=us-ascii Content-Disposition: attachment; filename="404.status-zfs.diff.txt" Index: 404.status-zfs =================================================================== --- 404.status-zfs (revision 223645) +++ 404.status-zfs (working copy) @@ -16,12 +16,14 @@ echo echo 'Checking status of zfs pools:' - out=`zpool status -x` - echo "$out" + lout=`zpool list` + echo "$lout" + sout=`zpool status -x` + echo "$sout" # zpool status -x always exits with 0, so we have to interpret its # output to see what's going on. 
- if [ "$out" = "all pools are healthy" \ - -o "$out" = "no pools available" ]; then + if [ "$sout" = "all pools are healthy" \ + -o "$sout" = "no pools available" ]; then rc=0 else rc=1 --qMm9M+Fa2AknHoGS-- From owner-freebsd-fs@FreeBSD.ORG Tue Jun 28 23:47:28 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 0EE71106564A for ; Tue, 28 Jun 2011 23:47:28 +0000 (UTC) (envelope-from jdc@koitsu.dyndns.org) Received: from qmta06.emeryville.ca.mail.comcast.net (qmta06.emeryville.ca.mail.comcast.net [76.96.30.56]) by mx1.freebsd.org (Postfix) with ESMTP id E85F78FC0C for ; Tue, 28 Jun 2011 23:47:27 +0000 (UTC) Received: from omta01.emeryville.ca.mail.comcast.net ([76.96.30.11]) by qmta06.emeryville.ca.mail.comcast.net with comcast id 1bkw1h0030EPchoA6bnRo3; Tue, 28 Jun 2011 23:47:25 +0000 Received: from koitsu.dyndns.org ([67.180.84.87]) by omta01.emeryville.ca.mail.comcast.net with comcast id 1bnf1h01F1t3BNj8MbnhhJ; Tue, 28 Jun 2011 23:47:42 +0000 Received: by icarus.home.lan (Postfix, from userid 1000) id 99FBD102C19; Tue, 28 Jun 2011 16:47:23 -0700 (PDT) Date: Tue, 28 Jun 2011 16:47:23 -0700 From: Jeremy Chadwick To: George Sanders Message-ID: <20110628234723.GA63965@icarus.home.lan> References: <1309217450.43651.YahooMailRC@web120014.mail.ne1.yahoo.com> <20110628010822.GA41399@icarus.home.lan> <1309302840.88674.YahooMailRC@web120004.mail.ne1.yahoo.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <1309302840.88674.YahooMailRC@web120004.mail.ne1.yahoo.com> User-Agent: Mutt/1.5.21 (2010-09-15) Cc: freebsd-fs@freebsd.org Subject: Re: Improving old-fashioned UFS2 performance with lots of inodes... X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 28 Jun 2011 23:47:28 -0000 On Tue, Jun 28, 2011 at 04:14:00PM -0700, George Sanders wrote: > > > with over 100 million inodes on the filesystem, things go slow. Overall > > > throughput is fine, and I have no complaints there, but doing any kind of > > > operations with the files is quite slow. Building a file list with rsync, or > > > doing a cp, or a ln -s of a big dir tree, etc. > > > > > > Let's assume that the architecture is not changing ... it's going to be FreeBSD > > > > > > 8.x, using UFS2, and raid6 on actual spinning (7200rpm) disks. > > > > > > What can I do to speed things up ? > > > > > > Right now I have these in my loader.conf: > > > > > > kern.maxdsiz="4096000000"# for fsck > > > vm.kmem_size="1610612736"# for big rsyncs > > > vm.kmem_size_max="1610612736"# for big rsyncs > > > > On what exact OS version? Please don't say "8.2", need to know > > 8.2-RELEASE, -STABLE, or what. You said "8.x" above, which is too > > vague. If 8.2-STABLE you should not be tuning vm.kmem_size_max at all, > > and you probably don't need to tune vm.kmem_size either. > > Ok, right now we are on 6.4-RELEASE, but it is our intention to move to > 8.2-RELEASE. Oh dear. I would recommend you focus solely on the complexity and pains of that upgrade and not about the "filesystem situation" here. The last thing you need to do is to try and "work in" some optimisations or tweaks while moving ahead by two major version releases. 
Take baby steps in this situation, otherwise there's going to be a mail about "problems with the upgrade but is it related to this tuning stuff we did or the filesystem problem or what happened and who changed what?" and you'll quickly lose track of everything. Re-visit the issue with UFS2 *after* you have done the upgrade. > If the kmem loader.conf options are no longer relevant in 8.2-STABLE, should I > assume that will also be the case when 8.3-RELEASE comes along ? Correct. > > I also do not understand how vm.kmem_size would affect rsync, since > > rsync is a userland application. I imagine you'd want to adjust > > kern.maxdsiz and kern.dfldsiz (default dsiz). > > Well, a huge rsync with 20+ million files dies with memory related errors, and > continued to do so until we upped the kmem values that high. We don't know > why, but we know it "fixed it". Again: I don't understand how adjusting vm.kmem_size or kmem_size_max would fix anything in regards to this. However, adjusting kern.maxdsiz I could see affecting this. It would indicate your rsync process becomes extremely large in size and exceeds maxdsiz, resulting in a segfault or some other anomalies sigN error. > > > and I also set: > > > > > > vfs.ufs.dirhash_maxmem=64000000 > > > > This tunable uses memory for a single directorie that has a huge amount > > of files in it; AFAIK it does not apply to "large directory structures" > > (as in directories within directories within directories). It's obvious > > you're just tinkering with random sysctls hoping to gain performance > > without really understanding what the sysctls do. :-) To see if you > > even need to increase that, try "sysctl -a | grep vfs.ufs.dirhash" and > > look at dirhash_mem vs. dirhash_maxmem, as well as dirhash_lowmemcount. > > No, we actually ALSO have huge directories, and we do indeed need this value. > > This is the one setting that we actually understand and have empirically > measured. Understood. > > The only thing I can think of on short notice is to have multiple > > filesystems (volumes) instead of one large 12TB one. This is pretty > > common in the commercial filer world. > > Ok, that is interesting - are you saying create multiple, smaller UFS > filesystems on the single large 12TB raid6 array ? Correct. Instead of one large 12TB filesystem, try four 3TB filesystems instead, or eight 2TB. > Or are you saying create a handful of smaller arrays ? We have to burn two > disks for every raid6 array we make, as I am sure you know, so we really can't split > it up into multiple arrays. Nah, not multiple arrays, just multiple filesystems on a single array. > We could, however, split the single raid6 array into multiple, formatted UFS2 > filesystems, but I don't understand how that would help with our performance ? > > Certainly fsck time would be much shorter, and we could bring up each filesystem > after it fsck'd, and then move to the next one ... but in terms of live performance, > how does splitting the array into multiple filesystems help ? The nature of a > raid array (as I understand it) would have us beating all 12 disks regardless of > which UFS filesystems were being used. > > Can you elaborate ? Please read everything I've written below before responding (e.g. do not respond in-line to this information). Actually, I think elaboration is needed on your part. :-) I say that with as much sincerity as possible. 
All you've stated in this thread so far is:

- "With over 100 million inodes on the filesystem, things go slow"
- "Building a list of files with rsync/using cp/ln -s in a very large
  directory tree" (does this mean a directory with a large amount of
  files in it?) "is slow"
- Some sort of concern over the speed of fsck
- You want to use more system memory/RAM for filesystem-level caching

http://lists.freebsd.org/pipermail/freebsd-fs/2011-June/011867.html

There's really nothing concrete provided here. Developers are going to
need hard data, and I imagine you're going to get a lot of push-back given
how you're using the filesystem. "Hard data" means you need to actually
start showing some actual output of your filesystems, explain your
directory structures, etc...

Generally speaking, the below are No-Nos on most UNIX filesystems. At
least these are things that I was taught very early on (early 90s), and I
imagine others were as well:

- Stick tons of files in a single directory
- Cram hundreds of millions of files on a single filesystem

I would recommend looking into tunefs(8) as well; the -e, -f, and -s
arguments will probably interest you.

Splitting things up into multiple filesystems would help with both the 1st
and 3rd items on the 4-item list. Solving the 2nd item is as simple as:
"then don't do that" (are you in biometrics per chance? Biometrics people
have a tendency to abuse filesystems horribly :-) ), and the 4th item I
can't really comment on (WRT UFS).

Items 1, 3, and 4 are things that use of ZFS would help with. I'm not sure
about the 2nd item. If I was in your situation, I would strongly recommend
considering moving to it *after* you finish your OS upgrades.

Furthermore, if you're going to consider using ZFS on FreeBSD, *please*
use RELENG_8 (8.2-STABLE) and not RELENG_8_2 (8.2-RELEASE). There have
been *major* improvements between those two tags. You can wait for
8.3-RELEASE if you want (which will obviously encapsulate those changes),
but it's your choice.

> > Regarding system RAM and UFS2: I have no idea, Kirk might have to
> > comment on that.
> >
> > You could "make use" of system RAM for cache (ZFS ARC) if you were using
> > ZFS instead of native UFS2. However, if the system has 64GB of RAM, you
> > need to ask yourself why the system has that amount of RAM in the first
> > place. For example, if the machine runs mysqld and is tuned to use a
> > large amount of memory, you really don't ""have"" 64GB of RAM to play
> > with, and thus wouldn't want mysqld and some filesystem caching model
> > fighting over memory (e.g. paging/swapping).
>
> Actually, the system RAM is there for the purpose of someday using ZFS -
> and for no other reason. However, it is realistically a few years away
> on our timeline, unfortunately, so for now we will use UFS2, and as I
> said ... it seems a shame that UFS2 cannot use system RAM for any good
> purpose...
>
> Or can it ? Anyone ?

Like I said: the only person (I know of) who could answer this would be
Kirk McKusick. I'm not well-versed in the inner workings and design of
filesystems; Kirk would be. I'm not sure who else "knows" UFS around here.

I think you need to figure out which of your concerns have priority.
Upgrading to ZFS (8.2-STABLE or later please) may solve all of your
performance issues; I wish I could say "it will" but I can't. If upgrading
to that isn't a priority (re: "a few years from now"), then you may have
to live with your current situation, albeit painfully.
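For the tunefs(8) and dirhash points above, a hedged sketch of how one
might inspect and adjust things (the device name and the -f/-s values are
hypothetical, and tunefs should be run on an unmounted or read-only
filesystem):

  # show current newfs/tunefs parameters and dirhash memory pressure
  tunefs -p /dev/da0p2
  sysctl vfs.ufs.dirhash_mem vfs.ufs.dirhash_maxmem vfs.ufs.dirhash_lowmemcount
  # example: tune expected average file size and files-per-directory
  tunefs -f 32768 -s 4096 /dev/da0p2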
-- | Jeremy Chadwick jdc at parodius.com | | Parodius Networking http://www.parodius.com/ | | UNIX Systems Administrator Mountain View, CA, US | | Making life hard for others since 1977. PGP 4BD6C0CB | From owner-freebsd-fs@FreeBSD.ORG Tue Jun 28 23:14:01 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id E42EF106566C for ; Tue, 28 Jun 2011 23:14:01 +0000 (UTC) (envelope-from gosand1982@yahoo.com) Received: from nm15-vm2.bullet.mail.ne1.yahoo.com (nm15-vm2.bullet.mail.ne1.yahoo.com [98.138.91.91]) by mx1.freebsd.org (Postfix) with SMTP id 9C4B68FC19 for ; Tue, 28 Jun 2011 23:14:01 +0000 (UTC) Received: from [98.138.90.56] by nm15.bullet.mail.ne1.yahoo.com with NNFMP; 28 Jun 2011 23:14:00 -0000 Received: from [98.138.87.5] by tm9.bullet.mail.ne1.yahoo.com with NNFMP; 28 Jun 2011 23:14:00 -0000 Received: from [127.0.0.1] by omp1005.mail.ne1.yahoo.com with NNFMP; 28 Jun 2011 23:14:00 -0000 X-Yahoo-Newman-Property: ymail-3 X-Yahoo-Newman-Id: 914026.8479.bm@omp1005.mail.ne1.yahoo.com Received: (qmail 7933 invoked by uid 60001); 28 Jun 2011 23:14:00 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024; t=1309302840; bh=JwY3tQV+veBPXTjHrOYzyIlafn1VqzOZZj+KyzZCn9s=; h=X-YMail-OSG:Received:X-Mailer:References:Message-ID:Date:From:Subject:To:Cc:In-Reply-To:MIME-Version:Content-Type; b=DjqZ/4V6c1Hr8vQBmjCXXMW2/YKLo3mR6qHAQX1NsDo5KIxUiv9/kiEKWSfKO9Ch9yCuq1M9wNU/ykP7oIfevNvGmcm02SoqIg6DEO70amNbYPhhTS46OFhqZVxkR8pe9EW60Yw0QMS+h2p5kigRahcCltumcDjKOs0usGgMdP4= DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com; h=X-YMail-OSG:Received:X-Mailer:References:Message-ID:Date:From:Subject:To:Cc:In-Reply-To:MIME-Version:Content-Type; b=ABExjLv3d35UqgjJuhPl9v6/AYaJjmzGte6xUw8bcKZ6UrjMV+XK5omxwnoCtzfTiRFVMcNN8ImJimc46iqlvFg8LhE6ZWmOKsPROG09ggHxf5c+DJQ0CqvffQJUpbsAVTqY8RCVjpETdOfyyBavC5MsyXSco64Q+688px4EVko=; X-YMail-OSG: ZygOmYAVM1mtdPiFpoiT2T6PNWPRLhHpgwUHQyirlOW.ZMr 1g4mdfdJRYpo_Q_1SPJyDE420Q9D83oZJFnAg0hCggLGVTr_PXsG51SzTZbI .RZh2gIMQTM_cFxgltTyfal_0_QeViRZyeThGSnbbMGG3PZRJ.JEk0DA8ktS N_OnlEJDkomppHLtnrCpEgFMkf73yl4Wyaum2vP8Q734D8ruaLhYaad0Fqz9 XjabE8Y8LjMbiyLrCEk7nqZJQWO3a.miR4X4eE3aiN2Yoo6mh.IS78YeZsDs rFpF_MbcE0GDeudgsGIRdFO_3XoogIo3xlb5RVqZfspwBxsRmfPc- Received: from [12.202.173.2] by web120004.mail.ne1.yahoo.com via HTTP; Tue, 28 Jun 2011 16:14:00 PDT X-Mailer: YahooMailRC/572 YahooMailWebService/0.8.112.307740 References: <1309217450.43651.YahooMailRC@web120014.mail.ne1.yahoo.com> <20110628010822.GA41399@icarus.home.lan> Message-ID: <1309302840.88674.YahooMailRC@web120004.mail.ne1.yahoo.com> Date: Tue, 28 Jun 2011 16:14:00 -0700 (PDT) From: George Sanders To: Jeremy Chadwick In-Reply-To: <20110628010822.GA41399@icarus.home.lan> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii X-Mailman-Approved-At: Wed, 29 Jun 2011 01:31:58 +0000 Cc: freebsd-fs@freebsd.org Subject: Re: Improving old-fashioned UFS2 performance with lots of inodes... X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 28 Jun 2011 23:14:02 -0000 Hello Jeremy, > > with over 100 million inodes on the filesystem, things go slow. Overall > > throughput is fine, and I have no complaints there, but doing any kind of > > operations with the files is quite slow. 
Building a file list with rsync, >or > > > doing a cp, or a ln -s of a big dir tree, etc. > > > > Let's assume that the architecture is not changing ... it's going to be >FreeBSD > > > 8.x, using UFS2, and raid6 on actual spinning (7200rpm) disks. > > > > What can I do to speed things up ? > > > > Right now I have these in my loader.conf: > > > > kern.maxdsiz="4096000000"# for fsck > > vm.kmem_size="1610612736"# for big rsyncs > > vm.kmem_size_max="1610612736"# for big rsyncs > > On what exact OS version? Please don't say "8.2", need to know > 8.2-RELEASE, -STABLE, or what. You said "8.x" above, which is too > vague. If 8.2-STABLE you should not be tuning vm.kmem_size_max at all, > and you probably don't need to tune vm.kmem_size either. Ok, right now we are on 6.4-RELEASE, but it is our intention to move to 8.2-RELEASE. If the kmem loader.conf options are no longer relevant in 8.2-STABLE, should I assume that will also be the case when 8.3-RELEASE comes along ? > I also do not understand how vm.kmem_size would affect rsync, since > rsync is a userland application. I imagine you'd want to adjust > kern.maxdsiz and kern.dfldsiz (default dsiz). Well, a huge rsync with 20+ million files dies with memory related errors, and continued to do so until we upped the kmem values that high. We don't know why, but we know it "fixed it". > > and I also set: > > > > vfs.ufs.dirhash_maxmem=64000000 > > This tunable uses memory for a single directorie that has a huge amount > of files in it; AFAIK it does not apply to "large directory structures" > (as in directories within directories within directories). It's obvious > you're just tinkering with random sysctls hoping to gain performance > without really understanding what the sysctls do. :-) To see if you > even need to increase that, try "sysctl -a | grep vfs.ufs.dirhash" and > look at dirhash_mem vs. dirhash_maxmem, as well as dirhash_lowmemcount. No, we actually ALSO have huge directories, and we do indeed need this value. This is the one setting that we actually understand and have empirically measured. > The only thing I can think of on short notice is to have multiple > filesystems (volumes) instead of one large 12TB one. This is pretty > common in the commercial filer world. Ok, that is interesting - are you saying create multiple, smaller UFS filesystems on the single large 12TB raid6 array ? Or are you saying create a handful of smaller arrays ? We have to burn two disks for every raid6 array we make, as I am sure you know, so we really can't split it up into multiple arrays. We could, however, split the single raid6 array into multiple, formatted UFS2 filesystems, but I don't understand how that would help with our performance ? Certainly fsck time would be much shorter, and we could bring up each filesystem after it fsck'd, and then move to the next one ... but in terms of live performance, how does splitting the array into multiple filesystems help ? The nature of a raid array (as I understand it) would have us beating all 12 disks regardless of which UFS filesystems were being used. Can you elaborate ? > Regarding system RAM and UFS2: I have no idea, Kirk might have to > comment on that. > > You could "make use" of system RAM for cache (ZFS ARC) if you were using > ZFS instead of native UFS2. However, if the system has 64GB of RAM, you > need to ask yourself why the system has that amount of RAM in the first > place. 
For example, if the machine runs mysqld and is tuned to use a > large amount of memory, you really don't ""have"" 64GB of RAM to play > with, and thus wouldn't want mysqld and some filesystem caching model > fighting over memory (e.g. paging/swapping). Actually, the system RAM is there for the purpose of someday using ZFS - and for no other reason. However, it is realistically a few years away on our timeline, unfortunately, so for now we will use UFS2, and as I said ... it seems a shame that UFS2 cannot use system RAM for any good purpose... Or can it ? Anyone ? From owner-freebsd-fs@FreeBSD.ORG Wed Jun 29 07:11:18 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id DC1A8106564A for ; Wed, 29 Jun 2011 07:11:18 +0000 (UTC) (envelope-from edhoprima@gmail.com) Received: from mail-bw0-f54.google.com (mail-bw0-f54.google.com [209.85.214.54]) by mx1.freebsd.org (Postfix) with ESMTP id 5CF178FC1A for ; Wed, 29 Jun 2011 07:11:18 +0000 (UTC) Received: by bwa20 with SMTP id 20so1129447bwa.13 for ; Wed, 29 Jun 2011 00:11:17 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=mime-version:sender:from:date:x-google-sender-auth:message-id :subject:to:content-type; bh=WXU8fOWYal32aEJEGq8yNlAQ8/xF77PIxqSzjR57Y6Q=; b=YqUM7561OudAb1UVLzmDD2yLX1UIhEmwei6xiCu+mpB5vTFGPNwk1AttvXP7jXLbqz SjKJC383nf0+gwhdnkMt2CEAo0D6VWcfngfjk3+WlxHf7RfmtwdR/FFcyUa3+ceCBPY8 pxwVpBmy+MkjmWzgRSadvqdeDjQmeEdQnUDTE= Received: by 10.204.136.217 with SMTP id s25mr400447bkt.13.1309331477155; Wed, 29 Jun 2011 00:11:17 -0700 (PDT) MIME-Version: 1.0 Sender: edhoprima@gmail.com Received: by 10.204.119.197 with HTTP; Wed, 29 Jun 2011 00:10:57 -0700 (PDT) From: Edho P Arief Date: Wed, 29 Jun 2011 14:10:57 +0700 X-Google-Sender-Auth: hhuoYyo7q-5hjP9gZ0mUT4wngsM Message-ID: To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=UTF-8 Subject: zpool raidz2 missing space? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 29 Jun 2011 07:11:18 -0000 My zpool seems to be missing ~500G of space. One of the disk originally sized at around 1.65T which probably caused it but I've replaced the partition and it should show full 4*1.8T (~7.2T) but it still shows old capacity (4*1.65T ~ 6.6T). What should be done? I've tried export/import cycle but the result is same. 
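[Illustrative aside, not part of the original message: when a raidz member is replaced by a larger partition, the pool keeps reporting the old size until the vdev is reopened. On a v28 pool the usual way to claim the new space is the autoexpand property together with an explicit expand of the replaced device, along these lines (the gptid is one of the members from the output below, assumed here to be the replaced ad4p5):

    zpool set autoexpand=on dpool
    zpool online -e dpool gptid/0dc1601d-9f95-11e0-9a98-0030678cf5c1
    zpool list dpool

A reboot, which also reopens the devices, has the same effect, as the follow-up later in this thread shows.]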
Here's the output for various commands: [root@einhart ~]# zpool status pool: dpool state: ONLINE scan: scrub repaired 320K in 4h59m with 0 errors on Tue Jun 28 13:47:32 2011 config: NAME STATE READ WRITE CKSUM dpool ONLINE 0 0 0 raidz2-0 ONLINE 0 0 0 gptid/fe13fc94-9bfe-11e0-bd6e-0030678cf5c1 ONLINE 0 0 0 gptid/0dc1601d-9f95-11e0-9a98-0030678cf5c1 ONLINE 0 0 0 gptid/1e76f2ad-9d5d-11e0-997b-0030678cf5c1 ONLINE 0 0 0 gptid/8d23200a-9d5c-11e0-997b-0030678cf5c1 ONLINE 0 0 0 errors: No known data errors [root@einhart ~]# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT dpool 6.62T 4.61T 2.02T 69% 1.00x ONLINE - [root@einhart ~]# glabel status Name Status Components gptid/0dc1601d-9f95-11e0-9a98-0030678cf5c1 N/A ad4p5 gptid/1b214d9b-9f95-11e0-9a98-0030678cf5c1 N/A ad4p9 gptid/1e76f2ad-9d5d-11e0-997b-0030678cf5c1 N/A ad6p5 gptid/2570c3bb-9d5d-11e0-997b-0030678cf5c1 N/A ad6p9 label/swap1 N/A mirror/gm0b1 gptid/8d23200a-9d5c-11e0-997b-0030678cf5c1 N/A ad8p5 gptid/99ee1556-9d5c-11e0-997b-0030678cf5c1 N/A ad8p9 gptid/fe13fc94-9bfe-11e0-bd6e-0030678cf5c1 N/A ad10p5 gptid/128af023-9bff-11e0-bd6e-0030678cf5c1 N/A ad10p9 ufs/root0 N/A mirror/gm0a label/swap0 N/A mirror/gm0b0 ufs/home0 N/A mirror/gm0d gptid/94d650c8-a05d-11e0-b636-0030678cf5c1 N/A ad4p1 gptid/025527eb-9d5d-11e0-997b-0030678cf5c1 N/A ad6p1 gptid/6b426779-9d5c-11e0-997b-0030678cf5c1 N/A ad8p1 gptid/31d15ce9-9bfe-11e0-bd6e-0030678cf5c1 N/A ad10p1 [root@einhart ~]# gpart show => 34 3907029101 ad4 GPT (1.8T) 34 990 1 freebsd-boot (495k) 1024 1024 - free - (512k) 2048 20971520 2 freebsd-ufs (10G) 20973568 16777216 3 freebsd-swap (8.0G) 37750784 50331648 4 freebsd-ufs (24G) 88082432 3818930304 5 freebsd-zfs (1.8T) 3907012736 16399 9 freebsd-swap (8.0M) => 34 3907029101 ad6 GPT (1.8T) 34 990 1 freebsd-boot (495k) 1024 1024 - free - (512k) 2048 20971520 2 freebsd-ufs (10G) 20973568 16777216 3 freebsd-swap (8.0G) 37750784 50331648 4 freebsd-ufs (24G) 88082432 3818930304 5 freebsd-zfs (1.8T) 3907012736 16399 9 freebsd-swap (8.0M) => 34 3907029101 ad8 GPT (1.8T) 34 990 1 freebsd-boot (495k) 1024 1024 - free - (512k) 2048 20971520 2 freebsd-ufs (10G) 20973568 16777216 3 freebsd-swap (8.0G) 37750784 50331648 4 freebsd-ufs (24G) 88082432 3818930304 5 freebsd-zfs (1.8T) 3907012736 16399 9 freebsd-swap (8.0M) => 34 3907029101 ad10 GPT (1.8T) 34 990 1 freebsd-boot (495k) 1024 1024 - free - (512k) 2048 20971520 2 freebsd-ufs (10G) 20973568 16777216 3 freebsd-swap (8.0G) 37750784 50331648 4 freebsd-ufs (24G) 88082432 3818930304 5 freebsd-zfs (1.8T) 3907012736 16399 9 freebsd-swap (8.0M) [root@einhart ~]# df -h /data Filesystem Size Used Avail Capacity Mounted on dpool/data 3.2T 2.2T 948G 71% /data [root@einhart ~]# zfs list NAME USED AVAIL REFER MOUNTPOINT dpool 2.23T 949G 174K legacy dpool/data 2.23T 949G 2.23T legacy dpool/data/documents 888M 949G 888M legacy dpool/jails 267M 949G 186K legacy dpool/jails/debian 267M 949G 267M legacy dpool/ports-distfiles 2.61G 949G 2.61G /usr/ports/distfiles dpool/ports-tmp 90.5M 949G 90.5M /.ports-tmp dpool/src.cvs 562M 949G 562M /usr/src.cvs dpool/srv 31.0M 949G 31.0M legacy dpool/usr.obj 221K 949G 209K /usr/obj dpool/usr.src 2.22G 949G 2.22G /usr/src [root@einhart ~]# zdb dpool: version: 28 name: 'dpool' state: 0 txg: 407886 pool_guid: 5265065684459342039 hostid: 4266313884 hostname: 'einhart' vdev_children: 1 vdev_tree: type: 'root' id: 0 guid: 5265065684459342039 children[0]: type: 'raidz' id: 0 guid: 10113259324866791715 nparity: 2 metaslab_array: 23 metaslab_shift: 36 ashift: 12 asize: 7314369150976 is_log: 
0 children[0]: type: 'disk' id: 0 guid: 850395506991012944 path: '/dev/gptid/fe13fc94-9bfe-11e0-bd6e-0030678cf5c1' phys_path: '/dev/gptid/fe13fc94-9bfe-11e0-bd6e-0030678cf5c1' whole_disk: 0 DTL: 164 children[1]: type: 'disk' id: 1 guid: 11140108939464482570 path: '/dev/gptid/0dc1601d-9f95-11e0-9a98-0030678cf5c1' phys_path: '/dev/gptid/0dc1601d-9f95-11e0-9a98-0030678cf5c1' whole_disk: 1 DTL: 173 children[2]: type: 'disk' id: 2 guid: 2470764073478818097 path: '/dev/gptid/1e76f2ad-9d5d-11e0-997b-0030678cf5c1' phys_path: '/dev/gptid/1e76f2ad-9d5d-11e0-997b-0030678cf5c1' whole_disk: 0 DTL: 168 children[3]: type: 'disk' id: 3 guid: 3492436401681256292 path: '/dev/gptid/8d23200a-9d5c-11e0-997b-0030678cf5c1' phys_path: '/dev/gptid/8d23200a-9d5c-11e0-997b-0030678cf5c1' whole_disk: 0 DTL: 165 -- O< ascii ribbon campaign - stop html mail - www.asciiribbon.org From owner-freebsd-fs@FreeBSD.ORG Wed Jun 29 08:44:40 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id EA5B61065673 for ; Wed, 29 Jun 2011 08:44:40 +0000 (UTC) (envelope-from edhoprima@gmail.com) Received: from mail-bw0-f54.google.com (mail-bw0-f54.google.com [209.85.214.54]) by mx1.freebsd.org (Postfix) with ESMTP id 6996C8FC12 for ; Wed, 29 Jun 2011 08:44:40 +0000 (UTC) Received: by bwa20 with SMTP id 20so1202770bwa.13 for ; Wed, 29 Jun 2011 01:44:39 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=mime-version:sender:in-reply-to:references:from:date :x-google-sender-auth:message-id:subject:to:content-type; bh=gLpbEEG/RCZT+fExVzhp15MVJiJbcQBavm+kRAqwTdI=; b=lOpn86sMjxl5Vcybp7ZHlfH4u/6XY2XUpIxHoVmdiKPqt9olBg958mAQEr7Vwmodq/ WwtEcoGZJNQuk/aQ0anvSFBW97RndioeOBIaFDGxUV+QAervxR3vVW/cGD+2BCkEUal/ sExn/vE4TZsqkHYSMVdg1SfQNzrMTlZH40vKI= Received: by 10.205.83.133 with SMTP id ag5mr474425bkc.121.1309337079101; Wed, 29 Jun 2011 01:44:39 -0700 (PDT) MIME-Version: 1.0 Sender: edhoprima@gmail.com Received: by 10.204.119.197 with HTTP; Wed, 29 Jun 2011 01:44:19 -0700 (PDT) In-Reply-To: References: From: Edho P Arief Date: Wed, 29 Jun 2011 15:44:19 +0700 X-Google-Sender-Auth: uSBRDIqxDcTeQ8I1mhGg5JVVcLg Message-ID: To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=UTF-8 Subject: Re: zpool raidz2 missing space? X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 29 Jun 2011 08:44:41 -0000 On Wed, Jun 29, 2011 at 2:10 PM, Edho P Arief wrote: > My zpool seems to be missing ~500G of space. One of the disk > originally sized at around 1.65T which probably caused it but I've > replaced the partition and it should show full 4*1.8T (~7.2T) but it > still shows old capacity (4*1.65T ~ 6.6T). > > What should be done? I've tried export/import cycle but the result is same. > sorry, seems like reboot cycle solved it. 
[root@einhart ~]# zpool status pool: dpool state: ONLINE scan: scrub in progress since Wed Jun 29 15:35:22 2011 24.4G scanned out of 4.61T at 55.4M/s, 24h6m to go 0 repaired, 0.52% done config: NAME STATE READ WRITE CKSUM dpool ONLINE 0 0 0 raidz2-0 ONLINE 0 0 0 gptid/fe13fc94-9bfe-11e0-bd6e-0030678cf5c1 ONLINE 0 0 0 gptid/0dc1601d-9f95-11e0-9a98-0030678cf5c1 ONLINE 0 0 0 gptid/1e76f2ad-9d5d-11e0-997b-0030678cf5c1 ONLINE 0 0 0 gptid/8d23200a-9d5c-11e0-997b-0030678cf5c1 ONLINE 0 0 0 errors: No known data errors [root@einhart ~]# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT dpool 7.06T 4.61T 2.45T 65% 1.00x ONLINE - [root@einhart ~]# glabel status Name Status Components gptid/94d650c8-a05d-11e0-b636-0030678cf5c1 N/A ad4p1 gptid/0dc1601d-9f95-11e0-9a98-0030678cf5c1 N/A ad4p5 gptid/1b214d9b-9f95-11e0-9a98-0030678cf5c1 N/A ad4p9 gptid/025527eb-9d5d-11e0-997b-0030678cf5c1 N/A ad6p1 gptid/1e76f2ad-9d5d-11e0-997b-0030678cf5c1 N/A ad6p5 gptid/2570c3bb-9d5d-11e0-997b-0030678cf5c1 N/A ad6p9 gptid/6b426779-9d5c-11e0-997b-0030678cf5c1 N/A ad8p1 gptid/8d23200a-9d5c-11e0-997b-0030678cf5c1 N/A ad8p5 gptid/99ee1556-9d5c-11e0-997b-0030678cf5c1 N/A ad8p9 gptid/31d15ce9-9bfe-11e0-bd6e-0030678cf5c1 N/A ad10p1 gptid/fe13fc94-9bfe-11e0-bd6e-0030678cf5c1 N/A ad10p5 gptid/128af023-9bff-11e0-bd6e-0030678cf5c1 N/A ad10p9 ufs/root0 N/A mirror/gm0a label/swap0 N/A stripe/gs0b ufs/home0 N/A stripe/gs0d [root@einhart ~]# zfs list NAME USED AVAIL REFER MOUNTPOINT dpool 2.23T 1.14T 174K legacy dpool/data 2.23T 1.14T 2.23T legacy dpool/data/documents 888M 1.14T 888M legacy dpool/jails 267M 1.14T 186K legacy dpool/jails/debian 267M 1.14T 267M legacy dpool/ports-distfiles 2.61G 1.14T 2.61G /usr/ports/distfiles dpool/ports-tmp 90.6M 1.14T 90.6M /.ports-tmp dpool/src.cvs 562M 1.14T 562M /usr/src.cvs dpool/srv 31.0M 1.14T 31.0M legacy dpool/usr.obj 221K 1.14T 209K /usr/obj dpool/usr.src 2.22G 1.14T 2.22G /usr/src [root@einhart ~]# df -h Filesystem Size Used Avail Capacity Mounted on /dev/ufs/root0 9.7G 6.5G 2.5G 72% / devfs 1.0k 1.0k 0B 100% /dev /dev/ufs/home0 46G 755M 42G 2% /usr/home procfs 4.0k 4.0k 0B 100% /proc linprocfs 4.0k 4.0k 0B 100% /compat/linux/proc dpool/data 3.4T 2.2T 1.1T 66% /data dpool/srv 1.1T 31M 1.1T 0% /srv dpool/data/documents 1.1T 888M 1.1T 0% /data/documents dpool/jails 1.1T 186k 1.1T 0% /jails dpool/jails/debian 1.1T 266M 1.1T 0% /jails/debian dpool/ports-tmp 1.1T 90M 1.1T 0% /.ports-tmp dpool/usr.obj 1.1T 209k 1.1T 0% /usr/obj dpool/ports-distfiles 1.1T 2.6G 1.1T 0% /usr/ports/distfiles dpool/usr.src 1.1T 2.2G 1.1T 0% /usr/src dpool/src.cvs 1.1T 562M 1.1T 0% /usr/src.cvs /data/documents 1.1T 888M 1.1T 0% /usr/home/edho/Documents /data/downloads 3.4T 2.2T 1.1T 66% /usr/home/edho/Downloads [root@einhart ~]# zdb dpool: version: 28 name: 'dpool' state: 0 txg: 407886 pool_guid: 5265065684459342039 hostid: 4266313884 hostname: 'einhart' vdev_children: 1 vdev_tree: type: 'root' id: 0 guid: 5265065684459342039 children[0]: type: 'raidz' id: 0 guid: 10113259324866791715 nparity: 2 metaslab_array: 23 metaslab_shift: 36 ashift: 12 asize: 7314369150976 is_log: 0 children[0]: type: 'disk' id: 0 guid: 850395506991012944 path: '/dev/gptid/fe13fc94-9bfe-11e0-bd6e-0030678cf5c1' phys_path: '/dev/gptid/fe13fc94-9bfe-11e0-bd6e-0030678cf5c1' whole_disk: 0 DTL: 164 children[1]: type: 'disk' id: 1 guid: 11140108939464482570 path: '/dev/gptid/0dc1601d-9f95-11e0-9a98-0030678cf5c1' phys_path: '/dev/gptid/0dc1601d-9f95-11e0-9a98-0030678cf5c1' whole_disk: 1 DTL: 173 children[2]: type: 'disk' id: 2 guid: 
2470764073478818097 path: '/dev/gptid/1e76f2ad-9d5d-11e0-997b-0030678cf5c1' phys_path: '/dev/gptid/1e76f2ad-9d5d-11e0-997b-0030678cf5c1' whole_disk: 0 DTL: 168 children[3]: type: 'disk' id: 3 guid: 3492436401681256292 path: '/dev/gptid/8d23200a-9d5c-11e0-997b-0030678cf5c1' phys_path: '/dev/gptid/8d23200a-9d5c-11e0-997b-0030678cf5c1' whole_disk: 0 DTL: 165 -- O< ascii ribbon campaign - stop html mail - www.asciiribbon.org From owner-freebsd-fs@FreeBSD.ORG Wed Jun 29 09:03:02 2011 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 96A921065674; Wed, 29 Jun 2011 09:03:02 +0000 (UTC) (envelope-from alexander@leidinger.net) Received: from mail.ebusiness-leidinger.de (mail.ebusiness-leidinger.de [217.11.53.44]) by mx1.freebsd.org (Postfix) with ESMTP id 2B3A38FC08; Wed, 29 Jun 2011 09:03:01 +0000 (UTC) Received: from outgoing.leidinger.net (p4FC4623A.dip.t-dialin.net [79.196.98.58]) by mail.ebusiness-leidinger.de (Postfix) with ESMTPSA id DD3B184400D; Wed, 29 Jun 2011 10:46:35 +0200 (CEST) Received: from webmail.leidinger.net (webmail.Leidinger.net [IPv6:fd73:10c7:2053:1::3:102]) by outgoing.leidinger.net (Postfix) with ESMTP id 32EB12012; Wed, 29 Jun 2011 10:46:33 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=Leidinger.net; s=outgoing-alex; t=1309337193; bh=TdTgZGsh1g8dYBTrtKV8yMyseE9wUAfYHEfJgW0g4jw=; h=Message-ID:Date:From:To:Cc:Subject:References:In-Reply-To: MIME-Version:Content-Type:Content-Transfer-Encoding; b=OA4QJmOi49QXvPN7nnXxdhj12226VFhJy0x6lJyHEoMtADAQIdekqecWnPjcJmd5N ZSO19qf5TqaloVsDe/sx20/TxKgS8Be/PTE6duuFdppej13Tr1SmmxKsBMnl/Grcku /HqpjCyJI/zjGenI10cp68e9nn9cM8kp9hQjseNPw7eLfbSfxEmwE8zjxI0lrrfkQN 3qxgiL8K+3NpDmabkU3JXCcrYivV/huRRWbK/oBOiCyrSZRxwFN1lmlNiKnv/TNVMP N4DAzKjBWDzuwzhgL34yaX4gfpMhnwffCPKiIjuNSTmuIsbll8sZIDrVFH9wIDHBwU pdBGBqPLKBsnQ== Received: (from www@localhost) by webmail.leidinger.net (8.14.4/8.14.4/Submit) id p5T8kXkY017871; Wed, 29 Jun 2011 10:46:33 +0200 (CEST) (envelope-from Alexander@Leidinger.net) X-Authentication-Warning: webmail.leidinger.net: www set sender to Alexander@Leidinger.net using -f Received: from pslux.ec.europa.eu (pslux.ec.europa.eu [158.169.9.14]) by webmail.leidinger.net (Horde Framework) with HTTP; Wed, 29 Jun 2011 10:46:33 +0200 Message-ID: <20110629104633.26824evikzh8tgtl@webmail.leidinger.net> Date: Wed, 29 Jun 2011 10:46:33 +0200 From: Alexander Leidinger To: Glen Barber References: <20110628203228.GA4957@onyx.glenbarber.us> In-Reply-To: <20110628203228.GA4957@onyx.glenbarber.us> MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8; DelSp="Yes"; format="flowed" Content-Disposition: inline Content-Transfer-Encoding: 7bit User-Agent: Dynamic Internet Messaging Program (DIMP) H3 (1.1.6) X-EBL-MailScanner-Information: Please contact the ISP for more information X-EBL-MailScanner-ID: DD3B184400D.A22FE X-EBL-MailScanner: Found to be clean X-EBL-MailScanner-SpamCheck: not spam, spamhaus-ZEN, SpamAssassin (not cached, score=-0.023, required 6, autolearn=disabled, DKIM_SIGNED 0.10, DKIM_VALID -0.10, DKIM_VALID_AU -0.10, TW_ZF 0.08) X-EBL-MailScanner-From: alexander@leidinger.net X-EBL-MailScanner-Watermark: 1309941996.43609@nI0f0mRNFD5V9X5MRkG4Zg X-EBL-Spam-Status: No Cc: fs@FreeBSD.org Subject: Re: [RFC] [patch] periodic status-zfs: list pools in daily emails X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: 
List-Subscribe: , X-List-Received-Date: Wed, 29 Jun 2011 09:03:02 -0000 Quoting Glen Barber (from Tue, 28 Jun 2011 16:32:28 -0400): > Hi, > > I'd like to get some feedback on a change I made to 404.status-zfs. > > I added a default behavior to list the pools on the system, in addition to > checking if the pool is healthy. I think it might be useful for others to > have this as the default behavior, for example on systems where dedup is > enabled to track the dedup statistics over time. I do not think this is a bad idea to be able to see the pools... but IMHO it should be configurable (no strong opinion about "enabled or disabled by default"). > The output of the the script after my changes follows: Info to others: this is the default output, there is no special option to track DEDUP. > Checking status of zfs pools: > NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT > zroot 456G 147G 309G 32% 1.00x ONLINE - > zstore 928G 258G 670G 27% 1.00x ONLINE - > all pools are healthy > > Feedback would be appreciated. A diff is attached. Did you test it with an unhealthy pool? If yes, how does the result look like? For the healthy case we have redundant info (but as the brain is good at pattern matching, I would object to replace the status with the list output, in case someone would suggest this). In the unhealthy case we will surely have more info, my inquiry about it is if an empty line between the list and the status would make it more readable or not. Bye, Alexander. -- NOTICE: -- THE ELEVATORS WILL BE OUT OF ORDER TODAY -- (The nearest working elevator is in the building across the street.) http://www.Leidinger.net Alexander @ Leidinger.net: PGP ID = B0063FE7 http://www.FreeBSD.org netchild @ FreeBSD.org : PGP ID = 72077137 From owner-freebsd-fs@FreeBSD.ORG Wed Jun 29 11:04:17 2011 Return-Path: Delivered-To: fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id A025E1065670 for ; Wed, 29 Jun 2011 11:04:17 +0000 (UTC) (envelope-from gjb@FreeBSD.org) Received: from glenbarber.us (onyx.glenbarber.us [199.48.134.227]) by mx1.freebsd.org (Postfix) with SMTP id 664088FC14 for ; Wed, 29 Jun 2011 11:04:17 +0000 (UTC) Received: (qmail 41174 invoked by uid 0); 29 Jun 2011 06:37:34 -0400 Received: from unknown (HELO schism.local) (gjb@76.124.49.145) by 0 with SMTP; 29 Jun 2011 06:37:34 -0400 Message-ID: <4E0B006C.8050000@FreeBSD.org> Date: Wed, 29 Jun 2011 06:37:32 -0400 From: Glen Barber User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.18) Gecko/20110616 Thunderbird/3.1.11 MIME-Version: 1.0 To: Alexander Leidinger References: <20110628203228.GA4957@onyx.glenbarber.us> <20110629104633.26824evikzh8tgtl@webmail.leidinger.net> In-Reply-To: <20110629104633.26824evikzh8tgtl@webmail.leidinger.net> X-Enigmail-Version: 1.1.1 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 7bit Cc: fs@FreeBSD.org Subject: Re: [RFC] [patch] periodic status-zfs: list pools in daily emails X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 29 Jun 2011 11:04:17 -0000 Hi Alexander, On 6/29/11 4:46 AM, Alexander Leidinger wrote: >> I added a default behavior to list the pools on the system, in >> addition to >> checking if the pool is healthy. 
I think it might be useful for >> others to >> have this as the default behavior, for example on systems where dedup is >> enabled to track the dedup statistics over time. > > I do not think this is a bad idea to be able to see the pools... but > IMHO it should be configurable (no strong opinion about "enabled or > disabled by default"). > Agreed. I can add this in. >> The output of the the script after my changes follows: > > Info to others: this is the default output, there is no special option > to track DEDUP. > >> Checking status of zfs pools: >> NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT >> zroot 456G 147G 309G 32% 1.00x ONLINE - >> zstore 928G 258G 670G 27% 1.00x ONLINE - >> all pools are healthy >> >> Feedback would be appreciated. A diff is attached. > > Did you test it with an unhealthy pool? If yes, how does the result look > like? > I have not, yet. I can do this later today by breaking a mirror. > For the healthy case we have redundant info (but as the brain is good at > pattern matching, I would object to replace the status with the list > output, in case someone would suggest this). In the unhealthy case we > will surely have more info, my inquiry about it is if an empty line > between the list and the status would make it more readable or not. > I will reply later today with of the script with an unhealthy pool, and will make listing the pools configurable. I imagine an empty line would certainly make it more readable in either case. I would be reluctant to replace 'status' output with 'list' output for healthy pools mostly to avoid headaches for people parsing their daily email, specifically looking for (or missing) 'all pools are healthy.' Regards, -- Glen Barber | gjb@FreeBSD.org FreeBSD Documentation Project From owner-freebsd-fs@FreeBSD.ORG Wed Jun 29 11:19:18 2011 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 1E0FC1065673 for ; Wed, 29 Jun 2011 11:19:18 +0000 (UTC) (envelope-from jdc@koitsu.dyndns.org) Received: from qmta07.emeryville.ca.mail.comcast.net (qmta07.emeryville.ca.mail.comcast.net [76.96.30.64]) by mx1.freebsd.org (Postfix) with ESMTP id 04A598FC13 for ; Wed, 29 Jun 2011 11:19:17 +0000 (UTC) Received: from omta12.emeryville.ca.mail.comcast.net ([76.96.30.44]) by qmta07.emeryville.ca.mail.comcast.net with comcast id 1nJy1h0060x6nqcA7nKFwk; Wed, 29 Jun 2011 11:19:15 +0000 Received: from koitsu.dyndns.org ([67.180.84.87]) by omta12.emeryville.ca.mail.comcast.net with comcast id 1nKC1h0091t3BNj8YnKCpK; Wed, 29 Jun 2011 11:19:13 +0000 Received: by icarus.home.lan (Postfix, from userid 1000) id 4A27C102C19; Wed, 29 Jun 2011 04:19:15 -0700 (PDT) Date: Wed, 29 Jun 2011 04:19:15 -0700 From: Jeremy Chadwick To: Glen Barber Message-ID: <20110629111915.GA75648@icarus.home.lan> References: <20110628203228.GA4957@onyx.glenbarber.us> <20110629104633.26824evikzh8tgtl@webmail.leidinger.net> <4E0B006C.8050000@FreeBSD.org> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <4E0B006C.8050000@FreeBSD.org> User-Agent: Mutt/1.5.21 (2010-09-15) Cc: Alexander Leidinger , fs@FreeBSD.org Subject: Re: [RFC] [patch] periodic status-zfs: list pools in daily emails X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 29 Jun 2011 11:19:18 -0000 On Wed, Jun 29, 2011 at 06:37:32AM 
-0400, Glen Barber wrote: > Hi Alexander, > > On 6/29/11 4:46 AM, Alexander Leidinger wrote: > >> I added a default behavior to list the pools on the system, in > >> addition to > >> checking if the pool is healthy. I think it might be useful for > >> others to > >> have this as the default behavior, for example on systems where dedup is > >> enabled to track the dedup statistics over time. > > > > I do not think this is a bad idea to be able to see the pools... but > > IMHO it should be configurable (no strong opinion about "enabled or > > disabled by default"). > > > > Agreed. I can add this in. > > >> The output of the the script after my changes follows: > > > > Info to others: this is the default output, there is no special option > > to track DEDUP. > > > >> Checking status of zfs pools: > >> NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT > >> zroot 456G 147G 309G 32% 1.00x ONLINE - > >> zstore 928G 258G 670G 27% 1.00x ONLINE - > >> all pools are healthy > >> > >> Feedback would be appreciated. A diff is attached. > > > > Did you test it with an unhealthy pool? If yes, how does the result look > > like? > > > > I have not, yet. I can do this later today by breaking a mirror. > > > For the healthy case we have redundant info (but as the brain is good at > > pattern matching, I would object to replace the status with the list > > output, in case someone would suggest this). In the unhealthy case we > > will surely have more info, my inquiry about it is if an empty line > > between the list and the status would make it more readable or not. > > > > I will reply later today with of the script with an unhealthy pool, and > will make listing the pools configurable. I imagine an empty line would > certainly make it more readable in either case. I would be reluctant to > replace 'status' output with 'list' output for healthy pools mostly to > avoid headaches for people parsing their daily email, specifically > looking for (or missing) 'all pools are healthy.' At my workplace we use a heavily modified version of Netsaint, with bits and pieces Nagios-like created. I happened to write the perl code used to monitor our production Solaris systems (~2000+ servers) for ZFS pool status. It parses "zpool status -x" output, monitoring read, write, and checksum errors per pool, vdev, and device, in addition to general pool status. I tested too many conditions, not to mention had to deal with parsing pains as a result of ZFS code changes, plus supporting completely different revisions of Solaris 10 in production. And before someone asks: no, I cannot provide the source (employee agreements, LCA, etc...). I did have to dig through ZFS source code to figure out a bunch of necessary bits too, so don't be surprised if you have to too. My recommendation: just look for pools which are in any state other than ONLINE (don't try to be smart with an OR regex looking for all the combos; it doesn't scale when ZFS changes), and you should also handle situations where a device is currently undergoing manual or automatic device replacement (specifically regex '^[\t\s]+replacing\s+DEGRADED'), which will be important to people who keep spares in pools. This might be difficult with just standard BSD sh, but BSD awk should be able to handle this. -- | Jeremy Chadwick jdc at parodius.com | | Parodius Networking http://www.parodius.com/ | | UNIX Systems Administrator Mountain View, CA, US | | Making life hard for others since 1977. 
PGP 4BD6C0CB | From owner-freebsd-fs@FreeBSD.ORG Wed Jun 29 11:21:17 2011 Return-Path: Delivered-To: fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 9D340106566C for ; Wed, 29 Jun 2011 11:21:17 +0000 (UTC) (envelope-from gjb@FreeBSD.org) Received: from glenbarber.us (onyx.glenbarber.us [199.48.134.227]) by mx1.freebsd.org (Postfix) with SMTP id 519558FC18 for ; Wed, 29 Jun 2011 11:21:17 +0000 (UTC) Received: (qmail 41716 invoked by uid 0); 29 Jun 2011 07:21:15 -0400 Received: from unknown (HELO schism.local) (gjb@76.124.49.145) by 0 with SMTP; 29 Jun 2011 07:21:15 -0400 Message-ID: <4E0B0AAB.5030300@FreeBSD.org> Date: Wed, 29 Jun 2011 07:21:15 -0400 From: Glen Barber User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.18) Gecko/20110616 Thunderbird/3.1.11 MIME-Version: 1.0 To: Alexander Leidinger References: <20110628203228.GA4957@onyx.glenbarber.us> <20110629104633.26824evikzh8tgtl@webmail.leidinger.net> <4E0B006C.8050000@FreeBSD.org> In-Reply-To: <4E0B006C.8050000@FreeBSD.org> X-Enigmail-Version: 1.1.1 Content-Type: multipart/mixed; boundary="------------070007010905030004020204" Cc: fs@FreeBSD.org Subject: Re: [RFC] [patch] periodic status-zfs: list pools in daily emails X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 29 Jun 2011 11:21:17 -0000 This is a multi-part message in MIME format. --------------070007010905030004020204 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 7bit On 6/29/11 6:37 AM, Glen Barber wrote: > I will reply later today with of the script with an unhealthy pool, and > will make listing the pools configurable. I imagine an empty line would > certainly make it more readable in either case. I would be reluctant to > replace 'status' output with 'list' output for healthy pools mostly to > avoid headaches for people parsing their daily email, specifically > looking for (or missing) 'all pools are healthy.' > Might as well do this now, in case I don't have time later today. For completeness, I took one drive in both of my pools offline. (Pardon the long lines.) I also made listing the pools configurable, enabled by default, but it runs only if daily_status_zfs_enable=YES. Feedback would be appreciated. 
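[Illustrative aside, not part of the original message: the change under discussion boils down to an optional "zpool list" in front of the existing "zpool status -x" check in periodic/daily/404.status-zfs. A rough sketch of that shape (the exact diff is in the attached periodic.zfs.diff.txt):

    case "$daily_status_zfs_zpool_list_enable" in
        [Yy][Ee][Ss])
            zpool list
            echo
            ;;
        *)
            ;;
    esac
    out=`zpool status -x`
    echo "$out"
    # zpool status -x always exits with 0, so its output has to be
    # interpreted to decide whether the pools are healthy
    if [ "$out" = "all pools are healthy" \
        -o "$out" = "no pools available" ]; then
        rc=0
    else
        rc=1
    fi
]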
Regards, -- Glen Barber | gjb@FreeBSD.org FreeBSD Documentation Project --------------070007010905030004020204 Content-Type: text/plain; name="zfsoffline.txt" Content-Transfer-Encoding: base64 Content-Disposition: attachment; filename="zfsoffline.txt" Q2hlY2tpbmcgc3RhdHVzIG9mIHpmcyBwb29sczoKTkFNRSAgICAgU0laRSAgQUxMT0MgICBG UkVFICAgIENBUCAgREVEVVAgIEhFQUxUSCAgQUxUUk9PVAp6cm9vdCAgICA0NTZHICAgMTQ2 RyAgIDMxMEcgICAgMzIlICAxLjAweCAgREVHUkFERUQgIC0KenN0b3JlICAgOTI4RyAgIDI1 OEcgICA2NzBHICAgIDI3JSAgMS4wMHggIERFR1JBREVEICAtCgogIHBvb2w6IHpyb290CiBz dGF0ZTogREVHUkFERUQKc3RhdHVzOiBPbmUgb3IgbW9yZSBkZXZpY2VzIGhhcyBiZWVuIHRh a2VuIG9mZmxpbmUgYnkgdGhlIGFkbWluaXN0cmF0b3IuCiAgICAgICAgU3VmZmljaWVudCBy ZXBsaWNhcyBleGlzdCBmb3IgdGhlIHBvb2wgdG8gY29udGludWUgZnVuY3Rpb25pbmcgaW4g YQogICAgICAgIGRlZ3JhZGVkIHN0YXRlLgphY3Rpb246IE9ubGluZSB0aGUgZGV2aWNlIHVz aW5nICd6cG9vbCBvbmxpbmUnIG9yIHJlcGxhY2UgdGhlIGRldmljZSB3aXRoCiAgICAgICAg J3pwb29sIHJlcGxhY2UnLgogc2Nhbjogc2NydWIgcmVwYWlyZWQgMCBpbiAyaDQwbSB3aXRo IDAgZXJyb3JzIG9uIFRodSBKdW4gMTYgMDA6MTI6NDcgMjAxMQpjb25maWc6CgogICAgICAg IE5BTUUgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFNUQVRF ICAgICBSRUFEIFdSSVRFIENLU1VNCiAgICAgICAgenJvb3QgICAgICAgICAgICAgICAgICAg ICAgICAgICAgICAgICAgICAgICAgICAgREVHUkFERUQgICAgIDAgICAgIDAgICAgIDAKICAg ICAgICAgIG1pcnJvci0wICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBE RUdSQURFRCAgICAgMCAgICAgMCAgICAgMAogICAgICAgICAgICBncHRpZC9mODc3YzY0YS02 OWMzLTExZGYtYWZmMS0wMDFjYzAxOWI0YjggIE9OTElORSAgICAgICAwICAgICAwICAgICAw CiAgICAgICAgICAgIGdwdGlkL2ZhN2ZkMTlhLTY5YzMtMTFkZi1hZmYxLTAwMWNjMDE5YjRi OCAgT0ZGTElORSAgICAgIDAgICAgIDAgICAgIDAKCmVycm9yczogTm8ga25vd24gZGF0YSBl cnJvcnMKCiAgcG9vbDogenN0b3JlCiBzdGF0ZTogREVHUkFERUQKc3RhdHVzOiBPbmUgb3Ig bW9yZSBkZXZpY2VzIGhhcyBiZWVuIHRha2VuIG9mZmxpbmUgYnkgdGhlIGFkbWluaXN0cmF0 b3IuCiAgICAgICAgU3VmZmljaWVudCByZXBsaWNhcyBleGlzdCBmb3IgdGhlIHBvb2wgdG8g Y29udGludWUgZnVuY3Rpb25pbmcgaW4gYQogICAgICAgIGRlZ3JhZGVkIHN0YXRlLgphY3Rp b246IE9ubGluZSB0aGUgZGV2aWNlIHVzaW5nICd6cG9vbCBvbmxpbmUnIG9yIHJlcGxhY2Ug dGhlIGRldmljZSB3aXRoCiAgICAgICAgJ3pwb29sIHJlcGxhY2UnLgogc2Nhbjogc2NydWIg cmVwYWlyZWQgMCBpbiAyaDQwbSB3aXRoIDAgZXJyb3JzIG9uIFRodSBKdW4gMTYgMTU6MzA6 MDggMjAxMQpjb25maWc6CgogICAgICAgIE5BTUUgICAgICAgICAgICAgICAgICAgICAgICAg ICAgICAgICAgICAgICAgICAgIFNUQVRFICAgICBSRUFEIFdSSVRFIENLU1VNCiAgICAgICAg enN0b3JlICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgREVHUkFE RUQgICAgIDAgICAgIDAgICAgIDAKICAgICAgICAgIG1pcnJvci0wICAgICAgICAgICAgICAg ICAgICAgICAgICAgICAgICAgICAgICBERUdSQURFRCAgICAgMCAgICAgMCAgICAgMAogICAg ICAgICAgICBncHRpZC82MWQwY2RmOC1jMTM1LTExZGYtOGI3Mi0wMDFjYzAxOWI0YjggIE9O TElORSAgICAgICAwICAgICAwICAgICAwCiAgICAgICAgICAgIGdwdGlkLzY0NTU2MGFkLWMx MzUtMTFkZi04YjcyLTAwMWNjMDE5YjRiOCAgT0ZGTElORSAgICAgIDAgICAgIDAgICAgIDAK CmVycm9yczogTm8ga25vd24gZGF0YSBlcnJvcnMK --------------070007010905030004020204 Content-Type: text/plain; name="periodic.zfs.diff.txt" Content-Transfer-Encoding: base64 Content-Disposition: attachment; filename="periodic.zfs.diff.txt" SW5kZXg6IHBlcmlvZGljL2RhaWx5LzQwNC5zdGF0dXMtemZzCj09PT09PT09PT09PT09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT0KLS0t IHBlcmlvZGljL2RhaWx5LzQwNC5zdGF0dXMtemZzCShyZXZpc2lvbiAyMjM2NDUpCisrKyBw ZXJpb2RpYy9kYWlseS80MDQuc3RhdHVzLXpmcwkod29ya2luZyBjb3B5KQpAQCAtMTYsMTIg KzE2LDIxIEBACiAJZWNobwogCWVjaG8gJ0NoZWNraW5nIHN0YXR1cyBvZiB6ZnMgcG9vbHM6 JwogCi0Jb3V0PWB6cG9vbCBzdGF0dXMgLXhgCi0JZWNobyAiJG91dCIKKwljYXNlICIkZGFp bHlfc3RhdHVzX3pmc196cG9vbF9saXN0X2VuYWJsZSIgaW4KKwkJW1l5XVtFZV1bU3NdKQor CQkJbG91dD1genBvb2wgbGlzdGAKKwkJCWVjaG8gIiRsb3V0IgorCQkJZWNobworCQkJOzsK 
KwkJKikKKwkJCTs7CisJZXNhYworCXNvdXQ9YHpwb29sIHN0YXR1cyAteGAKKwllY2hvICIk c291dCIKIAkjIHpwb29sIHN0YXR1cyAteCBhbHdheXMgZXhpdHMgd2l0aCAwLCBzbyB3ZSBo YXZlIHRvIGludGVycHJldCBpdHMKIAkjIG91dHB1dCB0byBzZWUgd2hhdCdzIGdvaW5nIG9u LgotCWlmIFsgIiRvdXQiID0gImFsbCBwb29scyBhcmUgaGVhbHRoeSIgXAotCSAgICAtbyAi JG91dCIgPSAibm8gcG9vbHMgYXZhaWxhYmxlIiBdOyB0aGVuCisJaWYgWyAiJHNvdXQiID0g ImFsbCBwb29scyBhcmUgaGVhbHRoeSIgXAorCSAgICAtbyAiJHNvdXQiID0gIm5vIHBvb2xz IGF2YWlsYWJsZSIgXTsgdGhlbgogCQlyYz0wCiAJZWxzZQogCQlyYz0xCkluZGV4OiBkZWZh dWx0cy9wZXJpb2RpYy5jb25mCj09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09 PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT0KLS0tIGRlZmF1bHRzL3BlcmlvZGlj LmNvbmYJKHJldmlzaW9uIDIyMzY0NSkKKysrIGRlZmF1bHRzL3BlcmlvZGljLmNvbmYJKHdv cmtpbmcgY29weSkKQEAgLTk2LDYgKzk2LDcgQEAKIAogIyA0MDQuc3RhdHVzLXpmcwogZGFp bHlfc3RhdHVzX3pmc19lbmFibGU9Ik5PIgkJCQkjIENoZWNrIFpGUworZGFpbHlfc3RhdHVz X3pmc196cG9vbF9saXN0X2VuYWJsZT0iWUVTIgkJIyBMaXN0IFpGUyBwb29scwogCiAjIDQw NS5zdGF0dXMtYXRhX3JhaWQKIGRhaWx5X3N0YXR1c19hdGFfcmFpZF9lbmFibGU9Ik5PIgkJ CSMgQ2hlY2sgQVRBIHJhaWQgc3RhdHVzCg== --------------070007010905030004020204-- From owner-freebsd-fs@FreeBSD.ORG Wed Jun 29 12:08:50 2011 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id C186D1065672; Wed, 29 Jun 2011 12:08:50 +0000 (UTC) (envelope-from alexander@leidinger.net) Received: from mail.ebusiness-leidinger.de (mail.ebusiness-leidinger.de [217.11.53.44]) by mx1.freebsd.org (Postfix) with ESMTP id 539E38FC0C; Wed, 29 Jun 2011 12:08:50 +0000 (UTC) Received: from outgoing.leidinger.net (p4FC4623A.dip.t-dialin.net [79.196.98.58]) by mail.ebusiness-leidinger.de (Postfix) with ESMTPSA id AA5AB84400D; Wed, 29 Jun 2011 14:08:37 +0200 (CEST) Received: from webmail.leidinger.net (webmail.Leidinger.net [IPv6:fd73:10c7:2053:1::3:102]) by outgoing.leidinger.net (Postfix) with ESMTP id E0CAF202B; Wed, 29 Jun 2011 14:08:34 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=Leidinger.net; s=outgoing-alex; t=1309349315; bh=l2pXF+mfjvYy+pxIsAc2NWMWcnQgK+OiHWm7BM47EuI=; h=Message-ID:Date:From:To:Cc:Subject:References:In-Reply-To: MIME-Version:Content-Type:Content-Transfer-Encoding; b=PsePKWILa2hhHDAIiIIcJlAGQgjHgJJ0T8ZSYLtvC/vywxI25vc/z/Ji1FzJk/qzb Rm9hQhMCbv4087dsQe62xaBg5vT/WMXsg7Bqct7FDIcMpMUCG6zjDvWnJqaryB3BYR cR5Dh/7uWiI9qEIh2iOwm/2S87riddEXSYtSU3MJ0armUk1jah9mExQn6NvLDDCDjA NCOGmQHxfP75I5Cz6Br02roMOqXqciWDVJptV5FxBTIdqvqNWU/1CLEn2AlxG3GrKH EqjiYwUd/Fm7iHCU5N+99YcqmUvbbOAUgMYQW5xiF+6BniBnMTcRS+TLDL9BPX+nzI JLhHYPI+HIN0Q== Received: (from www@localhost) by webmail.leidinger.net (8.14.4/8.14.4/Submit) id p5TC8YAP030401; Wed, 29 Jun 2011 14:08:34 +0200 (CEST) (envelope-from Alexander@Leidinger.net) X-Authentication-Warning: webmail.leidinger.net: www set sender to Alexander@Leidinger.net using -f Received: from pslux.ec.europa.eu (pslux.ec.europa.eu [158.169.9.14]) by webmail.leidinger.net (Horde Framework) with HTTP; Wed, 29 Jun 2011 14:08:34 +0200 Message-ID: <20110629140834.59115su2x8nk8gjm@webmail.leidinger.net> Date: Wed, 29 Jun 2011 14:08:34 +0200 From: Alexander Leidinger To: Glen Barber References: <20110628203228.GA4957@onyx.glenbarber.us> <20110629104633.26824evikzh8tgtl@webmail.leidinger.net> <4E0B006C.8050000@FreeBSD.org> <4E0B0AAB.5030300@FreeBSD.org> In-Reply-To: <4E0B0AAB.5030300@FreeBSD.org> MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8; DelSp="Yes"; format="flowed" Content-Disposition: inline Content-Transfer-Encoding: 7bit User-Agent: Dynamic 
Internet Messaging Program (DIMP) H3 (1.1.6) X-EBL-MailScanner-Information: Please contact the ISP for more information X-EBL-MailScanner-ID: AA5AB84400D.A3772 X-EBL-MailScanner: Found to be clean X-EBL-MailScanner-SpamCheck: not spam, spamhaus-ZEN, SpamAssassin (not cached, score=-0.1, required 6, autolearn=disabled, DKIM_SIGNED 0.10, DKIM_VALID -0.10, DKIM_VALID_AU -0.10) X-EBL-MailScanner-From: alexander@leidinger.net X-EBL-MailScanner-Watermark: 1309954118.03761@I4juFNFdwA5+020akKjJlg X-EBL-Spam-Status: No Cc: fs@FreeBSD.org Subject: Re: [RFC] [patch] periodic status-zfs: list pools in daily emails X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 29 Jun 2011 12:08:50 -0000 Quoting Glen Barber (from Wed, 29 Jun 2011 07:21:15 -0400): > On 6/29/11 6:37 AM, Glen Barber wrote: >> I will reply later today with of the script with an unhealthy pool, and >> will make listing the pools configurable. I imagine an empty line would >> certainly make it more readable in either case. I would be reluctant to >> replace 'status' output with 'list' output for healthy pools mostly to >> avoid headaches for people parsing their daily email, specifically >> looking for (or missing) 'all pools are healthy.' >> > > Might as well do this now, in case I don't have time later today. > > For completeness, I took one drive in both of my pools offline. (Pardon > the long lines.) I also made listing the pools configurable, enabled by > default, but it runs only if daily_status_zfs_enable=YES. > > Feedback would be appreciated. Good news: I see no problems in your patch. Bad news: I detected that I forgot to add docs to the man page of periodic.conf for the daily_status_zfs_enable, and as such I can not complain that you forgot to do so for the daily_status_zfs_zpool_list_enable switch. Bye, Alexander. -- Monday, n.: In Christian countries, the day after the baseball game. 
-- Ambrose Bierce, "The Devil's Dictionary" http://www.Leidinger.net Alexander @ Leidinger.net: PGP ID = B0063FE7 http://www.FreeBSD.org netchild @ FreeBSD.org : PGP ID = 72077137 From owner-freebsd-fs@FreeBSD.ORG Wed Jun 29 12:21:01 2011 Return-Path: Delivered-To: fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 9574F106578D for ; Wed, 29 Jun 2011 12:21:01 +0000 (UTC) (envelope-from gjb@FreeBSD.org) Received: from glenbarber.us (onyx.glenbarber.us [199.48.134.227]) by mx1.freebsd.org (Postfix) with SMTP id 44E418FC1F for ; Wed, 29 Jun 2011 12:21:01 +0000 (UTC) Received: (qmail 42457 invoked by uid 0); 29 Jun 2011 08:21:00 -0400 Received: from unknown (HELO schism.local) (gjb@75.146.225.65) by 0 with SMTP; 29 Jun 2011 08:21:00 -0400 Message-ID: <4E0B18AB.7030406@FreeBSD.org> Date: Wed, 29 Jun 2011 08:20:59 -0400 From: Glen Barber User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.18) Gecko/20110616 Thunderbird/3.1.11 MIME-Version: 1.0 To: Alexander Leidinger References: <20110628203228.GA4957@onyx.glenbarber.us> <20110629104633.26824evikzh8tgtl@webmail.leidinger.net> <4E0B006C.8050000@FreeBSD.org> <4E0B0AAB.5030300@FreeBSD.org> <20110629140834.59115su2x8nk8gjm@webmail.leidinger.net> In-Reply-To: <20110629140834.59115su2x8nk8gjm@webmail.leidinger.net> X-Enigmail-Version: 1.1.1 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 7bit Cc: fs@FreeBSD.org Subject: Re: [RFC] [patch] periodic status-zfs: list pools in daily emails X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 29 Jun 2011 12:21:01 -0000 On 6/29/11 8:08 AM, Alexander Leidinger wrote: > Good news: I see no problems in your patch. > Bad news: I detected that I forgot to add docs to the man page of > periodic.conf for the daily_status_zfs_enable, and as such I can not > complain that you forgot to do so for the > daily_status_zfs_zpool_list_enable switch. > I'll document both, and get a final patch to you. I intentionally didn't change periodic.conf.5 in case there were either problems with the patch, or feedback against the changes. Thanks. 
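[Illustrative aside, not part of the original message: once documented, the two knobs discussed here would be set in /etc/periodic.conf roughly as follows; the list knob only takes effect when the status check itself is enabled:

    daily_status_zfs_enable="YES"                   # check status of ZFS pools
    daily_status_zfs_zpool_list_enable="YES"        # also print "zpool list" output
]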
Regards, -- Glen Barber | gjb@FreeBSD.org FreeBSD Documentation Project From owner-freebsd-fs@FreeBSD.ORG Wed Jun 29 12:45:08 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 3E77E10656D3 for ; Wed, 29 Jun 2011 12:45:08 +0000 (UTC) (envelope-from freebsd-fs@m.gmane.org) Received: from lo.gmane.org (lo.gmane.org [80.91.229.12]) by mx1.freebsd.org (Postfix) with ESMTP id E82FD8FC0C for ; Wed, 29 Jun 2011 12:45:06 +0000 (UTC) Received: from list by lo.gmane.org with local (Exim 4.69) (envelope-from ) id 1Qbu8f-0000KU-LJ for freebsd-fs@freebsd.org; Wed, 29 Jun 2011 14:45:05 +0200 Received: from l00144.deltares.nl ([145.9.223.26]) by main.gmane.org with esmtp (Gmexim 0.1 (Debian)) id 1AlnuQ-0007hv-00 for ; Wed, 29 Jun 2011 14:45:05 +0200 Received: from leroy.vanlogchem by l00144.deltares.nl with local (Gmexim 0.1 (Debian)) id 1AlnuQ-0007hv-00 for ; Wed, 29 Jun 2011 14:45:05 +0200 X-Injected-Via-Gmane: http://gmane.org/ To: freebsd-fs@freebsd.org From: Leroy van Logchem Date: Wed, 29 Jun 2011 12:39:29 +0000 (UTC) Lines: 30 Message-ID: References: <4DB8EF02.8060406@bk.ru> <1079311802.20110428070300@nitronet.pl> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit X-Complaints-To: usenet@dough.gmane.org X-Gmane-NNTP-Posting-Host: sea.gmane.org User-Agent: Loom/3.14 (http://gmane.org/) X-Loom-IP: 145.9.223.26 (Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.100 Safari/534.30) Subject: Re: ZFS v28 for 8.2-STABLE X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 29 Jun 2011 12:45:08 -0000 > > sys/cddl/compat/opensolaris/sys/sysmacros.h is supposed to be deleted, > > so you can ignore the fail and delete the file manually. > > > > I was rebuilding my system two weeks ago and same patch applied > > correctly on amd64, and everything works fine. You could try wiping > > /usr/src (keeping kernel config somewhere safe), csuping it again, and > > deleting /usr/obj before building. > > > > BTW. Why people still cling to i386? > > > > > > _______________________________________________ > > freebsd-fs freebsd.org mailing list > > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > > To unsubscribe, send any mail to "freebsd-fs-unsubscribe freebsd.org" > > > > also there is no such thing as 8.2-STABLE, there is 8-STABLE, and > 8.2-RELEASE, and 8.2-RELEASE with security patches aka RELENG_8_2_0 The steps taken to resolve the above: - rm -rf /usr/src/* - cvsup using tag=RELENG_8_2 ( using RELENG_8_2_0 doesn't return files ) - (cd /usr/src ; xzcat ~/releng-8.2-zfsv28-20110616.patch.xz | patch) - rm /usr/src/sys/cddl/compat/opensolaris/sys/sysmacros.h.* Then followed Handbook chapter 24.7 "Rebuilding world". 
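[Illustrative aside, not part of the original message: a minimal supfile for that cvsup/csup step could look like the following, where the host is a placeholder to replace with a real cvsup mirror:

    *default host=cvsup.FreeBSD.org
    *default base=/var/db
    *default prefix=/usr
    *default release=cvs tag=RELENG_8_2
    *default delete use-rel-suffix compress
    src-all
]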
@Martin Matuska: thanks for providing v28 patches From owner-freebsd-fs@FreeBSD.ORG Wed Jun 29 13:15:50 2011 Return-Path: Delivered-To: fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 2028C10656D5; Wed, 29 Jun 2011 13:15:50 +0000 (UTC) (envelope-from alexander@leidinger.net) Received: from mail.ebusiness-leidinger.de (mail.ebusiness-leidinger.de [217.11.53.44]) by mx1.freebsd.org (Postfix) with ESMTP id A783A8FC08; Wed, 29 Jun 2011 13:15:49 +0000 (UTC) Received: from outgoing.leidinger.net (p4FC4623A.dip.t-dialin.net [79.196.98.58]) by mail.ebusiness-leidinger.de (Postfix) with ESMTPSA id 7DCAD84400D; Wed, 29 Jun 2011 15:15:34 +0200 (CEST) Received: from webmail.leidinger.net (webmail.Leidinger.net [IPv6:fd73:10c7:2053:1::3:102]) by outgoing.leidinger.net (Postfix) with ESMTP id B782F2035; Wed, 29 Jun 2011 15:15:31 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=Leidinger.net; s=outgoing-alex; t=1309353331; bh=tbqpZMONtZ5phZ+zCklqqiwA1su5HsbQiCB3l7RggFM=; h=Message-ID:Date:From:To:Cc:Subject:References:In-Reply-To: MIME-Version:Content-Type:Content-Transfer-Encoding; b=b7l9TTG6CZXN5W/+S0WXBcG75nx0ZJcZVPGmUmstfedHA0DeaVbNCzSxSbg6eglyy XBYLatF6Mo8sqr+TK4YZWrMhoxpWgj4way9eJgBzTZoI1QB7k/InT2DAui3wJ6i9+A qijP2IVdU2QXL+NsmFsYdgFE09Pm8jf8DW7fIRcpy4lZycH+9AmnkqcAw2DCJ8nE3+ wA61d27XDxbBYkcViVnhb29b0yhImCCvidsGSX5UGR3usSpHNqH+KIYXjRJQ7a9llD YGLL+aKxsKGO2lBySlQt27yTd2FBdEFkNZ4cQdBUTV5kTwGwvI8Mbw5IuAGDS+q2nG 64kiT7kXdv89A== Received: (from www@localhost) by webmail.leidinger.net (8.14.4/8.14.4/Submit) id p5TDFU86034619; Wed, 29 Jun 2011 15:15:30 +0200 (CEST) (envelope-from Alexander@Leidinger.net) X-Authentication-Warning: webmail.leidinger.net: www set sender to Alexander@Leidinger.net using -f Received: from pslux.ec.europa.eu (pslux.ec.europa.eu [158.169.9.14]) by webmail.leidinger.net (Horde Framework) with HTTP; Wed, 29 Jun 2011 15:15:30 +0200 Message-ID: <20110629151530.13154p1oc899fhwy@webmail.leidinger.net> Date: Wed, 29 Jun 2011 15:15:30 +0200 From: Alexander Leidinger To: Jeremy Chadwick References: <20110628203228.GA4957@onyx.glenbarber.us> <20110629104633.26824evikzh8tgtl@webmail.leidinger.net> <4E0B006C.8050000@FreeBSD.org> <20110629111915.GA75648@icarus.home.lan> In-Reply-To: <20110629111915.GA75648@icarus.home.lan> MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8; DelSp="Yes"; format="flowed" Content-Disposition: inline Content-Transfer-Encoding: 7bit User-Agent: Dynamic Internet Messaging Program (DIMP) H3 (1.1.6) X-EBL-MailScanner-Information: Please contact the ISP for more information X-EBL-MailScanner-ID: 7DCAD84400D.A1F3E X-EBL-MailScanner: Found to be clean X-EBL-MailScanner-SpamCheck: not spam, spamhaus-ZEN, SpamAssassin (not cached, score=-0.1, required 6, autolearn=disabled, DKIM_SIGNED 0.10, DKIM_VALID -0.10, DKIM_VALID_AU -0.10) X-EBL-MailScanner-From: alexander@leidinger.net X-EBL-MailScanner-Watermark: 1309958136.28546@KBItfFGpgR+GbYh/jpKrCQ X-EBL-Spam-Status: No Cc: Glen Barber , fs@FreeBSD.org Subject: Re: [RFC] [patch] periodic status-zfs: list pools in daily emails X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 29 Jun 2011 13:15:50 -0000 Quoting Jeremy Chadwick (from Wed, 29 Jun 2011 04:19:15 -0700): > At my workplace we use a heavily modified version of Netsaint, with bits > and pieces Nagios-like created. 
I happened to write the perl code used > to monitor our production Solaris systems (~2000+ servers) for ZFS pool > status. It parses "zpool status -x" output, monitoring read, write, and > checksum errors per pool, vdev, and device, in addition to general pool > status. I tested too many conditions, not to mention had to deal with > parsing pains as a result of ZFS code changes, plus supporting > completely different revisions of Solaris 10 in production. And before > someone asks: no, I cannot provide the source (employee agreements, LCA, > etc...). I did have to dig through ZFS source code to figure out a > bunch of necessary bits too, so don't be surprised if you have to too. > > My recommendation: just look for pools which are in any state other than > ONLINE (don't try to be smart with an OR regex looking for all the > combos; it doesn't scale when ZFS changes), and you should also handle > situations where a device is currently undergoing manual or automatic > device replacement (specifically regex '^[\t\s]+replacing\s+DEGRADED'), > which will be important to people who keep spares in pools. This might > be difficult with just standard BSD sh, but BSD awk should be able to > handle this. Thanks for your suggestions, but the script is intentionally dumb: It runs "zpool status" and looks for "all pools are healthy". If this line is not there, the output is marked as important (this matters if you have configured periodic.conf to skip unimportant output). All the rest is up to the person who reads the daily run output. The zpool list output added in the patch under discussion just displays "zpool list" in addition to the output of zpool status (if activated). Bye, Alexander. -- "I'll reason with him." -- Vito Corleone, "Chapter 14", page 200 http://www.Leidinger.net Alexander @ Leidinger.net: PGP ID = B0063FE7 http://www.FreeBSD.org netchild @ FreeBSD.org : PGP ID = 72077137 From owner-freebsd-fs@FreeBSD.ORG Wed Jun 29 15:22:01 2011 Return-Path: Delivered-To: fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id CF8DA106564A for ; Wed, 29 Jun 2011 15:22:01 +0000 (UTC) (envelope-from mattblists@icritical.com) Received: from mail1.icritical.com (mail1.icritical.com [93.95.13.41]) by mx1.freebsd.org (Postfix) with SMTP id 247F38FC14 for ; Wed, 29 Jun 2011 15:22:00 +0000 (UTC) Received: (qmail 358 invoked from network); 29 Jun 2011 14:55:17 -0000 Received: from localhost (127.0.0.1) by mail1.icritical.com with SMTP; 29 Jun 2011 14:55:17 -0000 Received: (qmail 350 invoked by uid 599); 29 Jun 2011 14:55:17 -0000 Received: from unknown (HELO icritical.com) (212.57.254.146) by mail1.icritical.com (qpsmtpd/0.28) with ESMTP; Wed, 29 Jun 2011 15:55:17 +0100 Message-ID: <4E0B3CD1.7030007@icritical.com> Date: Wed, 29 Jun 2011 15:55:13 +0100 From: Matt Burke User-Agent: Mozilla/5.0 (X11; U; FreeBSD amd64; en-US; rv:1.9.2.15) Gecko/20110403 Thunderbird/3.1.9 MIME-Version: 1.0 To: fs@FreeBSD.org Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-OriginalArrivalTime: 29 Jun 2011 14:55:13.0654 (UTC) FILETIME=[8CFB2960:01CC366C] X-Virus-Scanned: by iCritical at mail1.icritical.com Cc: Subject: gptzfsboot bug?
X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 29 Jun 2011 15:22:01 -0000 When you have a log partition to a non-bootable zfs pool discovered before a bootable zfs pool, the 2nd stage bootloader will break because the check for ZPOOL_CONFIG_IS_LOG will never find it - at least on my system. The problem is that ZFS uses nested nvlist structures, the 'is_log' attribute is held below the root level, and nvlist_find doesn't recurse. Attached is a diff against 8-STABLE as of a few minutes ago. C is not my main language and I've been looking at zfs source for less than a day, so please do check it for correctness although it works here. The xdr_int() call is used to bump p to the beginning of the embedded list. --- zfsimpl.c.orig 2011-06-29 14:29:49.460537991 +0100 +++ zfsimpl.c 2011-06-29 15:16:20.526890896 +0100 @@ -173,10 +173,20 @@ (const unsigned char*) p; return (0); } else { return (EIO); } + } else if (!memcmp(ZPOOL_CONFIG_VDEV_TREE, pairname, namelen) + && pairtype == DATA_TYPE_NVLIST) { + /* + * If we find an nvlist, recurse into it + */ + xdr_int(&p, &elements); + if (0 == nvlist_find(p, name, type, elementsp, valuep)) + return 0; + /* reset position in case find fails mid-way */ + p = pair + encoded_size; } else { /* * Not the pair we are looking for, skip to the next one. */ p = pair + encoded_size; As an aside, is it sensible for zfsboot.c to leave the prompt at /boot/kernel/kernel, or even bother trying it given zfs is usually built as a module and will need /boot/zfsloader to load everything first? Would it be an idea to do something like this? printf("First boot attempt failed.\n"); STAILQ_FOREACH(spa, &zfs_pools, spa_link) { printf("Trying %s:%s\n", spa->spa_name, kname); zfs_mount_pool(spa); load(); printf("Failed.\n"); } Matt. 
-- From owner-freebsd-fs@FreeBSD.ORG Wed Jun 29 21:31:16 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id CAB80106564A for ; Wed, 29 Jun 2011 21:31:16 +0000 (UTC) (envelope-from jdc@koitsu.dyndns.org) Received: from qmta09.emeryville.ca.mail.comcast.net (qmta09.emeryville.ca.mail.comcast.net [76.96.30.96]) by mx1.freebsd.org (Postfix) with ESMTP id B25658FC1C for ; Wed, 29 Jun 2011 21:31:16 +0000 (UTC) Received: from omta22.emeryville.ca.mail.comcast.net ([76.96.30.89]) by qmta09.emeryville.ca.mail.comcast.net with comcast id 1x471h0021vN32cA9xXDB8; Wed, 29 Jun 2011 21:31:13 +0000 Received: from koitsu.dyndns.org ([67.180.84.87]) by omta22.emeryville.ca.mail.comcast.net with comcast id 1xWd1h07K1t3BNj8ixWppc; Wed, 29 Jun 2011 21:30:57 +0000 Received: by icarus.home.lan (Postfix, from userid 1000) id 1CBDD102C19; Wed, 29 Jun 2011 14:30:54 -0700 (PDT) Date: Wed, 29 Jun 2011 14:30:54 -0700 From: Jeremy Chadwick To: Leroy van Logchem Message-ID: <20110629213054.GA85818@icarus.home.lan> References: <4DB8EF02.8060406@bk.ru> <1079311802.20110428070300@nitronet.pl> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.5.21 (2010-09-15) Cc: freebsd-fs@freebsd.org Subject: Re: ZFS v28 for 8.2-STABLE X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 29 Jun 2011 21:31:16 -0000 On Wed, Jun 29, 2011 at 12:39:29PM +0000, Leroy van Logchem wrote: > > > sys/cddl/compat/opensolaris/sys/sysmacros.h is supposed to be deleted, > > > so you can ignore the fail and delete the file manually. > > > > > > I was rebuilding my system two weeks ago and same patch applied > > > correctly on amd64, and everything works fine. You could try wiping > > > /usr/src (keeping kernel config somewhere safe), csuping it again, and > > > deleting /usr/obj before building. > > > > > > BTW. Why people still cling to i386? > > > > > > > > > _______________________________________________ > > > freebsd-fs freebsd.org mailing list > > > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > > > To unsubscribe, send any mail to "freebsd-fs-unsubscribe freebsd.org" > > > > > > > also there is no such thing as 8.2-STABLE, there is 8-STABLE, and > > 8.2-RELEASE, and 8.2-RELEASE with security patches aka RELENG_8_2_0 > > The steps taken to resolve the above: > > - rm -rf /usr/src/* > - cvsup using tag=RELENG_8_2 ( using RELENG_8_2_0 doesn't return files ) > - (cd /usr/src ; xzcat ~/releng-8.2-zfsv28-20110616.patch.xz | patch) > - rm /usr/src/sys/cddl/compat/opensolaris/sys/sysmacros.h.* > > Then followed Handbook chapter 24.7 "Rebuilding world". > > @Martin Matuska: thanks for providing v28 patches BTW, whenever you nuke src, you should probably nuke the csup (or cvsup in your case; not sure why you're using that) CVS "database" as well. For csup, this lives in /var/db/sup. For cvsup, this lives in /usr/sup. -- | Jeremy Chadwick jdc at parodius.com | | Parodius Networking http://www.parodius.com/ | | UNIX Systems Administrator Mountain View, CA, US | | Making life hard for others since 1977. 
PGP 4BD6C0CB | From owner-freebsd-fs@FreeBSD.ORG Wed Jun 29 22:10:42 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id A4B041065670 for ; Wed, 29 Jun 2011 22:10:42 +0000 (UTC) (envelope-from numisemis@gmail.com) Received: from mail-bw0-f54.google.com (mail-bw0-f54.google.com [209.85.214.54]) by mx1.freebsd.org (Postfix) with ESMTP id 2F5948FC12 for ; Wed, 29 Jun 2011 22:10:41 +0000 (UTC) Received: by bwa20 with SMTP id 20so1995698bwa.13 for ; Wed, 29 Jun 2011 15:10:41 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=references:from:in-reply-to:mime-version:date:message-id:subject:to :cc:content-type; bh=K4dmKNAptC1+XqaZE39Dek5rRyUKQwM295Xtu1Ksmdc=; b=wzuL6A5cXyXyaxk7JbI3jnDEzGI0918lMWQwhn/AI5IPDWZMEjU+VSwwUE6qRvr7MC NPCAYizNvlgiv7dRIYHStO+WvFffAYtW+AQXbACcHXeUcaVa9FdwSd4MoyNRtD3mYdbT Q3IkXFfKX59DWPQZ/OY07Bpgni2HnUJbD1OQ0= Received: by 10.204.12.68 with SMTP id w4mr1238128bkw.160.1309385441056; Wed, 29 Jun 2011 15:10:41 -0700 (PDT) References: <4DB8EF02.8060406@bk.ru> <1079311802.20110428070300@nitronet.pl> <20110629213054.GA85818@icarus.home.lan> From: =?UTF-8?Q?=C5=A0imun_Mikecin?= In-Reply-To: <20110629213054.GA85818@icarus.home.lan> Mime-Version: 1.0 (iPhone Mail 8J2) Date: Thu, 30 Jun 2011 00:10:33 +0200 Message-ID: <-8448816796365782292@unknownmsgid> To: Jeremy Chadwick Content-Type: text/plain; charset=ISO-8859-1 Cc: "freebsd-fs@freebsd.org" , Leroy van Logchem Subject: Re: ZFS v28 for 8.2-STABLE X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 29 Jun 2011 22:10:42 -0000 29. 6. 2011., u 23:31, Jeremy Chadwick napisao: >> > > BTW, whenever you nuke src, you should probably nuke the csup (or cvsup > in your case; not sure why you're using that) CVS "database" as well. > For csup, this lives in /var/db/sup. For cvsup, this lives in /usr/sup. Would using svn (svn.freebsd.org repository) instead of csup be a better solution? Which repository (svn or CVS) is master, and which one is replicated? 
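[Illustrative aside, not part of the original message: at the time of this thread a stable/8 source tree could be checked out over Subversion with something like

    svn checkout svn://svn.freebsd.org/base/stable/8 /usr/src

The question of which repository is the master is answered further down in the thread.]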
> From owner-freebsd-fs@FreeBSD.ORG Wed Jun 29 22:55:47 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 4A9C5106566B for ; Wed, 29 Jun 2011 22:55:47 +0000 (UTC) (envelope-from jdc@koitsu.dyndns.org) Received: from qmta02.emeryville.ca.mail.comcast.net (qmta02.emeryville.ca.mail.comcast.net [76.96.30.24]) by mx1.freebsd.org (Postfix) with ESMTP id 2F1338FC1A for ; Wed, 29 Jun 2011 22:55:46 +0000 (UTC) Received: from omta15.emeryville.ca.mail.comcast.net ([76.96.30.71]) by qmta02.emeryville.ca.mail.comcast.net with comcast id 1yvY1h0011Y3wxoA2yvkWL; Wed, 29 Jun 2011 22:55:44 +0000 Received: from koitsu.dyndns.org ([67.180.84.87]) by omta15.emeryville.ca.mail.comcast.net with comcast id 1yuS1h00Y1t3BNj8byuTcs; Wed, 29 Jun 2011 22:54:28 +0000 Received: by icarus.home.lan (Postfix, from userid 1000) id 0E1B4102C19; Wed, 29 Jun 2011 15:55:44 -0700 (PDT) Date: Wed, 29 Jun 2011 15:55:44 -0700 From: Jeremy Chadwick To: Šimun Mikecin Message-ID: <20110629225544.GA87060@icarus.home.lan> References: <4DB8EF02.8060406@bk.ru> <1079311802.20110428070300@nitronet.pl> <20110629213054.GA85818@icarus.home.lan> <-8448816796365782292@unknownmsgid> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <-8448816796365782292@unknownmsgid> User-Agent: Mutt/1.5.21 (2010-09-15) Cc: "freebsd-fs@freebsd.org" , Leroy van Logchem Subject: Re: ZFS v28 for 8.2-STABLE X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 29 Jun 2011 22:55:47 -0000 On Thu, Jun 30, 2011 at 12:10:33AM +0200, Šimun Mikecin wrote: > 29. 6. 2011., u 23:31, Jeremy Chadwick napisao: > > BTW, whenever you nuke src, you should probably nuke the csup (or cvsup > > in your case; not sure why you're using that) CVS "database" as well. > > For csup, this lives in /var/db/sup. For cvsup, this lives in /usr/sup. > > Would using svn (svn.freebsd.org repository) instead of csup be a > better solution? > Which repository (svn or CVS) is master, and which one is replicated? I don't use Subversion so I can't answer that part of the question. I don't quite understand how/why replication (master vs. mirror) is a concern here. My point is that with csup/cvsup, people need to be aware that there is a "CVS checkout"-like file associated with the tree as well, which lives in /var/db/sup (for csup) or /usr/sup (for cvsup) and which needs to be removed if you pull the files out from underneath it (e.g. rm -fr /usr/src/*, etc.). I could write another 5-6 paragraphs about this explaining the technical aspects, comparing it to stock CVS (re: CVS/ directory during checkout) and other things, but that's beside the point. My point is that most people don't know of /var/db/sup and /usr/sup, and when you make them aware of it, they go "...oh! Maybe that's why my local copy doesn't match what's on the server!" -- | Jeremy Chadwick jdc at parodius.com | | Parodius Networking http://www.parodius.com/ | | UNIX Systems Administrator Mountain View, CA, US | | Making life hard for others since 1977.
PGP 4BD6C0CB | From owner-freebsd-fs@FreeBSD.ORG Thu Jun 30 02:54:01 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 223BB106564A for ; Thu, 30 Jun 2011 02:54:01 +0000 (UTC) (envelope-from jhellenthal@gmail.com) Received: from mail-yw0-f54.google.com (mail-yw0-f54.google.com [209.85.213.54]) by mx1.freebsd.org (Postfix) with ESMTP id C36648FC14 for ; Thu, 30 Jun 2011 02:54:00 +0000 (UTC) Received: by ywf7 with SMTP id 7so949541ywf.13 for ; Wed, 29 Jun 2011 19:54:00 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=sender:date:from:to:cc:subject:message-id:references:mime-version :content-type:content-disposition:in-reply-to; bh=N299+HxmHVkBgtMok/TGyXipqI2TKJ0/zd1ixeziP/M=; b=b3zKcmBTxPOOF4ghr2ZJsmrhyUUFTlmMDBghA02me04J6qJDzeNdTMomiZdVz7Wau8 pBG/J+W77uz3Hp/OKK4gya0/WMZr73usoCwvh3j9h+OIuym7pOZKWrG0rgOz8AwGsrUn Vp3zT61lLjf+MwnG9Z6uMpHGzTwGNTEl1dszU= Received: by 10.91.157.18 with SMTP id j18mr1369948ago.110.1309402439924; Wed, 29 Jun 2011 19:53:59 -0700 (PDT) Received: from DataIX.net (adsl-99-190-86-179.dsl.klmzmi.sbcglobal.net [99.190.86.179]) by mx.google.com with ESMTPS id x33sm1602394ana.48.2011.06.29.19.53.57 (version=TLSv1/SSLv3 cipher=OTHER); Wed, 29 Jun 2011 19:53:58 -0700 (PDT) Sender: "J. Hellenthal" Received: from DataIX.net (localhost [127.0.0.1]) by DataIX.net (8.14.5/8.14.5) with ESMTP id p5U2rsDW050325 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Wed, 29 Jun 2011 22:53:55 -0400 (EDT) (envelope-from jhell@DataIX.net) Received: (from jhell@localhost) by DataIX.net (8.14.5/8.14.5/Submit) id p5U2rs9g050324; Wed, 29 Jun 2011 22:53:54 -0400 (EDT) (envelope-from jhell@DataIX.net) Date: Wed, 29 Jun 2011 22:53:54 -0400 From: jhell To: ??imun Mikecin Message-ID: <20110630025354.GB41789@DataIX.net> References: <4DB8EF02.8060406@bk.ru> <1079311802.20110428070300@nitronet.pl> <20110629213054.GA85818@icarus.home.lan> <-8448816796365782292@unknownmsgid> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="dDRMvlgZJXvWKvBx" Content-Disposition: inline In-Reply-To: <-8448816796365782292@unknownmsgid> Cc: "freebsd-fs@freebsd.org" , Leroy van Logchem Subject: Re: ZFS v28 for 8.2-STABLE X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 30 Jun 2011 02:54:01 -0000 --dDRMvlgZJXvWKvBx Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On Thu, Jun 30, 2011 at 12:10:33AM +0200, ??imun Mikecin wrote: > 29. 6. 2011., u 23:31, Jeremy Chadwick napisao: >=20 > >> > > > > BTW, whenever you nuke src, you should probably nuke the csup (or cvsup > > in your case; not sure why you're using that) CVS "database" as well. > > For csup, this lives in /var/db/sup. For cvsup, this lives in /usr/sup. >=20 > Would using svn (svn.freebsd.org repository) instead of csup be a > better solution? > Which repository (svn or CVS) is master, and which one is replicated? > > CVS revisions are replicated from the cvs2svn[1] so backward compatibility isn't broken for those that rely on programs like csup(1) or cvsup(1) from ports. 
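(A generic Subversion aside, not anything specific to the FreeBSD infrastructure: once a tree is tracked with svn you can ask it directly which repository revision it is at, and that same number ends up in the kernel version string, like the r223343M in the uname output further down. Illustrative only:

  # svn info /usr/src | grep Revision
  Revision: 223343

There is no equivalent single revision number for a csup/cvsup checkout, which is part of why tracking sources with svn is handy.)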
If you can spare another ~500MB of disk space, I would highly recommend using svn instead, as the benefits of using svn[2] to track sources go well beyond what csup(1) or cvsup(1) offer. Anyway, not to cut you short, but if you're using 8.2-STABLE (and YES, that's "8.2-STABLE", meaning the stable branch as it stands after 8.2-RELEASE!) then you can safely check out the sources from [3]; ZFSv28 has already been merged there, so no patching is necessary. For those who believe it really is "8-STABLE": sorry, but that name could designate the branch anywhere from 8.0 -> 8.3 and after. You should at least have the diligence to specify the branch along with the point in its release history you mean, as that is pertinent to where the sources were at that given time. uname -a: FreeBSD disbatch.dataix.net 8.2-STABLE FreeBSD 8.2-STABLE #0 r223343M 352:951b1b185f19 Tue Jun 21 11:14:51 EDT 2011 jhell@DataIX.net:/usr/obj/usr/src/sys/DISBATCH i386 1). http://svn.freebsd.org/base/cvs2svn/ 2). ports/devel/subversion-freebsd 3). svn co svn://svn.freebsd.org/base/stable/8/ /usr/src/ Regards. --dDRMvlgZJXvWKvBx Content-Type: application/pgp-signature -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.17 (FreeBSD) Comment: http://bit.ly/0x89D8547E iQEcBAEBAgAGBQJOC+VBAAoJEJBXh4mJ2FR+o9wH/36NEGzhGzp2fXaKjwUK96WR t+wKaRHIpDOxyKgdBbCEvgtXrQDYPmlgbr9V04vi9q8A3GhdqS+ZxcEcVYADxQiG xKAlEPo5ccMm9AgXdKrblNUEqabJCFx4RTSxLDYOCp9DQ60ROAnUjYLXzSBWVN+R dWjzGjgx6nNNuzrR03TTtzHyFqN3usiklsAAij9YoyClFQSIyghFHERHlBoQ01qo CErYtLfALne7KLrBGBdCac8KZn5tUptZTVIUwTyY3aNzeyNX2cglaT/cpTA5qNrk 7zrttTsB+Ymctt7ApioaBkXoPan3xpkm8dyaIKBYDCQ+cmH6Nugtrs8U19t8W9g= =XUYy -----END PGP SIGNATURE----- --dDRMvlgZJXvWKvBx-- From owner-freebsd-fs@FreeBSD.ORG Thu Jun 30 06:54:44 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 12A08106566B for ; Thu, 30 Jun 2011 06:54:44 +0000 (UTC) (envelope-from ktouet@gmail.com) Received: from mail-ww0-f50.google.com (mail-ww0-f50.google.com [74.125.82.50]) by mx1.freebsd.org (Postfix) with ESMTP id A4C6C8FC18 for ; Thu, 30 Jun 2011 06:54:43 +0000 (UTC) Received: by wwe6 with SMTP id 6so1902836wwe.31 for ; Wed, 29 Jun 2011 23:54:42 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=mime-version:date:message-id:subject:from:to:content-type; bh=huAliHFp6usOKvg5hXtoRAAatzzIfQzv3SZKTn6WIho=; b=f1L4deL9GK9kb4CIMAcIBf6bx5GR9wr5MShCewTNLjFPnG1bK5eICThwe4opf+vvY5 uz4j1C1XLuFsRInkqPcHnoN6UYr9UpdlhmXsqBOr0VPu2y3D+wjsgzXTqu61Z2+ndqed kBYrNAcAItSrDpp0vqGOeuy9VwsVguG01wiI8= MIME-Version: 1.0 Received: by 10.217.3.80 with SMTP id q58mr818340wes.53.1309415340120; Wed, 29 Jun 2011 23:29:00 -0700 (PDT) Received: by 10.216.163.148 with HTTP; Wed, 29 Jun 2011 23:29:00 -0700 (PDT) Date: Thu, 30 Jun 2011 00:29:00 -0600 Message-ID: From: Kurt Touet To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=ISO-8859-1 Subject: ZFS v28 array doesn't expand with larger disks in mirror X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 30 Jun 2011 06:54:44 -0000 I have an admittedly odd zfs v28 array configuration under stable/8 r223484: # zpool status storage pool: storage state: ONLINE scan: resilvered 1.21T in 10h50m with 0 errors on Wed Jun 29 23:21:46 2011 config: NAME STATE READ WRITE CKSUM storage ONLINE 0 0 0 raidz1-0 ONLINE 0 0 0 ad14 ONLINE 0 0 0 ad6 ONLINE 0 0 0 ad12 ONLINE 0
0 0 ad4 ONLINE 0 0 0 mirror-1 ONLINE 0 0 0 ad20 ONLINE 0 0 0 ad18 ONLINE 0 0 0 This was simply due to the need to expand the size of the original raidz1 only array and constraints within the box. All drives in the box _were_ 1.5TB. I had a drive in the mirror die this week, and I had 2 spare 2TB drives on hand. So, I decided to replace both of the 1.5TB drives in the array with 2TB drives (and free up a little more space on the box). However, after replacing both drives, the array did not expand in size. It still acts as if the mirror contains 1.5TB drives: storage 6.28T 548G raidz1 5.07T 399G mirror 1.21T 150G Is this normal behaviour? It was my understanding that zfs automatically adapted to having additional drive space in vdevs. -kurt From owner-freebsd-fs@FreeBSD.ORG Thu Jun 30 07:03:59 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 80EBE1065675 for ; Thu, 30 Jun 2011 07:03:59 +0000 (UTC) (envelope-from artemb@gmail.com) Received: from mail-wy0-f182.google.com (mail-wy0-f182.google.com [74.125.82.182]) by mx1.freebsd.org (Postfix) with ESMTP id 192E28FC0C for ; Thu, 30 Jun 2011 07:03:58 +0000 (UTC) Received: by wyg24 with SMTP id 24so1830333wyg.13 for ; Thu, 30 Jun 2011 00:03:57 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=mime-version:sender:in-reply-to:references:date :x-google-sender-auth:message-id:subject:from:to:cc:content-type :content-transfer-encoding; bh=olCK2GKj7gwTLEEcZk6HFDkkBET2ZxKfFMe6VFJxFTI=; b=qJ1Hu4yYk35z3Z1CVPunEpd8pJqGYG0pAMP3jNz4EzgyaNIOqBMgiOWSXaucv44LAs 5nA1QMvvW0mpF4TbYbQ1Q2TD8t6sTv3HlK34VnetJOD3EFxFGSS0z7oxWy+QmW4dSjGg 7Mq53rlDslNCTiZrOeksjcaOUItmhYM+M+HiA= MIME-Version: 1.0 Received: by 10.216.139.37 with SMTP id b37mr2432733wej.41.1309417437762; Thu, 30 Jun 2011 00:03:57 -0700 (PDT) Sender: artemb@gmail.com Received: by 10.216.135.169 with HTTP; Thu, 30 Jun 2011 00:03:57 -0700 (PDT) In-Reply-To: References: Date: Thu, 30 Jun 2011 00:03:57 -0700 X-Google-Sender-Auth: ycmNHEK4zyloMV3iGYygoSnqhH8 Message-ID: From: Artem Belevich To: Kurt Touet Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable Cc: freebsd-fs@freebsd.org Subject: Re: ZFS v28 array doesn't expand with larger disks in mirror X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 30 Jun 2011 07:03:59 -0000 On Wed, Jun 29, 2011 at 11:29 PM, Kurt Touet wrote: > I have an admittedly odd zfs v28 array configuration under stable/8 r2234= 84: > > # zpool status storage > =A0pool: storage > =A0state: ONLINE > =A0scan: resilvered 1.21T in 10h50m with 0 errors on Wed Jun 29 23:21:46 = 2011 > config: > > =A0 =A0 =A0 =A0NAME =A0 =A0 =A0 =A0STATE =A0 =A0 READ WRITE CKSUM > =A0 =A0 =A0 =A0storage =A0 =A0 ONLINE =A0 =A0 =A0 0 =A0 =A0 0 =A0 =A0 0 > =A0 =A0 =A0 =A0 =A0raidz1-0 =A0ONLINE =A0 =A0 =A0 0 =A0 =A0 0 =A0 =A0 0 > =A0 =A0 =A0 =A0 =A0 =A0ad14 =A0 =A0ONLINE =A0 =A0 =A0 0 =A0 =A0 0 =A0 =A0= 0 > =A0 =A0 =A0 =A0 =A0 =A0ad6 =A0 =A0 ONLINE =A0 =A0 =A0 0 =A0 =A0 0 =A0 =A0= 0 > =A0 =A0 =A0 =A0 =A0 =A0ad12 =A0 =A0ONLINE =A0 =A0 =A0 0 =A0 =A0 0 =A0 =A0= 0 > =A0 =A0 =A0 =A0 =A0 =A0ad4 =A0 =A0 ONLINE =A0 =A0 =A0 0 =A0 =A0 0 =A0 =A0= 0 > =A0 =A0 =A0 =A0 =A0mirror-1 =A0ONLINE =A0 =A0 =A0 0 =A0 =A0 0 =A0 =A0 0 > =A0 =A0 =A0 =A0 =A0 =A0ad20 =A0 =A0ONLINE =A0 =A0 =A0 0 =A0 =A0 
0 =A0 =A0= 0 > =A0 =A0 =A0 =A0 =A0 =A0ad18 =A0 =A0ONLINE =A0 =A0 =A0 0 =A0 =A0 0 =A0 =A0= 0 > > This was simply due to the need to expand the size of the original > raidz1 only array and constraints within the box. =A0All drives in the > box _were_ 1.5TB. =A0I had a drive in the mirror die this week, and I > had 2 spare 2TB drives on hand. =A0So, I decided to replace both of the > 1.5TB drives in the array with 2TB drives (and free up a little more > space on the box). =A0However, after replacing both drives, the array > did not expand in size. =A0It still acts as if the mirror contains 1.5TB > drives: > > storage =A0 =A0 6.28T =A0 548G > =A0raidz1 =A0 =A05.07T =A0 399G > =A0mirror =A0 =A01.21T =A0 150G > > Is this normal behaviour? =A0It was my understanding that zfs > automatically adapted to having additional drive space in vdevs. You still have to set 'autoexpand' property on the pool in order for expansion to happen. Perevious versions would expand the pool on re-import or on boot. --Artem > > -kurt > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > From owner-freebsd-fs@FreeBSD.ORG Thu Jun 30 07:54:24 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 25088106566B; Thu, 30 Jun 2011 07:54:24 +0000 (UTC) (envelope-from ktouet@gmail.com) Received: from mail-wy0-f182.google.com (mail-wy0-f182.google.com [74.125.82.182]) by mx1.freebsd.org (Postfix) with ESMTP id 8B5C88FC13; Thu, 30 Jun 2011 07:54:23 +0000 (UTC) Received: by wyg24 with SMTP id 24so1862146wyg.13 for ; Thu, 30 Jun 2011 00:54:22 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type:content-transfer-encoding; bh=NIps1ayIpzQvYBiNz9uVRD7cwqt+HibFRWxH4j9oDgQ=; b=NrLY2uupHpOH89C/fAUlVgToNrEBxnqi5EePKTcrHoeqpbd4CmWchX8FeJgkUHDyU3 hX7GQLMEzHFO+jeLMOn6qaX4PeaiIL1do0iH+qzTQg6kUo44szURl/nO+pvostIl21Ps 9INiGFw6KcayfemtI99g22gwqQv6JeMAtiW6Q= MIME-Version: 1.0 Received: by 10.217.3.80 with SMTP id q58mr889915wes.53.1309420462279; Thu, 30 Jun 2011 00:54:22 -0700 (PDT) Received: by 10.216.163.148 with HTTP; Thu, 30 Jun 2011 00:54:22 -0700 (PDT) In-Reply-To: References: Date: Thu, 30 Jun 2011 01:54:22 -0600 Message-ID: From: Kurt Touet To: Artem Belevich Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable Cc: freebsd-fs@freebsd.org Subject: Re: ZFS v28 array doesn't expand with larger disks in mirror X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 30 Jun 2011 07:54:24 -0000 Thanks for that info Artem. I have now set that property, exported/imported, and rebooted to no avail. Is this something that needed to be set ahead of time? 
Thanks, -kurt On Thu, Jun 30, 2011 at 1:03 AM, Artem Belevich wrote: > On Wed, Jun 29, 2011 at 11:29 PM, Kurt Touet wrote: >> I have an admittedly odd zfs v28 array configuration under stable/8 r223= 484: >> >> # zpool status storage >> =A0pool: storage >> =A0state: ONLINE >> =A0scan: resilvered 1.21T in 10h50m with 0 errors on Wed Jun 29 23:21:46= 2011 >> config: >> >> =A0 =A0 =A0 =A0NAME =A0 =A0 =A0 =A0STATE =A0 =A0 READ WRITE CKSUM >> =A0 =A0 =A0 =A0storage =A0 =A0 ONLINE =A0 =A0 =A0 0 =A0 =A0 0 =A0 =A0 0 >> =A0 =A0 =A0 =A0 =A0raidz1-0 =A0ONLINE =A0 =A0 =A0 0 =A0 =A0 0 =A0 =A0 0 >> =A0 =A0 =A0 =A0 =A0 =A0ad14 =A0 =A0ONLINE =A0 =A0 =A0 0 =A0 =A0 0 =A0 = =A0 0 >> =A0 =A0 =A0 =A0 =A0 =A0ad6 =A0 =A0 ONLINE =A0 =A0 =A0 0 =A0 =A0 0 =A0 = =A0 0 >> =A0 =A0 =A0 =A0 =A0 =A0ad12 =A0 =A0ONLINE =A0 =A0 =A0 0 =A0 =A0 0 =A0 = =A0 0 >> =A0 =A0 =A0 =A0 =A0 =A0ad4 =A0 =A0 ONLINE =A0 =A0 =A0 0 =A0 =A0 0 =A0 = =A0 0 >> =A0 =A0 =A0 =A0 =A0mirror-1 =A0ONLINE =A0 =A0 =A0 0 =A0 =A0 0 =A0 =A0 0 >> =A0 =A0 =A0 =A0 =A0 =A0ad20 =A0 =A0ONLINE =A0 =A0 =A0 0 =A0 =A0 0 =A0 = =A0 0 >> =A0 =A0 =A0 =A0 =A0 =A0ad18 =A0 =A0ONLINE =A0 =A0 =A0 0 =A0 =A0 0 =A0 = =A0 0 >> >> This was simply due to the need to expand the size of the original >> raidz1 only array and constraints within the box. =A0All drives in the >> box _were_ 1.5TB. =A0I had a drive in the mirror die this week, and I >> had 2 spare 2TB drives on hand. =A0So, I decided to replace both of the >> 1.5TB drives in the array with 2TB drives (and free up a little more >> space on the box). =A0However, after replacing both drives, the array >> did not expand in size. =A0It still acts as if the mirror contains 1.5TB >> drives: >> >> storage =A0 =A0 6.28T =A0 548G >> =A0raidz1 =A0 =A05.07T =A0 399G >> =A0mirror =A0 =A01.21T =A0 150G >> >> Is this normal behaviour? =A0It was my understanding that zfs >> automatically adapted to having additional drive space in vdevs. > > You still have to set 'autoexpand' property on the pool in order for > expansion to happen. Perevious versions would expand the pool on > re-import or on boot. 
> > --Artem > >> >> -kurt >> _______________________________________________ >> freebsd-fs@freebsd.org mailing list >> http://lists.freebsd.org/mailman/listinfo/freebsd-fs >> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" >> > From owner-freebsd-fs@FreeBSD.ORG Thu Jun 30 15:14:52 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 0329B106566B for ; Thu, 30 Jun 2011 15:14:52 +0000 (UTC) (envelope-from artemb@gmail.com) Received: from mail-ww0-f50.google.com (mail-ww0-f50.google.com [74.125.82.50]) by mx1.freebsd.org (Postfix) with ESMTP id 8ECF38FC12 for ; Thu, 30 Jun 2011 15:14:51 +0000 (UTC) Received: by wwe6 with SMTP id 6so2319677wwe.31 for ; Thu, 30 Jun 2011 08:14:50 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=mime-version:sender:in-reply-to:references:date :x-google-sender-auth:message-id:subject:from:to:cc:content-type :content-transfer-encoding; bh=lrIw8j8aWiJksUjMly7E5hCiYuMyoUvDWosKNuPZfKk=; b=lP9HLaL4WLwAiM3+Zohq4R9PsKY2VQjs3HS31OeeUp6WxqkqA0JyRZTMaRyUnMYjOG UofczKq0INx/+OlDole2Ee7JhamhR224W0dBwnrlDx63r1fzzgcmAGf+qxuzhVmtffhb XjyDXH6g7b7eZwHNIIlCxITPGTvdWuiuoAsns= MIME-Version: 1.0 Received: by 10.216.28.1 with SMTP id f1mr1963934wea.41.1309446890219; Thu, 30 Jun 2011 08:14:50 -0700 (PDT) Sender: artemb@gmail.com Received: by 10.216.135.169 with HTTP; Thu, 30 Jun 2011 08:14:50 -0700 (PDT) In-Reply-To: References: Date: Thu, 30 Jun 2011 08:14:50 -0700 X-Google-Sender-Auth: KmDSwk-VbTuKwpanS4Ja6E8lMu0 Message-ID: From: Artem Belevich To: Kurt Touet Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable Cc: freebsd-fs@freebsd.org Subject: Re: ZFS v28 array doesn't expand with larger disks in mirror X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 30 Jun 2011 15:14:52 -0000 On Thu, Jun 30, 2011 at 12:54 AM, Kurt Touet wrote: > Thanks for that info Artem. =A0I have now set that property, > exported/imported, and rebooted to no avail. =A0Is this something that > needed to be set ahead of time? I guess autoexpand property only matter on disk change and does not work retroactively. 
Try "zpool online -e" --Artem > > Thanks, > -kurt > > On Thu, Jun 30, 2011 at 1:03 AM, Artem Belevich wrote: >> On Wed, Jun 29, 2011 at 11:29 PM, Kurt Touet wrote: >>> I have an admittedly odd zfs v28 array configuration under stable/8 r22= 3484: >>> >>> # zpool status storage >>> =A0pool: storage >>> =A0state: ONLINE >>> =A0scan: resilvered 1.21T in 10h50m with 0 errors on Wed Jun 29 23:21:4= 6 2011 >>> config: >>> >>> =A0 =A0 =A0 =A0NAME =A0 =A0 =A0 =A0STATE =A0 =A0 READ WRITE CKSUM >>> =A0 =A0 =A0 =A0storage =A0 =A0 ONLINE =A0 =A0 =A0 0 =A0 =A0 0 =A0 =A0 0 >>> =A0 =A0 =A0 =A0 =A0raidz1-0 =A0ONLINE =A0 =A0 =A0 0 =A0 =A0 0 =A0 =A0 0 >>> =A0 =A0 =A0 =A0 =A0 =A0ad14 =A0 =A0ONLINE =A0 =A0 =A0 0 =A0 =A0 0 =A0 = =A0 0 >>> =A0 =A0 =A0 =A0 =A0 =A0ad6 =A0 =A0 ONLINE =A0 =A0 =A0 0 =A0 =A0 0 =A0 = =A0 0 >>> =A0 =A0 =A0 =A0 =A0 =A0ad12 =A0 =A0ONLINE =A0 =A0 =A0 0 =A0 =A0 0 =A0 = =A0 0 >>> =A0 =A0 =A0 =A0 =A0 =A0ad4 =A0 =A0 ONLINE =A0 =A0 =A0 0 =A0 =A0 0 =A0 = =A0 0 >>> =A0 =A0 =A0 =A0 =A0mirror-1 =A0ONLINE =A0 =A0 =A0 0 =A0 =A0 0 =A0 =A0 0 >>> =A0 =A0 =A0 =A0 =A0 =A0ad20 =A0 =A0ONLINE =A0 =A0 =A0 0 =A0 =A0 0 =A0 = =A0 0 >>> =A0 =A0 =A0 =A0 =A0 =A0ad18 =A0 =A0ONLINE =A0 =A0 =A0 0 =A0 =A0 0 =A0 = =A0 0 >>> >>> This was simply due to the need to expand the size of the original >>> raidz1 only array and constraints within the box. =A0All drives in the >>> box _were_ 1.5TB. =A0I had a drive in the mirror die this week, and I >>> had 2 spare 2TB drives on hand. =A0So, I decided to replace both of the >>> 1.5TB drives in the array with 2TB drives (and free up a little more >>> space on the box). =A0However, after replacing both drives, the array >>> did not expand in size. =A0It still acts as if the mirror contains 1.5T= B >>> drives: >>> >>> storage =A0 =A0 6.28T =A0 548G >>> =A0raidz1 =A0 =A05.07T =A0 399G >>> =A0mirror =A0 =A01.21T =A0 150G >>> >>> Is this normal behaviour? =A0It was my understanding that zfs >>> automatically adapted to having additional drive space in vdevs. >> >> You still have to set 'autoexpand' property on the pool in order for >> expansion to happen. Perevious versions would expand the pool on >> re-import or on boot. 
>> >> --Artem >> >>> >>> -kurt >>> _______________________________________________ >>> freebsd-fs@freebsd.org mailing list >>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs >>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" >>> >> > From owner-freebsd-fs@FreeBSD.ORG Thu Jun 30 19:27:53 2011 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 471E2106566B; Thu, 30 Jun 2011 19:27:53 +0000 (UTC) (envelope-from ktouet@gmail.com) Received: from mail-wy0-f182.google.com (mail-wy0-f182.google.com [74.125.82.182]) by mx1.freebsd.org (Postfix) with ESMTP id A5B2D8FC12; Thu, 30 Jun 2011 19:27:52 +0000 (UTC) Received: by wyg24 with SMTP id 24so2444048wyg.13 for ; Thu, 30 Jun 2011 12:27:51 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type:content-transfer-encoding; bh=t3NaKqSvOxzb96i69BHrzv9b5F+ndEt9+wgCjKC4DuM=; b=fRDxsp5lqX6BTIti8A7gR/4UBENVQriktUZ0hAs3/pn3sZxdxq9f7blHHxI9T2fQHC oPLs+e4Ii4G0h8mL9fskmPX/xKB3T27eGJG8EVEovbD+G7PePYyFtwQIRV5MNOgl+YB6 PFTPwgTPzw4DBVQsK+67V60vLwX+QeWofSkyM= MIME-Version: 1.0 Received: by 10.216.173.14 with SMTP id u14mr399545wel.38.1309462071445; Thu, 30 Jun 2011 12:27:51 -0700 (PDT) Received: by 10.216.163.148 with HTTP; Thu, 30 Jun 2011 12:27:51 -0700 (PDT) In-Reply-To: References: Date: Thu, 30 Jun 2011 13:27:51 -0600 Message-ID: From: Kurt Touet To: Artem Belevich Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable Cc: freebsd-fs@freebsd.org Subject: Re: ZFS v28 array doesn't expand with larger disks in mirror X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 30 Jun 2011 19:27:53 -0000 #zpool online -e storage ad20 #zpool online -e storage ad18 storage 6.33T 958G raidz1 5.10T 363G mirror 1.23T 595G Worked like a charm! Thanks for the help, -kurt On Thu, Jun 30, 2011 at 9:14 AM, Artem Belevich wrote: > On Thu, Jun 30, 2011 at 12:54 AM, Kurt Touet wrote: >> Thanks for that info Artem. =A0I have now set that property, >> exported/imported, and rebooted to no avail. =A0Is this something that >> needed to be set ahead of time? > > I guess autoexpand property only matter on disk change and does not > work retroactively. 
Try "zpool online -e" > > --Artem > >> >> Thanks, >> -kurt >> >> On Thu, Jun 30, 2011 at 1:03 AM, Artem Belevich wrote: >>> On Wed, Jun 29, 2011 at 11:29 PM, Kurt Touet wrote: >>>> I have an admittedly odd zfs v28 array configuration under stable/8 r2= 23484: >>>> >>>> # zpool status storage >>>> =A0pool: storage >>>> =A0state: ONLINE >>>> =A0scan: resilvered 1.21T in 10h50m with 0 errors on Wed Jun 29 23:21:= 46 2011 >>>> config: >>>> >>>> =A0 =A0 =A0 =A0NAME =A0 =A0 =A0 =A0STATE =A0 =A0 READ WRITE CKSUM >>>> =A0 =A0 =A0 =A0storage =A0 =A0 ONLINE =A0 =A0 =A0 0 =A0 =A0 0 =A0 =A0 = 0 >>>> =A0 =A0 =A0 =A0 =A0raidz1-0 =A0ONLINE =A0 =A0 =A0 0 =A0 =A0 0 =A0 =A0 = 0 >>>> =A0 =A0 =A0 =A0 =A0 =A0ad14 =A0 =A0ONLINE =A0 =A0 =A0 0 =A0 =A0 0 =A0 = =A0 0 >>>> =A0 =A0 =A0 =A0 =A0 =A0ad6 =A0 =A0 ONLINE =A0 =A0 =A0 0 =A0 =A0 0 =A0 = =A0 0 >>>> =A0 =A0 =A0 =A0 =A0 =A0ad12 =A0 =A0ONLINE =A0 =A0 =A0 0 =A0 =A0 0 =A0 = =A0 0 >>>> =A0 =A0 =A0 =A0 =A0 =A0ad4 =A0 =A0 ONLINE =A0 =A0 =A0 0 =A0 =A0 0 =A0 = =A0 0 >>>> =A0 =A0 =A0 =A0 =A0mirror-1 =A0ONLINE =A0 =A0 =A0 0 =A0 =A0 0 =A0 =A0 = 0 >>>> =A0 =A0 =A0 =A0 =A0 =A0ad20 =A0 =A0ONLINE =A0 =A0 =A0 0 =A0 =A0 0 =A0 = =A0 0 >>>> =A0 =A0 =A0 =A0 =A0 =A0ad18 =A0 =A0ONLINE =A0 =A0 =A0 0 =A0 =A0 0 =A0 = =A0 0 >>>> >>>> This was simply due to the need to expand the size of the original >>>> raidz1 only array and constraints within the box. =A0All drives in the >>>> box _were_ 1.5TB. =A0I had a drive in the mirror die this week, and I >>>> had 2 spare 2TB drives on hand. =A0So, I decided to replace both of th= e >>>> 1.5TB drives in the array with 2TB drives (and free up a little more >>>> space on the box). =A0However, after replacing both drives, the array >>>> did not expand in size. =A0It still acts as if the mirror contains 1.5= TB >>>> drives: >>>> >>>> storage =A0 =A0 6.28T =A0 548G >>>> =A0raidz1 =A0 =A05.07T =A0 399G >>>> =A0mirror =A0 =A01.21T =A0 150G >>>> >>>> Is this normal behaviour? =A0It was my understanding that zfs >>>> automatically adapted to having additional drive space in vdevs. >>> >>> You still have to set 'autoexpand' property on the pool in order for >>> expansion to happen. Perevious versions would expand the pool on >>> re-import or on boot. >>> >>> --Artem >>> >>>> >>>> -kurt >>>> _______________________________________________ >>>> freebsd-fs@freebsd.org mailing list >>>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs >>>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" >>>> >>> >> >