From owner-freebsd-fs@FreeBSD.ORG Sun Sep 20 22:22:55 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 1C586106566B for ; Sun, 20 Sep 2009 22:22:55 +0000 (UTC) (envelope-from ktouet@gmail.com) Received: from mail-yw0-f199.google.com (mail-yw0-f199.google.com [209.85.211.199]) by mx1.freebsd.org (Postfix) with ESMTP id D181C8FC12 for ; Sun, 20 Sep 2009 22:22:54 +0000 (UTC) Received: by ywh37 with SMTP id 37so2854451ywh.28 for ; Sun, 20 Sep 2009 15:22:54 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:date:message-id:subject :from:to:content-type; bh=YI/+B76HedaPioctwmcvygBwY1ODGsbc6VNixFMjPak=; b=g1ibe9Dldq5ia1Xr+A10YVYED1CvlUlIixqFplWbTKcB29hGxNFivG7UFZ4YEyI+R7 MUzFPAaND26TQGN0fmhdYTqH66yylZqcvJ9gcMMY7vdDzi/83nLFlMbeej7ISmOGG+6G EVALZ1KqZ7d0XD8St1SXzIuVZPj4j3op1s/oc= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:date:message-id:subject:from:to:content-type; b=MmK2n1MgC8gjQJpYyF+ehH5NY/iTiByuwglQ1d9TEu9CC/AbjpGnhU0B749EbHrzob /7/5mLlne9lbJ4KWYPnhyguh0kzzSbpnrRM4mR28O4FJixyiLtfMxrb5xK8pNYC+beDR ZN4/42pyBlyq6hGvKqsuWA5dE5yD7rbHy7r6U= MIME-Version: 1.0 Received: by 10.90.13.15 with SMTP id 15mr2828487agm.74.1253484010343; Sun, 20 Sep 2009 15:00:10 -0700 (PDT) Date: Sun, 20 Sep 2009 16:00:10 -0600 Message-ID: <2a5e326f0909201500w1513aeb5ra644f1c748e22f34@mail.gmail.com> From: Kurt Touet To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=ISO-8859-1 Subject: ZFS - Unable to offline drive in raidz1 based pool X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 20 Sep 2009 22:22:55 -0000 I am using a ZFS pool based on a 4-drive raidz1 setup for storage. I believe that one of the drives is failing, and I'd like to remove/replace it. The drive has been causing some issues (such as becoming non-responsive and hanging the system with timeouts), so I'd like to offline it, and then run in degraded mode until I can grab a new drive (tomorrow). However, when I disconnected the drive (pulled the plug, not using a zpool offline command), the following occurred: NAME STATE READ WRITE CKSUM storage FAULTED 0 0 1 raidz1 DEGRADED 0 0 0 ad14 ONLINE 0 0 0 ad6 UNAVAIL 0 0 0 ad12 ONLINE 0 0 0 ad4 ONLINE 0 0 0 Note: That's my recreation of the output... not the actual text. At this point, I was unable to do anything with the pool... and all data was inaccessible. Fortunately, after sitting unplugged for a bit, I tried putting the failing drive back into the array, and it booted properly. Of course, I still want to replace it, but this is what happens when I try to take it offline: monolith# zpool status storage pool: storage state: ONLINE scrub: none requested config: NAME STATE READ WRITE CKSUM storage ONLINE 0 0 0 raidz1 ONLINE 0 0 0 ad14 ONLINE 0 0 0 ad6 ONLINE 0 0 0 ad12 ONLINE 0 0 0 ad4 ONLINE 0 0 0 errors: No known data errors monolith# zpool offline storage ad6 cannot offline ad6: no valid replicas monolith# uname -a FreeBSD monolith 8.0-RC1 FreeBSD 8.0-RC1 #2 r197370: Sun Sep 20 15:32:08 CST 2009 k@monolith:/usr/obj/usr/src/sys/MONOLITH amd64 If the array is online and healthy, why can't I simply offline a drive and then replace it afterwards? Any thoughts? 
Also, how does a degraded raidz1 array end up faulting the entire pool? Thanks, -kurt 
From owner-freebsd-fs@FreeBSD.ORG Mon Sep 21 11:06:22 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 6DDAE106568D for ; Mon, 21 Sep 2009 11:06:22 +0000 (UTC) (envelope-from aaron@goflexitllc.com) Received: from mail.goflexitllc.com (mail.goflexitllc.com [70.38.81.12]) by mx1.freebsd.org (Postfix) with ESMTP id 175288FC14 for ; Mon, 21 Sep 2009 11:06:21 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; d=goflexitllc.com; h=message-id:date:from:mime-version:to:cc:subject:references :in-reply-to:content-type; s=gamma; bh=tJAXRL8a0UFmVuykMkwP/jTMP CY=; b=iXGS79H7Jq+HYBFzfHxQbdT3tnyp4eyINhVs5UEYeM5T1yjgFioVKR0G8 EHQxyz5BTbyFYd31xaWFsQmlgD528+fd2N2ttOriMweOxTywPGjNpLxzGQ+9dkz/ Q4E6Mr4 DomainKey-Signature: a=rsa-sha1; c=nofws; d=goflexitllc.com; h=message-id :date:from:mime-version:to:cc:subject:references:in-reply-to: content-type; q=dns; s=gamma; b=KApVwUS7eZd4KuQsIMyWdHJCYreCtcxV yggg7iiDDkHV66yi9FLZ6UumT/4ccS3tBqLDJvzGDAhxKClP0DmyKkLrXCNo7Gzu nZ0nCdSjgfr8K7c15MurGp0pdvJEBKEX Received: (qmail 21572 invoked by uid 89); 21 Sep 2009 10:42:53 -0000 Received: (simscan 1.4.1 ppid 21548 pid 21554 t 0.2630s) (scanners: regex: 1.4.1 attach: 1.4.1 clamav: 0.95.1/m:); 21 Sep 0109 10:42:52 -0000 DomainKey-Status: no signature X-Originating-IP: 69.27.151.4 Received: from temp4.wavelinx.net (HELO ?172.16.1.128?) (aaron@goflexitllc.com@69.27.151.4) by mail.goflexitllc.com with ESMTPA; 21 Sep 2009 10:42:51 -0000 Message-ID: <4AB757E4.5060501@goflexitllc.com> Date: Mon, 21 Sep 2009 05:39:32 -0500 From: Aaron Hurt User-Agent: Thunderbird 2.0.0.22 (X11/20090719) MIME-Version: 1.0 To: Kurt Touet References: <2a5e326f0909201500w1513aeb5ra644f1c748e22f34@mail.gmail.com> In-Reply-To: <2a5e326f0909201500w1513aeb5ra644f1c748e22f34@mail.gmail.com> Content-Type: multipart/mixed; boundary="------------080102010205030309010709" X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Cc: freebsd-fs@freebsd.org Subject: Re: ZFS - Unable to offline drive in raidz1 based pool X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 21 Sep 2009 11:06:22 -0000 This is a multi-part message in MIME format. --------------080102010205030309010709 Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Kurt Touet wrote: > I am using a ZFS pool based on a 4-drive raidz1 setup for storage. I > believe that one of the drives is failing, and I'd like to > remove/replace it. The drive has been causing some issues (such as > becoming non-responsive and hanging the system with timeouts), so I'd > like to offline it, and then run in degraded mode until I can grab a > new drive (tomorrow). However, when I disconnected the drive (pulled > the plug, not using a zpool offline command), the following occurred: > > NAME STATE READ WRITE CKSUM > storage FAULTED 0 0 1 > raidz1 DEGRADED 0 0 0 > ad14 ONLINE 0 0 0 > ad6 UNAVAIL 0 0 0 > ad12 ONLINE 0 0 0 > ad4 ONLINE 0 0 0 > > Note: That's my recreation of the output... not the actual text. > > At this point, I was unable to do anything with the pool... and all > data was inaccessible. Fortunately, after sitting unplugged for a > bit, I tried putting the failing drive back into the array, and it > booted properly. 
Of course, I still want to replace it, but this is > what happens when I try to take it offline: > > monolith# zpool status storage > pool: storage > state: ONLINE > scrub: none requested > config: > > NAME STATE READ WRITE CKSUM > storage ONLINE 0 0 0 > raidz1 ONLINE 0 0 0 > ad14 ONLINE 0 0 0 > ad6 ONLINE 0 0 0 > ad12 ONLINE 0 0 0 > ad4 ONLINE 0 0 0 > > errors: No known data errors > monolith# zpool offline storage ad6 > cannot offline ad6: no valid replicas > monolith# uname -a > FreeBSD monolith 8.0-RC1 FreeBSD 8.0-RC1 #2 r197370: Sun Sep 20 > 15:32:08 CST 2009 k@monolith:/usr/obj/usr/src/sys/MONOLITH amd64 > > If the array is online and healthy, why can't I simply offline a drive > and then replace it afterwards? Any thoughts? Also, how does a > degraded raidz1 array end up faulting the entire pool? > > Thanks, > -kurt > _______________________________________________ > freebsd-fs@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" > > !DSPAM:2,4ab6ac55126167777521459! > > I'm not sure why it would be giving you that message. In a raidz1 you should be able to sustain one failure. The only thing that comes to mind this early in the morning would be that somehow your data replication across your discs isn't totally in sync. I would suggest you try a scrub and then see if you can remove the drive afterwards. Aaron Hurt Managing Partner Flex I.T., LLC 611 Commerce Street Suite 3117 Nashville, TN 37203 Phone: 615.438.7101 E-mail: aaron@goflexitllc.com --------------080102010205030309010709-- From owner-freebsd-fs@FreeBSD.ORG Mon Sep 21 11:06:54 2009 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 88671106568B for ; Mon, 21 Sep 2009 11:06:54 +0000 (UTC) (envelope-from owner-bugmaster@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 7656E8FC17 for ; Mon, 21 Sep 2009 11:06:54 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.3/8.14.3) with ESMTP id n8LB6sla030233 for ; Mon, 21 Sep 2009 11:06:54 GMT (envelope-from owner-bugmaster@FreeBSD.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.3/8.14.3/Submit) id n8LB6rRe030229 for freebsd-fs@FreeBSD.org; Mon, 21 Sep 2009 11:06:53 GMT (envelope-from owner-bugmaster@FreeBSD.org) Date: Mon, 21 Sep 2009 11:06:53 GMT Message-Id: <200909211106.n8LB6rRe030229@freefall.freebsd.org> X-Authentication-Warning: freefall.freebsd.org: gnats set sender to owner-bugmaster@FreeBSD.org using -f From: FreeBSD bugmaster To: freebsd-fs@FreeBSD.org Cc: Subject: Current problem reports assigned to freebsd-fs@FreeBSD.org X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 21 Sep 2009 11:06:54 -0000 Note: to view an individual PR, use: http://www.freebsd.org/cgi/query-pr.cgi?pr=(number). The following is a listing of current problems submitted by FreeBSD users. These represent problem reports covering all versions including experimental development code and obsolete releases. S Tracker Resp. 
Description -------------------------------------------------------------------------------- o kern/138790 fs [zfs] ZFS ceases caching when mem demand is high o kern/138524 fs [msdosfs] disks and usb flashes/cards with Russian lab o kern/138421 fs [ufs] [patch] remove UFS label limitations o kern/138367 fs [tmpfs] [panic] 'panic: Assertion pages > 0 failed' wh o kern/138202 fs mount_msdosfs(1) see only 2Gb o kern/138109 fs [extfs] [patch] Minor cleanups to the sys/gnu/fs/ext2f f kern/137037 fs [zfs] [hang] zfs rollback on root causes FreeBSD to fr o kern/136968 fs [ufs] [lor] ufs/bufwait/ufs (open) o kern/136945 fs [ufs] [lor] filedesc structure/ufs (poll) o kern/136944 fs [ffs] [lor] bufwait/snaplk (fsync) o kern/136873 fs [ntfs] Missing directories/files on NTFS volume o kern/136865 fs [nfs] [patch] NFS exports atomic and on-the-fly atomic o kern/136470 fs [nfs] Cannot mount / in read-only, over NFS o kern/135594 fs [zfs] Single dataset unresponsive with Samba o kern/135546 fs [zfs] zfs.ko module doesn't ignore zpool.cache filenam o kern/135469 fs [ufs] [panic] kernel crash on md operation in ufs_dirb o bin/135314 fs [zfs] assertion failed for zdb(8) usage o kern/135050 fs [zfs] ZFS clears/hides disk errors on reboot f kern/134496 fs [zfs] [panic] ZFS pool export occasionally causes a ke o kern/134491 fs [zfs] Hot spares are rather cold... o kern/133980 fs [panic] [ffs] panic: ffs_valloc: dup alloc o kern/133676 fs [smbfs] [panic] umount -f'ing a vnode-based memory dis o kern/133614 fs [smbfs] [panic] panic: ffs_truncate: read-only filesys o kern/133373 fs [zfs] umass attachment causes ZFS checksum errors, dat o kern/133174 fs [msdosfs] [patch] msdosfs must support utf-encoded int f kern/133150 fs [zfs] Page fault with ZFS on 7.1-RELEASE/amd64 while w o kern/132960 fs [ufs] [panic] panic:ffs_blkfree: freeing free frag o kern/132597 fs [tmpfs] [panic] tmpfs-related panic while interrupting o kern/132397 fs reboot causes filesystem corruption (failure to sync b o kern/132331 fs [ufs] [lor] LOR ufs and syncer o kern/132237 fs [msdosfs] msdosfs has problems to read MSDOS Floppy o kern/132145 fs [panic] File System Hard Crashes o kern/131995 fs [nfs] Failure to mount NFSv4 server o kern/131441 fs [unionfs] [nullfs] unionfs and/or nullfs not combineab o kern/131360 fs [nfs] poor scaling behavior of the NFS server under lo o kern/131342 fs [nfs] mounting/unmounting of disks causes NFS to fail o bin/131341 fs makefs: error "Bad file descriptor" on the mount poin o kern/131086 fs [ext2fs] [patch] mkfs.ext2 creates rotten partition o kern/130979 fs [smbfs] [panic] boot/kernel/smbfs.ko o kern/130920 fs [msdosfs] cp(1) takes 100% CPU time while copying file o kern/130229 fs [iconv] usermount fails on fs that need iconv o kern/130210 fs [nullfs] Error by check nullfs o kern/129760 fs [nfs] after 'umount -f' of a stale NFS share FreeBSD l o kern/129488 fs [smbfs] Kernel "bug" when using smbfs in smbfs_smb.c: o kern/129231 fs [ufs] [patch] New UFS mount (norandom) option - mostly o kern/129152 fs [panic] non-userfriendly panic when trying to mount(8) o kern/129059 fs [zfs] [patch] ZFS bootloader whitelistable via WITHOUT f kern/128829 fs smbd(8) causes periodic panic on 7-RELEASE o kern/128633 fs [zfs] [lor] lock order reversal in zfs f kern/128173 fs [ext2fs] ls gives "Input/output error" on mounted ext3 o kern/127659 fs [tmpfs] tmpfs memory leak o kern/127420 fs [gjournal] [panic] Journal overflow on gmirrored gjour o kern/127213 fs [tmpfs] sendfile on tmpfs data corruption o kern/127029 fs [panic] 
mount(8): trying to mount a write protected zi o kern/126287 fs [ufs] [panic] Kernel panics while mounting an UFS file s kern/125738 fs [zfs] [request] SHA256 acceleration in ZFS f kern/125536 fs [ext2fs] ext 2 mounts cleanly but fails on commands li f kern/124621 fs [ext3] [patch] Cannot mount ext2fs partition f bin/124424 fs [zfs] zfs(8): zfs list -r shows strange snapshots' siz o kern/123939 fs [msdosfs] corrupts new files o kern/122888 fs [zfs] zfs hang w/ prefetch on, zil off while running t o kern/122380 fs [ffs] ffs_valloc:dup alloc (Soekris 4801/7.0/USB Flash o bin/122172 fs [fs]: amd(8) automount daemon dies on 6.3-STABLE i386, o kern/122047 fs [ext2fs] [patch] incorrect handling of UF_IMMUTABLE / o kern/122038 fs [tmpfs] [panic] tmpfs: panic: tmpfs_alloc_vp: type 0xc o bin/121898 fs [nullfs] pwd(1)/getcwd(2) fails with Permission denied o bin/121779 fs [ufs] snapinfo(8) (and related tools?) only work for t o bin/121366 fs [zfs] [patch] Automatic disk scrubbing from periodic(8 o bin/121072 fs [smbfs] mount_smbfs(8) cannot normally convert the cha f kern/120991 fs [panic] [fs] [snapshot] System crashes when manipulati o kern/120483 fs [ntfs] [patch] NTFS filesystem locking changes o kern/120482 fs [ntfs] [patch] Sync style changes between NetBSD and F f kern/119735 fs [zfs] geli + ZFS + samba starting on boot panics 7.0-B o kern/118912 fs [2tb] disk sizing/geometry problem with large array o kern/118713 fs [minidump] [patch] Display media size required for a k o bin/118249 fs mv(1): moving a directory changes its mtime o kern/118107 fs [ntfs] [panic] Kernel panic when accessing a file at N o bin/117315 fs [smbfs] mount_smbfs(8) and related options can't mount o kern/117314 fs [ntfs] Long-filename only NTFS fs'es cause kernel pani o kern/117158 fs [zfs] zpool scrub causes panic if geli vdevs detach on o bin/116980 fs [msdosfs] [patch] mount_msdosfs(8) resets some flags f o kern/116913 fs [ffs] [panic] ffs_blkfree: freeing free block p kern/116608 fs [msdosfs] [patch] msdosfs fails to check mount options o kern/116583 fs [ffs] [hang] System freezes for short time when using o kern/116170 fs [panic] Kernel panic when mounting /tmp o kern/115645 fs [snapshots] [panic] lockmgr: thread 0xc4c00d80, not ex o bin/115361 fs [zfs] mount(8) gets into a state where it won't set/un o kern/114955 fs [cd9660] [patch] [request] support for mask,dirmask,ui o kern/114847 fs [ntfs] [patch] [request] dirmask support for NTFS ala o kern/114676 fs [ufs] snapshot creation panics: snapacct_ufs2: bad blo o bin/114468 fs [patch] [request] add -d option to umount(8) to detach o kern/113852 fs [smbfs] smbfs does not properly implement DFS referral o bin/113838 fs [patch] [request] mount(8): add support for relative p o bin/113049 fs [patch] [request] make quot(8) use getopt(3) and show o kern/112658 fs [smbfs] [patch] smbfs and caching problems (resolves b o usb/112640 fs [ext2fs] [hang] Kernel freezes when writing a file to o kern/111843 fs [msdosfs] Long Names of files are incorrectly created o kern/111782 fs [ufs] dump(8) fails horribly for large filesystems s bin/111146 fs [2tb] fsck(8) fails on 6T filesystem o kern/109024 fs [msdosfs] mount_msdosfs: msdosfs_iconv: Operation not o kern/109010 fs [msdosfs] can't mv directory within fat32 file system o bin/107829 fs [2TB] fdisk(8): invalid boundary checking in fdisk / w o kern/106030 fs [ufs] [panic] panic in ufs from geom when a dead disk o kern/105093 fs [ext2fs] [patch] ext2fs on read-only media cannot be m o kern/104406 fs [ufs] Processes get stuck in 
"ufs" state under persist o kern/104133 fs [ext2fs] EXT2FS module corrupts EXT2/3 filesystems o kern/103035 fs [ntfs] Directories in NTFS mounted disc images appear o kern/101324 fs [smbfs] smbfs sometimes not case sensitive when it's s o kern/99290 fs [ntfs] mount_ntfs ignorant of cluster sizes o kern/97377 fs [ntfs] [patch] syntax cleanup for ntfs_ihash.c o kern/95222 fs [iso9660] File sections on ISO9660 level 3 CDs ignored o kern/94849 fs [ufs] rename on UFS filesystem is not atomic o kern/94769 fs [ufs] Multiple file deletions on multi-snapshotted fil o kern/94733 fs [smbfs] smbfs may cause double unlock o kern/93942 fs [vfs] [patch] panic: ufs_dirbad: bad dir (patch from D o kern/92272 fs [ffs] [hang] Filling a filesystem while creating a sna f kern/91568 fs [ufs] [panic] writing to UFS/softupdates DVD media in o kern/91134 fs [smbfs] [patch] Preserve access and modification time a kern/90815 fs [smbfs] [patch] SMBFS with character conversions somet o kern/89991 fs [ufs] softupdates with mount -ur causes fs UNREFS o kern/88657 fs [smbfs] windows client hang when browsing a samba shar o kern/88266 fs [smbfs] smbfs does not implement UIO_NOCOPY and sendfi o kern/87859 fs [smbfs] System reboot while umount smbfs. o kern/86587 fs [msdosfs] rm -r /PATH fails with lots of small files o kern/85326 fs [smbfs] [panic] saving a file via samba to an overquot o kern/84589 fs [2TB] 5.4-STABLE unresponsive during background fsck 2 o kern/80088 fs [smbfs] Incorrect file time setting on NTFS mounted vi o kern/77826 fs [ext2fs] ext2fs usb filesystem will not mount RW o kern/73484 fs [ntfs] Kernel panic when doing `ls` from the client si o bin/73019 fs [ufs] fsck_ufs(8) cannot alloc 607016868 bytes for ino o kern/71774 fs [ntfs] NTFS cannot "see" files on a WinXP filesystem o kern/68978 fs [panic] [ufs] crashes with failing hard disk, loose po o kern/65920 fs [nwfs] Mounted Netware filesystem behaves strange o kern/65901 fs [smbfs] [patch] smbfs fails fsx write/truncate-down/tr o kern/61503 fs [smbfs] mount_smbfs does not work as non-root o kern/55617 fs [smbfs] Accessing an nsmb-mounted drive via a smb expo o kern/51685 fs [hang] Unbounded inode allocation causes kernel to loc o kern/51583 fs [nullfs] [patch] allow to work with devices and socket o kern/36566 fs [smbfs] System reboot with dead smb mount and umount o kern/18874 fs [2TB] 32bit NFS servers export wrong negative values t 140 problems total. 
From owner-freebsd-fs@FreeBSD.ORG Mon Sep 21 17:21:46 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 643181065679 for ; Mon, 21 Sep 2009 17:21:46 +0000 (UTC) (envelope-from ktouet@gmail.com) Received: from mail-yw0-f178.google.com (mail-yw0-f178.google.com [209.85.211.178]) by mx1.freebsd.org (Postfix) with ESMTP id 1F6CA8FC14 for ; Mon, 21 Sep 2009 17:21:45 +0000 (UTC) Received: by ywh8 with SMTP id 8so3787579ywh.14 for ; Mon, 21 Sep 2009 10:21:45 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:in-reply-to:references :date:message-id:subject:from:to:cc:content-type :content-transfer-encoding; bh=WuWU7nWRa/yyStfgsl/DdUhBGOi94kChwctX7n3jYzo=; b=P2haug4c5/KCp47+O/iSCY2BIpv7Ldtb0/vEvtJpD93JGMa7nArHe8WVdAlLuMgdsI dmzPBr0S6hUyEXsW2B8GgWgcT5NLwVncQPLnjEXF40RWYhA8Z1wrcbWXwx+mo3HKyXdF x7fLPra8aAx8AhQ04ru58gwUADz/wKbhRXS9Q= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type:content-transfer-encoding; b=xkredB7czGEz3hbCMOB3Hl0NnZ8hEj/BRvKFVxo0A4+4+Yu9Dk7DvneBBsveB5m3pZ M+IyZc2lz9gT9+F5qMpzOuLpX2jqLyDb3TRVNNqcZi1xZM174qH6inJ9owkXk7gFG+O6 u/qzrar8y0oRR/MYhY3dY8bMREjIshFwt7h4A= MIME-Version: 1.0 Received: by 10.91.22.6 with SMTP id z6mr3424606agi.65.1253553705272; Mon, 21 Sep 2009 10:21:45 -0700 (PDT) In-Reply-To: <4AB757E4.5060501@goflexitllc.com> References: <2a5e326f0909201500w1513aeb5ra644f1c748e22f34@mail.gmail.com> <4AB757E4.5060501@goflexitllc.com> Date: Mon, 21 Sep 2009 11:21:45 -0600 Message-ID: <2a5e326f0909211021o431ef53bh3077589efb0bed6c@mail.gmail.com> From: Kurt Touet To: Aaron Hurt Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable Cc: freebsd-fs@freebsd.org Subject: Re: ZFS - Unable to offline drive in raidz1 based pool X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 21 Sep 2009 17:21:46 -0000 I thought about that possibility as well.. but I had scrubbed the array within 10 days. I'll give it a shot again today and see if that brings up any other errors (or allows me to offline the drive afterwards). Cheers, -kurt On Mon, Sep 21, 2009 at 4:39 AM, Aaron Hurt wrote: > Kurt Touet wrote: >> >> I am using a ZFS pool based on a 4-drive raidz1 setup for storage. I >> believe that one of the drives is failing, and I'd like to >> remove/replace it. The drive has been causing some issues (such as >> becoming non-responsive and hanging the system with timeouts), so I'd >> like to offline it, and then run in degraded mode until I can grab a >> new drive (tomorrow). 
However, when I disconnected the drive (pulled >> the plug, not using a zpool offline command), the following occurred: >> >> NAME STATE READ WRITE CKSUM >> storage FAULTED 0 0 1 >> raidz1 DEGRADED 0 0 0 >> ad14 ONLINE 0 0 0 >> ad6 UNAVAIL 0 0 0 >> ad12 ONLINE 0 0 0 >> ad4 ONLINE 0 0 0 >> >> Note: That's my recreation of the output... not the actual text. >> >> At this point, I was unable to do anything with the pool... and all >> data was inaccessible. Fortunately, after sitting unplugged for a >> bit, I tried putting the failing drive back into the array, and it >> booted properly. Of course, I still want to replace it, but this is >> what happens when I try to take it offline: >> >> monolith# zpool status storage >> pool: storage >> state: ONLINE >> scrub: none requested >> config: >> >> NAME STATE READ WRITE CKSUM >> storage ONLINE 0 0 0 >> raidz1 ONLINE 0 0 0 >> ad14 ONLINE 0 0 0 >> ad6 ONLINE 0 0 0 >> ad12 ONLINE 0 0 0 >> ad4 ONLINE 0 0 0 >> >> errors: No known data errors >> monolith# zpool offline storage ad6 >> cannot offline ad6: no valid replicas >> monolith# uname -a >> FreeBSD monolith 8.0-RC1 FreeBSD 8.0-RC1 #2 r197370: Sun Sep 20 >> 15:32:08 CST 2009 k@monolith:/usr/obj/usr/src/sys/MONOLITH amd64 >> >> If the array is online and healthy, why can't I simply offline a drive >> and then replace it afterwards? Any thoughts? Also, how does a >> degraded raidz1 array end up faulting the entire pool? >> >> Thanks, >> -kurt >> _______________________________________________ >> freebsd-fs@freebsd.org mailing list >> http://lists.freebsd.org/mailman/listinfo/freebsd-fs >> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" >> >> !DSPAM:2,4ab6ac55126167777521459! >> >> > > I'm not sure why it would be giving you that message. In a raidz1 you > should be able to sustain one failure. The only thing that comes to mind > this early in the morning would be that somehow your data replication across > your discs isn't totally in sync. I would suggest you try a scrub and then > see if you can remove the drive afterwards. 
> > Aaron Hurt > Managing Partner > Flex I.T., LLC > 611 Commerce Street > Suite 3117 > Nashville, TN 37203 > Phone: 615.438.7101 > E-mail: aaron@goflexitllc.com > > 
From owner-freebsd-fs@FreeBSD.ORG Mon Sep 21 17:44:29 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 62EBE1065670 for ; Mon, 21 Sep 2009 17:44:29 +0000 (UTC) (envelope-from ktouet@gmail.com) Received: from mail-yw0-f178.google.com (mail-yw0-f178.google.com [209.85.211.178]) by mx1.freebsd.org (Postfix) with ESMTP id 1B98C8FC17 for ; Mon, 21 Sep 2009 17:44:28 +0000 (UTC) Received: by ywh8 with SMTP id 8so3816201ywh.14 for ; Mon, 21 Sep 2009 10:44:27 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:in-reply-to:references :date:message-id:subject:from:to:cc:content-type :content-transfer-encoding; bh=e3NIuXTvlMABFbOHLxkJaRtePnNIYN0r0r6O6S3AVKU=; b=ipPTiHsZt0nbhVZgvjMAMdVMWuGuUhvuj+Dzo/GnXyXWWL3h8py/sVqMn0dYJx4rIz mO5yNdtmm7Zur3YXl4F+TN05CO+O+5uRv2UA37sR87BrvsgxKDTNpe3a4xCoSrXl37EH mQYRNPh3mQQensuMTs4E3qD2gPhMeFg4fcy8Q= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type:content-transfer-encoding; b=KH4LG6hHNcAedJnDM2BWl4g/PWSsnXC5TbJanoflkHCFU9+08ggf53UrNN/2z0HNK2 iZVLljl0ipw7bMHHFfUUI3mDfsLdyAT4F7fgXzqvyCB2VwEMUnJ7SO6uhH+k9jhSPQW7 uT0bferQP8iD6bZRbhB6UhGi3/eVQVncbmFF8= MIME-Version: 1.0 Received: by 10.91.189.1 with SMTP id r1mr3441527agp.109.1253555067040; Mon, 21 Sep 2009 10:44:27 -0700 (PDT) In-Reply-To: <2a5e326f0909211021o431ef53bh3077589efb0bed6c@mail.gmail.com> References: <2a5e326f0909201500w1513aeb5ra644f1c748e22f34@mail.gmail.com> <4AB757E4.5060501@goflexitllc.com> <2a5e326f0909211021o431ef53bh3077589efb0bed6c@mail.gmail.com> Date: Mon, 21 Sep 2009 11:44:26 -0600 Message-ID: <2a5e326f0909211044k349d6bc1lb9bd9094e7216e41@mail.gmail.com> From: Kurt Touet To: Aaron Hurt Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable Cc: freebsd-fs@freebsd.org Subject: Re: ZFS - Unable to offline drive in raidz1 based pool X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 21 Sep 2009 17:44:29 -0000 Apparently you were right Aaron: monolith# zpool scrub storage monolith# zpool status storage pool: storage state: ONLINE scrub: resilver completed after 0h1m with 0 errors on Mon Sep 21 11:37:24 2009 config: NAME STATE READ WRITE CKSUM storage ONLINE 0 0 0 raidz1 ONLINE 0 0 0 ad14 ONLINE 0 0 0 1.46M resilvered ad6 ONLINE 0 0 0 2K resilvered ad12 ONLINE 0 0 0 3K resilvered ad4 ONLINE 0 0 0 3K resilvered errors: No known data errors monolith# zpool offline storage ad6 monolith# zpool online storage ad6 monolith# zpool status storage pool: storage state: ONLINE scrub: resilver completed after 0h0m with 0 errors on Mon Sep 21 11:40:12 2009 config: NAME STATE READ WRITE CKSUM storage ONLINE 0 0 0 raidz1 ONLINE 0 0 0 ad14 ONLINE 0 0 0 67.5K resilvered ad6 ONLINE 0 0 0 671K resilvered ad12 ONLINE 0 0 0 67.5K resilvered ad4 ONLINE 0 0 0 53K resilvered errors: No known data errors I wonder then, with the storage array reporting itself as healthy, how did it know that one drive had desynced data, and why wouldn't that have shown up as an error like DEGRADED? 
Cheers, -kurt On Mon, Sep 21, 2009 at 11:21 AM, Kurt Touet wrote: > I thought about that possibility as well.. but I had scrubbed the > array within 10 days. I'll give it a shot again today and see if that > brings up any other errors (or allows me to offline the drive > afterwards). > > Cheers, > -kurt > > On Mon, Sep 21, 2009 at 4:39 AM, Aaron Hurt wrote: >> Kurt Touet wrote: >>> >>> I am using a ZFS pool based on a 4-drive raidz1 setup for storage. I >>> believe that one of the drives is failing, and I'd like to >>> remove/replace it. The drive has been causing some issues (such as >>> becoming non-responsive and hanging the system with timeouts), so I'd >>> like to offline it, and then run in degraded mode until I can grab a >>> new drive (tomorrow). However, when I disconnected the drive (pulled >>> the plug, not using a zpool offline command), the following occurred: >>> >>> NAME STATE READ WRITE CKSUM >>> storage FAULTED 0 0 1 >>> raidz1 DEGRADED 0 0 0 >>> ad14 ONLINE 0 0 0 >>> ad6 UNAVAIL 0 0 0 >>> ad12 ONLINE 0 0 0 >>> ad4 ONLINE 0 0 0 >>> >>> Note: That's my recreation of the output... not the actual text. >>> >>> At this point, I was unable to do anything with the pool... and all >>> data was inaccessible. Fortunately, after sitting unplugged for a >>> bit, I tried putting the failing drive back into the array, and it >>> booted properly. Of course, I still want to replace it, but this is >>> what happens when I try to take it offline: >>> >>> monolith# zpool status storage >>> pool: storage >>> state: ONLINE >>> scrub: none requested >>> config: >>> >>> NAME STATE READ WRITE CKSUM >>> storage ONLINE 0 0 0 >>> raidz1 ONLINE 0 0 0 >>> ad14 ONLINE 0 0 0 >>> ad6 ONLINE 0 0 0 >>> ad12 ONLINE 0 0 0 >>> ad4 ONLINE 0 0 0 >>> >>> errors: No known data errors >>> monolith# zpool offline storage ad6 >>> cannot offline ad6: no valid replicas >>> monolith# uname -a >>> FreeBSD monolith 8.0-RC1 FreeBSD 8.0-RC1 #2 r197370: Sun Sep 20 >>> 15:32:08 CST 2009 k@monolith:/usr/obj/usr/src/sys/MONOLITH amd64 >>> >>> If the array is online and healthy, why can't I simply offline a drive >>> and then replace it afterwards? Any thoughts? Also, how does a >>> degraded raidz1 array end up faulting the entire pool? >>> >>> Thanks, >>> -kurt >>> _______________________________________________ >>> freebsd-fs@freebsd.org mailing list >>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs >>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" >>> >>> !DSPAM:2,4ab6ac55126167777521459! >>> >>> >> >> I'm not sure why it would be giving you that message. In a raidz1 you >> should be able to sustain one failure. The only thing that comes to mind >> this early in the morning would be that somehow your data replication across >> your discs isn't totally in sync. I would suggest you try a scrub and then >> see if you can remove the drive afterwards. >> >> Aaron Hurt >> Managing Partner >> Flex I.T., LLC >> 611 Commerce Street >> Suite 3117 >> Nashville, TN 37203 >> Phone: 615.438.7101 >> E-mail: aaron@goflexitllc.com >> >> > 
From owner-freebsd-fs@FreeBSD.ORG Tue Sep 22 01:26:16 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 03713106568F for ; Tue, 22 Sep 2009 01:26:16 +0000 (UTC) (envelope-from aaron@goflexitllc.com) Received: from mail.goflexitllc.com (mail.goflexitllc.com [70.38.81.12]) by mx1.freebsd.org (Postfix) with ESMTP id 8D3008FC0A for ; Tue, 22 Sep 2009 01:26:15 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; d=goflexitllc.com; h=message-id:date:from:mime-version:to:cc:subject:references :in-reply-to:content-type; s=zeta; bh=YCIfcaaVuOEmTzmVmWMPPyjhNy 0=; b=Kh5RsCOrqfvIi6djBXcOOlQKZqIdNaNVAsVNQ9ixL7xM18671LZQwgdpPN cAWnV+3ylWox6WQpJOcSu5pRNQIDnhwD+l7Mb+8Tu2nPVgTP7qF7djXlZVrRwJl7 76ONNF DomainKey-Signature: a=rsa-sha1; c=nofws; d=goflexitllc.com; h=message-id :date:from:mime-version:to:cc:subject:references:in-reply-to: content-type; q=dns; s=zeta; b=nu51R7WmKRjhaQJwgbtbbAePNBiOJlz6R qbT3gVkLEnQRT1MV+6cSnJCNXwzRC9DXoWkgjmKLZD4/i+Z3ZlBCdPjujqem5R2y +VRvuS1C3CYMjmBpLT6ck/znDLGNzzw Received: (qmail 19872 invoked by uid 89); 22 Sep 2009 01:29:26 -0000 Received: (simscan 1.4.1 ppid 19848 pid 19854 t 0.3635s) (scanners: regex: 1.4.1 attach: 1.4.1 clamav: 0.95.1/m:); 22 Sep 0109 01:29:26 -0000 DomainKey-Status: no signature X-Originating-IP: 69.27.151.4 Received: from temp4.wavelinx.net (HELO ?172.16.1.128?) (aaron@goflexitllc.com@69.27.151.4) by mail.goflexitllc.com with ESMTPA; 22 Sep 2009 01:29:26 -0000 Message-ID: <4AB827AA.9080109@goflexitllc.com> Date: Mon, 21 Sep 2009 20:26:02 -0500 From: Aaron Hurt User-Agent: Thunderbird 2.0.0.22 (X11/20090719) MIME-Version: 1.0 To: Kurt Touet References: <2a5e326f0909201500w1513aeb5ra644f1c748e22f34@mail.gmail.com> <4AB757E4.5060501@goflexitllc.com> <2a5e326f0909211021o431ef53bh3077589efb0bed6c@mail.gmail.com> <2a5e326f0909211044k349d6bc1lb9bd9094e7216e41@mail.gmail.com> In-Reply-To: <2a5e326f0909211044k349d6bc1lb9bd9094e7216e41@mail.gmail.com> Content-Type: multipart/mixed; boundary="------------030606010007030500020907" X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Cc: freebsd-fs@freebsd.org Subject: Re: ZFS - Unable to offline drive in raidz1 based pool X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 22 Sep 2009 01:26:16 -0000 This is a multi-part message in MIME format. 
--------------030606010007030500020907 Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Kurt Touet wrote: > Apparently you were right Aaron: > > monolith# zpool scrub storage > monolith# zpool status storage > pool: storage > state: ONLINE > scrub: resilver completed after 0h1m with 0 errors on Mon Sep 21 11:37:24 2009 > config: > > NAME STATE READ WRITE CKSUM > storage ONLINE 0 0 0 > raidz1 ONLINE 0 0 0 > ad14 ONLINE 0 0 0 1.46M resilvered > ad6 ONLINE 0 0 0 2K resilvered > ad12 ONLINE 0 0 0 3K resilvered > ad4 ONLINE 0 0 0 3K resilvered > > errors: No known data errors > monolith# zpool offline storage ad6 > monolith# zpool online storage ad6 > monolith# zpool status storage > pool: storage > state: ONLINE > scrub: resilver completed after 0h0m with 0 errors on Mon Sep 21 11:40:12 2009 > config: > > NAME STATE READ WRITE CKSUM > storage ONLINE 0 0 0 > raidz1 ONLINE 0 0 0 > ad14 ONLINE 0 0 0 67.5K resilvered > ad6 ONLINE 0 0 0 671K resilvered > ad12 ONLINE 0 0 0 67.5K resilvered > ad4 ONLINE 0 0 0 53K resilvered > > errors: No known data errors > > > I wonder then, with the storage array reporting itself as healthy, how > did it know that one drive had desynced data, and why wouldn't that > have shown up as an error like DEGRADED? > > Cheers, > -kurt > > > On Mon, Sep 21, 2009 at 11:21 AM, Kurt Touet wrote: > >> I thought about that possibility as well.. but I had scrubbed the >> array within 10 days. I'll give it a shot again today and see if that >> brings up any other errors (or allows me to offline the drive >> afterwards). >> >> Cheers, >> -kurt >> >> On Mon, Sep 21, 2009 at 4:39 AM, Aaron Hurt wrote: >> >>> Kurt Touet wrote: >>> >>>> I am using a ZFS pool based on a 4-drive raidz1 setup for storage. I >>>> believe that one of the drives is failing, and I'd like to >>>> remove/replace it. The drive has been causing some issues (such as >>>> becoming non-responsive and hanging the system with timeouts), so I'd >>>> like to offline it, and then run in degraded mode until I can grab a >>>> new drive (tomorrow). However, when I disconnected the drive (pulled >>>> the plug, not using a zpool offline command), the following occurred: >>>> >>>> NAME STATE READ WRITE CKSUM >>>> storage FAULTED 0 0 1 >>>> raidz1 DEGRADED 0 0 0 >>>> ad14 ONLINE 0 0 0 >>>> ad6 UNAVAIL 0 0 0 >>>> ad12 ONLINE 0 0 0 >>>> ad4 ONLINE 0 0 0 >>>> >>>> Note: That's my recreation of the output... not the actual text. >>>> >>>> At this point, I was unable to do anything with the pool... and all >>>> data was inaccessible. Fortunately, after sitting unplugged for a >>>> bit, I tried putting the failing drive back into the array, and it >>>> booted properly. Of course, I still want to replace it, but this is >>>> what happens when I try to take it offline: >>>> >>>> monolith# zpool status storage >>>> pool: storage >>>> state: ONLINE >>>> scrub: none requested >>>> config: >>>> >>>> NAME STATE READ WRITE CKSUM >>>> storage ONLINE 0 0 0 >>>> raidz1 ONLINE 0 0 0 >>>> ad14 ONLINE 0 0 0 >>>> ad6 ONLINE 0 0 0 >>>> ad12 ONLINE 0 0 0 >>>> ad4 ONLINE 0 0 0 >>>> >>>> errors: No known data errors >>>> monolith# zpool offline storage ad6 >>>> cannot offline ad6: no valid replicas >>>> monolith# uname -a >>>> FreeBSD monolith 8.0-RC1 FreeBSD 8.0-RC1 #2 r197370: Sun Sep 20 >>>> 15:32:08 CST 2009 k@monolith:/usr/obj/usr/src/sys/MONOLITH amd64 >>>> >>>> If the array is online and healthy, why can't I simply offline a drive >>>> and then replace it afterwards? Any thoughts? 
Also, how does a >>>> degraded raidz1 array end up faulting the entire pool? >>>> >>>> Thanks, >>>> -kurt >>>> _______________________________________________ >>>> freebsd-fs@freebsd.org mailing list >>>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs >>>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" >>>> >>>> >>>> >>>> >>>> >>> I'm not sure why it would be giving you that message. In a raidz1 you >>> should be able to sustain one failure. The only thing that comes to mind >>> this early in the morning would be that somehow your data replication across >>> your discs isn't totally in sync. I would suggest you try a scrub and then >>> see if you can remove the drive afterwards. >>> >>> Aaron Hurt >>> Managing Partner >>> Flex I.T., LLC >>> 611 Commerce Street >>> Suite 3117 >>> Nashville, TN 37203 >>> Phone: 615.438.7101 >>> E-mail: aaron@goflexitllc.com >>> >>> >>> > > !DSPAM:2,4ab7bc3e126161245783902! > > I had a buggy ata controller that was causing similar problems for me once upon a time. I replaced the controller card and drive cables and never had any more issues with it. That's still one of those things I just scratch my head over. I'm far from a ZFS code expert so I couldn't even begin to tell you the underlying reasons such things might be related...just my two cents worth of experience. Anyways...glad it's working for you now. -- Aaron Hurt Managing Partner Flex I.T., LLC 611 Commerce Street Suite 3117 Nashville, TN 37203 Phone: 615.438.7101 E-mail: aaron@goflexitllc.com --------------030606010007030500020907-- From owner-freebsd-fs@FreeBSD.ORG Tue Sep 22 04:16:19 2009 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id D53E7106566B; Tue, 22 Sep 2009 04:16:19 +0000 (UTC) (envelope-from linimon@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id ADE788FC18; Tue, 22 Sep 2009 04:16:19 +0000 (UTC) Received: from freefall.freebsd.org (linimon@localhost [127.0.0.1]) by freefall.freebsd.org (8.14.3/8.14.3) with ESMTP id n8M4GJ3E076371; Tue, 22 Sep 2009 04:16:19 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.3/8.14.3/Submit) id n8M4GJCf076367; Tue, 22 Sep 2009 04:16:19 GMT (envelope-from linimon) Date: Tue, 22 Sep 2009 04:16:19 GMT Message-Id: <200909220416.n8M4GJCf076367@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Cc: Subject: Re: kern/139039: [zfs] zpool scrub makes system unbearably slow X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 22 Sep 2009 04:16:19 -0000 Old Synopsis: zpool scrub makes system unbearably slow New Synopsis: [zfs] zpool scrub makes system unbearably slow Responsible-Changed-From-To: freebsd-bugs->freebsd-fs Responsible-Changed-By: linimon Responsible-Changed-When: Tue Sep 22 04:16:07 UTC 2009 Responsible-Changed-Why: Over to maintainer(s). 
http://www.freebsd.org/cgi/query-pr.cgi?pr=139039 
From owner-freebsd-fs@FreeBSD.ORG Tue Sep 22 12:56:29 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id B3BDC1065679 for ; Tue, 22 Sep 2009 12:56:29 +0000 (UTC) (envelope-from pjd@garage.freebsd.pl) Received: from mail.garage.freebsd.pl (chello087206049004.chello.pl [87.206.49.4]) by mx1.freebsd.org (Postfix) with ESMTP id 01ED48FC1F for ; Tue, 22 Sep 2009 12:56:28 +0000 (UTC) Received: by mail.garage.freebsd.pl (Postfix, from userid 65534) id BD59D45CDC; Tue, 22 Sep 2009 14:56:26 +0200 (CEST) Received: from localhost (pdawidek.wheel.pl [10.0.1.1]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mail.garage.freebsd.pl (Postfix) with ESMTP id BBC2E45CBA; Tue, 22 Sep 2009 14:56:21 +0200 (CEST) Date: Tue, 22 Sep 2009 14:56:25 +0200 From: Pawel Jakub Dawidek To: Kurt Touet Message-ID: <20090922125625.GJ6038@garage.freebsd.pl> References: <2a5e326f0909201500w1513aeb5ra644f1c748e22f34@mail.gmail.com> Mime-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="VJJoKLVEFXdmHQwR" Content-Disposition: inline In-Reply-To: <2a5e326f0909201500w1513aeb5ra644f1c748e22f34@mail.gmail.com> User-Agent: Mutt/1.4.2.3i X-PGP-Key-URL: http://people.freebsd.org/~pjd/pjd.asc X-OS: FreeBSD 8.0-CURRENT i386 X-Spam-Checker-Version: SpamAssassin 3.0.4 (2005-06-05) on mail.garage.freebsd.pl X-Spam-Level: X-Spam-Status: No, score=-5.9 required=4.5 tests=ALL_TRUSTED,BAYES_00 autolearn=ham version=3.0.4 Cc: freebsd-fs@freebsd.org Subject: Re: ZFS - Unable to offline drive in raidz1 based pool X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 22 Sep 2009 12:56:29 -0000 --VJJoKLVEFXdmHQwR Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On Sun, Sep 20, 2009 at 04:00:10PM -0600, Kurt Touet wrote: > I am using a ZFS pool based on a 4-drive raidz1 setup for storage. I > believe that one of the drives is failing, and I'd like to > remove/replace it. The drive has been causing some issues (such as > becoming non-responsive and hanging the system with timeouts), so I'd > like to offline it, and then run in degraded mode until I can grab a > new drive (tomorrow). However, when I disconnected the drive (pulled > the plug, not using a zpool offline command), the following occurred: > > NAME STATE READ WRITE CKSUM > storage FAULTED 0 0 1 > raidz1 DEGRADED 0 0 0 > ad14 ONLINE 0 0 0 > ad6 UNAVAIL 0 0 0 > ad12 ONLINE 0 0 0 > ad4 ONLINE 0 0 0 > > Note: That's my recreation of the output... not the actual text. > > At this point, I was unable to do anything with the pool... and all > data was inaccessible. Fortunately, after sitting unplugged for a > bit, I tried putting the failing drive back into the array, and it > booted properly. 
Of course, I still want to replace it, but this is > what happens when I try to take it offline: > > monolith# zpool status storage > pool: storage > state: ONLINE > scrub: none requested > config: > > NAME STATE READ WRITE CKSUM > storage ONLINE 0 0 0 > raidz1 ONLINE 0 0 0 > ad14 ONLINE 0 0 0 > ad6 ONLINE 0 0 0 > ad12 ONLINE 0 0 0 > ad4 ONLINE 0 0 0 > > errors: No known data errors > monolith# zpool offline storage ad6 > cannot offline ad6: no valid replicas Could you send the output of: # apply "zdb -l /dev/%1" ad{4,6,12,14} -- Pawel Jakub Dawidek http://www.wheel.pl pjd@FreeBSD.org http://www.FreeBSD.org FreeBSD committer Am I Evil? Yes, I Am! --VJJoKLVEFXdmHQwR Content-Type: application/pgp-signature Content-Disposition: inline -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.4 (FreeBSD) iD8DBQFKuMl5ForvXbEpPzQRAnPIAKCjK7am3F1WUvEHMwtIeXckcy36UgCg4n/z tkqBHfYKHh2q419EqdOlBr8= =A7sR -----END PGP SIGNATURE----- --VJJoKLVEFXdmHQwR-- 
From owner-freebsd-fs@FreeBSD.ORG Tue Sep 22 14:19:51 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id D022E1065670 for ; Tue, 22 Sep 2009 14:19:51 +0000 (UTC) (envelope-from james-freebsd-fs2@jrv.org) Received: from mail.jrv.org (adsl-70-243-84-13.dsl.austtx.swbell.net [70.243.84.13]) by mx1.freebsd.org (Postfix) with ESMTP id 30C248FC13 for ; Tue, 22 Sep 2009 14:19:51 +0000 (UTC) Received: from kremvax.housenet.jrv (kremvax.housenet.jrv [192.168.3.124]) by mail.jrv.org (8.14.3/8.14.3) with ESMTP id n8MDx5gB041129; Tue, 22 Sep 2009 08:59:05 -0500 (CDT) (envelope-from james-freebsd-fs2@jrv.org) Authentication-Results: mail.jrv.org; domainkeys=pass (testing) header.from=james-freebsd-fs2@jrv.org DomainKey-Signature: a=rsa-sha1; s=enigma; d=jrv.org; c=nofws; q=dns; h=message-id:date:from:user-agent:mime-version:to:cc:subject: references:in-reply-to:content-type:content-transfer-encoding; b=dSnM3sQq7MIj56bcj8weDP1dio7QHa55TiZ02QqM3kIsTZFBi/kRXpua9Tgo0vaTJ kY5KnqplfiJBUwg75EEOsdNjBsql585XnCrFpc9+ChCb49jDIGkyCaGcohj9/8D/K0f wsnDj4TLGym6QNpssHl4AA+RGekdO0LqNIDdDDU= Message-ID: <4AB8D829.4060301@jrv.org> Date: Tue, 22 Sep 2009 08:59:05 -0500 From: "James R. Van Artsdalen" User-Agent: Thunderbird 2.0.0.23 (Macintosh/20090812) MIME-Version: 1.0 To: Kurt Touet References: <2a5e326f0909201500w1513aeb5ra644f1c748e22f34@mail.gmail.com> <4AB757E4.5060501@goflexitllc.com> <2a5e326f0909211021o431ef53bh3077589efb0bed6c@mail.gmail.com> <2a5e326f0909211044k349d6bc1lb9bd9094e7216e41@mail.gmail.com> In-Reply-To: <2a5e326f0909211044k349d6bc1lb9bd9094e7216e41@mail.gmail.com> Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Cc: freebsd-fs Subject: Re: ZFS - Unable to offline drive in raidz1 based pool X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 22 Sep 2009 14:19:51 -0000 Kurt Touet wrote: > I wonder then, with the storage array reporting itself as healthy, how > did it know that one drive had desynced data, and why wouldn't that > have shown up as an error like DEGRADED? The uberblock on each ZFS disk contains a txg (transaction group number), in effect a revision number. When a pool is imported and one drive's txg is older than the others, then that drive needs resilvering. 
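The txg comparison James describes can be read straight off the disk labels with the same zdb invocation Pawel asked for above; a quick sketch, assuming the device names from this thread and using apply(1)'s shell execution for the pipe (the grep filter is only illustrative):

    # print the txg recorded in each disk's ZFS label
    apply "zdb -l /dev/%1 | grep 'txg='" ad{4,6,12,14}

A disk whose labels show a lower txg than its peers missed one or more transaction groups and is the one ZFS resilvers on the next import or scrub, even while zpool status still reports the pool ONLINE.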
From owner-freebsd-fs@FreeBSD.ORG Tue Sep 22 17:07:24 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id A60EB1065672; Tue, 22 Sep 2009 17:07:24 +0000 (UTC) (envelope-from ktouet@gmail.com) Received: from mail-yx0-f184.google.com (mail-yx0-f184.google.com [209.85.210.184]) by mx1.freebsd.org (Postfix) with ESMTP id 26CD88FC0C; Tue, 22 Sep 2009 17:07:23 +0000 (UTC) Received: by yxe14 with SMTP id 14so5027777yxe.7 for ; Tue, 22 Sep 2009 10:07:23 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:in-reply-to:references :date:message-id:subject:from:to:cc:content-type; bh=z9v/3Uy0J6K9JFkYH4plF4wG3fvZhZiPlgnIOtY1fyY=; b=PaIzByW8Z8lYHhFnqq/e3Y/1PJyGluMfSt2DA/QwH4UAu9/FcqQ0Psf9KZzJfSAmX+ J1T8s+TOomZa2XljVhOVjeUanSuWfYeZwrVWDipR5fkiNwFR/shA73rqHGaBpi31HqjJ RlphN3f6Q97WbFQQ3AXf96qvbZqovK7HlFKDE= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; b=v0wSA79Wt8n/yBTOPERmnRfrPTD5XGTS0e/VmxANk1427Fo3BM2Rfi1GZW5r/5vyll RhM7VEash07xYlwWzT/xjt3A4z1duB5RpPQKkP9oL7apAzCsIn3FCt3kEUtjMc+kNjlo iW0nqGmJLMnJLBc5YzG8wvDhlEcfsWAEf22nM= MIME-Version: 1.0 Received: by 10.100.55.18 with SMTP id d18mr1257610ana.80.1253639243393; Tue, 22 Sep 2009 10:07:23 -0700 (PDT) In-Reply-To: <20090922125625.GJ6038@garage.freebsd.pl> References: <2a5e326f0909201500w1513aeb5ra644f1c748e22f34@mail.gmail.com> <20090922125625.GJ6038@garage.freebsd.pl> Date: Tue, 22 Sep 2009 11:07:23 -0600 Message-ID: <2a5e326f0909221007m71a84f34r8c07648f8bc8b1ac@mail.gmail.com> From: Kurt Touet To: Pawel Jakub Dawidek Content-Type: multipart/mixed; boundary=001485f6d73ac76cd604742da100 Cc: freebsd-fs@freebsd.org Subject: Re: ZFS - Unable to offline drive in raidz1 based pool X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 22 Sep 2009 17:07:24 -0000 --001485f6d73ac76cd604742da100 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable On Tue, Sep 22, 2009 at 6:56 AM, Pawel Jakub Dawidek wrote: > On Sun, Sep 20, 2009 at 04:00:10PM -0600, Kurt Touet wrote: >> I am using a ZFS pool based on a 4-drive raidz1 setup for storage. I >> believe that one of the drives is failing, and I'd like to >> remove/replace it. The drive has been causing some issues (such as >> becoming non-responsive and hanging the system with timeouts), so I'd >> like to offline it, and then run in degraded mode until I can grab a >> new drive (tomorrow). However, when I disconnected the drive (pulled >> the plug, not using a zpool offline command), the following occurred: >> >> NAME STATE READ WRITE CKSUM >> storage FAULTED 0 0 1 >> raidz1 DEGRADED 0 0 0 >> ad14 ONLINE 0 0 0 >> ad6 UNAVAIL 0 0 0 >> ad12 ONLINE 0 0 0 >> ad4 ONLINE 0 0 0 >> >> Note: That's my recreation of the output... not the actual text. 
>> >> At this point, I was unable to do anything with the pool... and all >> data was inaccessible. Fortunately, after sitting unplugged for a >> bit, I tried putting the failing drive back into the array, and it >> booted properly. Of course, I still want to replace it, but this is >> what happens when I try to take it offline: >> >> monolith# zpool status storage >> pool: storage >> state: ONLINE >> scrub: none requested >> config: >> >> NAME STATE READ WRITE CKSUM >> storage ONLINE 0 0 0 >> raidz1 ONLINE 0 0 0 >> ad14 ONLINE 0 0 0 >> ad6 ONLINE 0 0 0 >> ad12 ONLINE 0 0 0 >> ad4 ONLINE 0 0 0 >> >> errors: No known data errors >> monolith# zpool offline storage ad6 >> cannot offline ad6: no valid replicas > > Could you send the output of: > > # apply "zdb -l /dev/%1" ad{4,6,12,14} > Sure thing. --001485f6d73ac76cd604742da100 Content-Type: text/plain; charset=US-ASCII; name="zdb.txt" Content-Disposition: attachment; filename="zdb.txt" Content-Transfer-Encoding: base64 X-Attachment-Id: f_fzwwek771 LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0KTEFCRUwgMAotLS0t LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQogICAgdmVyc2lvbj0xMwog ICAgbmFtZT0nc3RvcmFnZScKICAgIHN0YXRlPTAKICAgIHR4Zz0zNjg4OTk1CiAgICBwb29sX2d1 aWQ9NTU3ODAyMzQ3NjA5MDI2NzUxNwogICAgaG9zdGlkPTIyOTI0NTIyNzAKICAgIGhvc3RuYW1l PSdtb25vbGl0aCcKICAgIHRvcF9ndWlkPTE2MjYxMzY0Njc3MzY2NzA1NjUwCiAgICBndWlkPTY1 MDQ2MTM0NDAxNTE2NTk4NDgKICAgIHZkZXZfdHJlZQogICAgICAgIHR5cGU9J3JhaWR6JwogICAg ICAgIGlkPTAKICAgICAgICBndWlkPTE2MjYxMzY0Njc3MzY2NzA1NjUwCiAgICAgICAgbnBhcml0 eT0xCiAgICAgICAgbWV0YXNsYWJfYXJyYXk9MTQKICAgICAgICBtZXRhc2xhYl9zaGlmdD0zMgog ICAgICAgIGFzaGlmdD05CiAgICAgICAgYXNpemU9NjAwMTE4ODE0MzEwNAogICAgICAgIGlzX2xv Zz0wCiAgICAgICAgY2hpbGRyZW5bMF0KICAgICAgICAgICAgICAgIHR5cGU9J2Rpc2snCiAgICAg ICAgICAgICAgICBpZD0wCiAgICAgICAgICAgICAgICBndWlkPTQ2OTk5ODgzMzEzMzU4OTkxMTYK ICAgICAgICAgICAgICAgIHBhdGg9Jy9kZXYvYWQxNCcKICAgICAgICAgICAgICAgIHdob2xlX2Rp c2s9MAogICAgICAgICAgICAgICAgRFRMPTMzNwogICAgICAgIGNoaWxkcmVuWzFdCiAgICAgICAg ICAgICAgICB0eXBlPSdkaXNrJwogICAgICAgICAgICAgICAgaWQ9MQogICAgICAgICAgICAgICAg Z3VpZD0xMjQ4NTQxNjQ2MjIzMTcxNTc3MAogICAgICAgICAgICAgICAgcGF0aD0nL2Rldi9hZDYn CiAgICAgICAgICAgICAgICB3aG9sZV9kaXNrPTAKICAgICAgICAgICAgICAgIERUTD0zMzYKICAg ICAgICBjaGlsZHJlblsyXQogICAgICAgICAgICAgICAgdHlwZT0nZGlzaycKICAgICAgICAgICAg ICAgIGlkPTIKICAgICAgICAgICAgICAgIGd1aWQ9NjgxNzIwMzkxNzIzNDI5NzUwMwogICAgICAg ICAgICAgICAgcGF0aD0nL2Rldi9hZDEyJwogICAgICAgICAgICAgICAgd2hvbGVfZGlzaz0wCiAg ICAgICAgICAgICAgICBEVEw9MzM1CiAgICAgICAgY2hpbGRyZW5bM10KICAgICAgICAgICAgICAg IHR5cGU9J2Rpc2snCiAgICAgICAgICAgICAgICBpZD0zCiAgICAgICAgICAgICAgICBndWlkPTY1 MDQ2MTM0NDAxNTE2NTk4NDgKICAgICAgICAgICAgICAgIHBhdGg9Jy9kZXYvYWQ0JwogICAgICAg ICAgICAgICAgd2hvbGVfZGlzaz0wCiAgICAgICAgICAgICAgICBEVEw9MzM0Ci0tLS0tLS0tLS0t LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tCkxBQkVMIDEKLS0tLS0tLS0tLS0tLS0t LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0KICAgIHZlcnNpb249MTMKICAgIG5hbWU9J3N0 b3JhZ2UnCiAgICBzdGF0ZT0wCiAgICB0eGc9MzY4ODk5NQogICAgcG9vbF9ndWlkPTU1NzgwMjM0 NzYwOTAyNjc1MTcKICAgIGhvc3RpZD0yMjkyNDUyMjcwCiAgICBob3N0bmFtZT0nbW9ub2xpdGgn 
From owner-freebsd-fs@FreeBSD.ORG Tue Sep 22 18:57:06 2009 Return-Path: Delivered-To: freebsd-fs@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id C7EB91065679 for ; Tue, 22 Sep 2009 18:57:06 +0000 (UTC) (envelope-from scf@FreeBSD.org) Received: from mail.farley.org (mail.farley.org [IPv6:2001:470:1f0f:20:2::11]) by mx1.freebsd.org (Postfix) with ESMTP id 776938FC12 for ; Tue, 22 Sep 2009 18:57:06 +0000 (UTC) Received: from thor.farley.org (HPooka@thor.farley.org [IPv6:2001:470:1f0f:20:1::5]) by mail.farley.org (8.14.3/8.14.3) with ESMTP id n8MIv5qD083041 for ; Tue, 22 Sep 2009 13:57:05 -0500 (CDT) (envelope-from scf@FreeBSD.org) Date: Tue, 22 Sep 2009 13:57:05 -0500 (CDT) From: "Sean C. Farley" To: freebsd-fs@FreeBSD.org Message-ID: User-Agent: Alpine 2.00 (BSF 1167 2008-08-23) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed X-Spam-Status: No, score=-2.8 required=4.0 tests=AWL,BAYES_00,NO_RELAYS autolearn=ham version=3.2.5 X-Spam-Checker-Version: SpamAssassin 3.2.5 (2008-06-10) on mail.farley.org Cc: Subject: Proper gmirror install X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 22 Sep 2009 18:57:06 -0000

I posted this about a week ago on freebsd-geom, but this list may be more appropriate for it. I have been experimenting with installing FreeBSD using only gpart from the 8.0-BETA4 DVD. It has been helping my understanding of GPT vs. MBR. Now, I would like to verify the setup[1] I have made and the messages I am getting from it. The basic setup is the same for both disks:

    Slice   Type                 Size
    ad0s1   Windows7 system      100MB
    ad0s2   Windows7             20GB
    gm0s3   FreeBSD (non-swap)   20GB   (ad0s3 and ad1s3)
    ad0s4   FreeBSD (swap)       2GB

I am creating the mirror prior to creating the BSD label. I also take it from this posting[2] that this is the preferred method. Everything appears correct; however, I am getting these messages:

GEOM: ad0s3: geometry does not match label (255h,63s != 16h,63s).
GEOM: ad0s3: media size does not match label.
GEOM: ad1s3: geometry does not match label (255h,63s != 16h,63s).
GEOM: ad1s3: media size does not match label.

Questions:
1. Is this due to the BSD label being within the mirror, and is it considered safe?
2. Am I correct in my understanding that having the BSD label within the mirror takes care of the need to hardcode the provider's name and/or to subtract one from the c: partition?
3. Other than not being able to boot directly from those slices (untried; maybe not true) as opposed to the mirrored slice, is there any other concern with doing it this way?
4. gpart allows more than four slices to be created. Are those primary, extended, or something else?
5. Any other suggestions?

I plan on putting this example on the wiki once it is verified to be correct.

BTW, I must say I like using gpart after getting used to it. It would be nice if it could also handle creating and maintaining a hybrid MBR/GPT setup.

Sean

1. http://people.freebsd.org/~scf/gmirror-install.txt
2. http://lists.freebsd.org/pipermail/freebsd-current/2009-June/008638.html

--
scf@FreeBSD.org
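[Editor's note: a minimal sketch of the order of operations Sean describes (mirror first, then the BSD label inside it). The mirror name gm0s3 and the consumers ad0s3/ad1s3 come from the table above; the balance algorithm and the newfs step are illustrative assumptions, not details from his setup notes:

    gmirror load                                  # or geom_mirror_load="YES" in /boot/loader.conf
    gmirror label -v -b round-robin gm0s3 /dev/ad0s3 /dev/ad1s3
    bsdlabel -w /dev/mirror/gm0s3                 # BSD label lives inside the mirror
    newfs /dev/mirror/gm0s3a                      # file system on the a partition

Because the label is written on the mirror provider rather than on ad0s3 directly, the partitions reference the mirror and stay valid no matter which disk gmirror happens to read from.]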
From owner-freebsd-fs@FreeBSD.ORG Tue Sep 22 19:30:31 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 978C010656A5; Tue, 22 Sep 2009 19:30:31 +0000 (UTC) (envelope-from ktouet@gmail.com) Received: from mail-yw0-f187.google.com (mail-yw0-f187.google.com [209.85.211.187]) by mx1.freebsd.org (Postfix) with ESMTP id 451FD8FC1B; Tue, 22 Sep 2009 19:30:31 +0000 (UTC) Received: by ywh17 with SMTP id 17so67756ywh.3 for ; Tue, 22 Sep 2009 12:30:30 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:in-reply-to:references :date:message-id:subject:from:to:cc:content-type :content-transfer-encoding; bh=O/TPJq9Tu9laBlTpRr0ogaxBYU6NWZULSwK84bw4dyQ=; b=XY3BYWsNiJ4NU2NqJM/wKGEOkcupUhur0pJhnKoNbuJYgq8nGNkXCphKkIfIAF6luc YKi4ecP/Ee3DbJ3IU8MIO2bqUpgCVW4PfXLVqvKEblHVdDhzLRjnS01UeLtzDtNRspmm Cs5aeGeZ9qO92ulnZryjV9CUTtH3OqnSSao9g= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type:content-transfer-encoding; b=FiGpaGlh2CKOALzbk4znCYs9ks4IclgTMyI+PlwaKFYLyh4QfumpNGrVDn/JDl9Deo OIUShsHOmpxisSa6aq3DyPmt3KMs/FFx2TT9JJrPOho5vWLsghJ+TXIBEIiskZtRDqra bmD+c4cRvIKPg8CQa9VjBmm/YW/AofMIsIhIY= MIME-Version: 1.0 Received: by 10.100.130.11 with SMTP id c11mr1429943and.97.1253647830285; Tue, 22 Sep 2009 12:30:30 -0700 (PDT) In-Reply-To: <20090922125625.GJ6038@garage.freebsd.pl> References: <2a5e326f0909201500w1513aeb5ra644f1c748e22f34@mail.gmail.com> <20090922125625.GJ6038@garage.freebsd.pl> Date: Tue, 22 Sep 2009 13:30:30 -0600 Message-ID: <2a5e326f0909221230m6c7e4828md5f70a5ac6c7892b@mail.gmail.com> From: Kurt Touet To: Pawel Jakub Dawidek Content-Type: text/plain; charset=ISO-8859-1 Cc: freebsd-fs@freebsd.org Subject: Re: ZFS - Unable to offline drive in raidz1 based pool X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 22 Sep 2009 19:30:31 -0000

On Tue, Sep 22, 2009 at 6:56 AM, Pawel Jakub Dawidek wrote:
>
> Could you send the output of:
>
>         # apply "zdb -l /dev/%1" ad{4,6,12,14}
>
> --
> Pawel Jakub Dawidek                       http://www.wheel.pl
> pjd@FreeBSD.org                           http://www.FreeBSD.org
> FreeBSD committer                         Am I Evil? Yes, I Am!
>

I was looking back at the thread, and realized that you had replied to my first message and not the subsequent one (where I had successfully scrubbed and resilvered the drive) -- so the debug output was from the properly resilvered array.

Although the one question that still stands (for me) is how the system could have reported itself as healthy after I successfully reattached the failing drive. It strikes me as the type of situation where a checksum error or degraded status should appear. Am I wrong in thinking that, or is there another way in which this could be detected? Looking at James's comment, if the one drive had an older txg, should that have generated a non-healthy state?

Cheers,
-kurt
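[Editor's note: on the detection question above -- a scrub is the usual way to force latent problems to the surface after an event like this. A minimal check, using the pool name from this thread:

    zpool scrub storage        # re-read and verify every block against its checksum
    zpool status -v storage    # per-device READ/WRITE/CKSUM counters, scrub progress

A reattached disk that missed transactions would ordinarily be flagged by its stale label txg and resilvered automatically; the scrub is what exposes any checksum damage that remains afterwards.]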
From owner-freebsd-fs@FreeBSD.ORG Tue Sep 22 23:15:21 2009 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 1D230106566C; Tue, 22 Sep 2009 23:15:21 +0000 (UTC) (envelope-from linimon@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id E8BE88FC08; Tue, 22 Sep 2009 23:15:20 +0000 (UTC) Received: from freefall.freebsd.org (linimon@localhost [127.0.0.1]) by freefall.freebsd.org (8.14.3/8.14.3) with ESMTP id n8MNFKlE067718; Tue, 22 Sep 2009 23:15:20 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.3/8.14.3/Submit) id n8MNFKJw067714; Tue, 22 Sep 2009 23:15:20 GMT (envelope-from linimon) Date: Tue, 22 Sep 2009 23:15:20 GMT Message-Id: <200909222315.n8MNFKJw067714@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Cc: Subject: Re: kern/139059: [zfs] zfs(64bit) nfs server fails open(..., O_WRONLY|O_CREAT|O_EXCL, ...) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 22 Sep 2009 23:15:21 -0000

Old Synopsis: zfs(64bit) nfs server fails open(..., O_WRONLY|O_CREAT|O_EXCL, ...)
New Synopsis: [zfs] zfs(64bit) nfs server fails open(..., O_WRONLY|O_CREAT|O_EXCL, ...)

Responsible-Changed-From-To: freebsd-bugs->freebsd-fs
Responsible-Changed-By: linimon
Responsible-Changed-When: Tue Sep 22 23:15:08 UTC 2009
Responsible-Changed-Why: Over to maintainer(s).

http://www.freebsd.org/cgi/query-pr.cgi?pr=139059

From owner-freebsd-fs@FreeBSD.ORG Wed Sep 23 05:43:49 2009 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 717441065670; Wed, 23 Sep 2009 05:43:48 +0000 (UTC) (envelope-from linimon@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 632848FC12; Wed, 23 Sep 2009 05:43:48 +0000 (UTC) Received: from freefall.freebsd.org (linimon@localhost [127.0.0.1]) by freefall.freebsd.org (8.14.3/8.14.3) with ESMTP id n8N5hmlb059767; Wed, 23 Sep 2009 05:43:48 GMT (envelope-from linimon@freefall.freebsd.org) Received: (from linimon@localhost) by freefall.freebsd.org (8.14.3/8.14.3/Submit) id n8N5hmLR059763; Wed, 23 Sep 2009 05:43:48 GMT (envelope-from linimon) Date: Wed, 23 Sep 2009 05:43:48 GMT Message-Id: <200909230543.n8N5hmLR059763@freefall.freebsd.org> To: linimon@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: linimon@FreeBSD.org Cc: Subject: Re: kern/139072: [zfs] zfs marked as production ready but it used a deprecated checksum algorithm X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 23 Sep 2009 05:43:49 -0000

Old Synopsis: zfs marked as production ready but it used a deprecated checksum algorithm
New Synopsis: [zfs] zfs marked as production ready but it used a deprecated checksum algorithm

Responsible-Changed-From-To: freebsd-bugs->freebsd-fs
Responsible-Changed-By: linimon
Responsible-Changed-When: Wed Sep 23 05:43:28 UTC 2009
Responsible-Changed-Why: Over to maintainer(s).

http://www.freebsd.org/cgi/query-pr.cgi?pr=139072

From owner-freebsd-fs@FreeBSD.ORG Wed Sep 23 08:11:57 2009 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 9D3971065742; Wed, 23 Sep 2009 08:11:57 +0000 (UTC) (envelope-from gavin@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 750A78FC08; Wed, 23 Sep 2009 08:11:57 +0000 (UTC) Received: from freefall.freebsd.org (gavin@localhost [127.0.0.1]) by freefall.freebsd.org (8.14.3/8.14.3) with ESMTP id n8N8Bv3J039566; Wed, 23 Sep 2009 08:11:57 GMT (envelope-from gavin@freefall.freebsd.org) Received: (from gavin@localhost) by freefall.freebsd.org (8.14.3/8.14.3/Submit) id n8N8Bvfp039562; Wed, 23 Sep 2009 08:11:57 GMT (envelope-from gavin) Date: Wed, 23 Sep 2009 08:11:57 GMT Message-Id: <200909230811.n8N8Bvfp039562@freefall.freebsd.org> To: gavin@FreeBSD.org, freebsd-bugs@FreeBSD.org, freebsd-fs@FreeBSD.org From: gavin@FreeBSD.org Cc: Subject: Re: kern/139076: [zfs] ZFS file system has SysV group ownership semantics not BSD X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 23 Sep 2009 08:11:57 -0000

Synopsis: [zfs] ZFS file system has SysV group ownership semantics not BSD

Responsible-Changed-From-To: freebsd-bugs->freebsd-fs
Responsible-Changed-By: gavin
Responsible-Changed-When: Wed Sep 23 08:10:56 UTC 2009
Responsible-Changed-Why: Over to maintainer(s).
Not sure there's anything that can be done about this, other than documenting it.

http://www.freebsd.org/cgi/query-pr.cgi?pr=139076

From owner-freebsd-fs@FreeBSD.ORG Wed Sep 23 09:18:38 2009 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id D529310656C8; Wed, 23 Sep 2009 09:18:38 +0000 (UTC) (envelope-from pjd@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id AC8A68FC1F; Wed, 23 Sep 2009 09:18:38 +0000 (UTC) Received: from freefall.freebsd.org (pjd@localhost [127.0.0.1]) by freefall.freebsd.org (8.14.3/8.14.3) with ESMTP id n8N9IctC003512; Wed, 23 Sep 2009 09:18:38 GMT (envelope-from pjd@freefall.freebsd.org) Received: (from pjd@localhost) by freefall.freebsd.org (8.14.3/8.14.3/Submit) id n8N9IcQH003508; Wed, 23 Sep 2009 09:18:38 GMT (envelope-from pjd) Date: Wed, 23 Sep 2009 09:18:38 GMT Message-Id: <200909230918.n8N9IcQH003508@freefall.freebsd.org> To: sean@gothic.net.au, pjd@FreeBSD.org, freebsd-fs@FreeBSD.org, pjd@FreeBSD.org From: pjd@FreeBSD.org Cc: Subject: Re: kern/139076: [zfs] ZFS file system has SysV group ownership semantics not BSD X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 23 Sep 2009 09:18:38 -0000

Synopsis: [zfs] ZFS file system has SysV group ownership semantics not BSD

State-Changed-From-To: open->patched
State-Changed-By: pjd
State-Changed-When: Wed 23 Sep 2009 09:15:31 UTC
State-Changed-Why:
Fix committed to HEAD. Thank you for the report!

Responsible-Changed-From-To: freebsd-fs->pjd
Responsible-Changed-By: pjd
Responsible-Changed-When: Wed 23 Sep 2009 09:15:31 UTC
Responsible-Changed-Why:
I'll take this one.

http://www.freebsd.org/cgi/query-pr.cgi?pr=139076

From owner-freebsd-fs@FreeBSD.ORG Wed Sep 23 09:19:12 2009 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id E29421065695; Wed, 23 Sep 2009 09:19:12 +0000 (UTC) (envelope-from pjd@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id BAE3F8FC24; Wed, 23 Sep 2009 09:19:12 +0000 (UTC) Received: from freefall.freebsd.org (pjd@localhost [127.0.0.1]) by freefall.freebsd.org (8.14.3/8.14.3) with ESMTP id n8N9JC7m003621; Wed, 23 Sep 2009 09:19:12 GMT (envelope-from pjd@freefall.freebsd.org) Received: (from pjd@localhost) by freefall.freebsd.org (8.14.3/8.14.3/Submit) id n8N9JC2N003617; Wed, 23 Sep 2009 09:19:12 GMT (envelope-from pjd) Date: Wed, 23 Sep 2009 09:19:12 GMT Message-Id: <200909230919.n8N9JC2N003617@freefall.freebsd.org> To: pjd@FreeBSD.org, freebsd-fs@FreeBSD.org, pjd@FreeBSD.org From: pjd@FreeBSD.org Cc: Subject: Re: kern/139059: [zfs] zfs(64bit) nfs server fails open(..., O_WRONLY|O_CREAT|O_EXCL, ...) X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 23 Sep 2009 09:19:13 -0000

Synopsis: [zfs] zfs(64bit) nfs server fails open(..., O_WRONLY|O_CREAT|O_EXCL, ...)
Responsible-Changed-From-To: freebsd-fs->pjd
Responsible-Changed-By: pjd
Responsible-Changed-When: Wed 23 Sep 2009 09:18:57 UTC
Responsible-Changed-Why:
I'll take this one.

http://www.freebsd.org/cgi/query-pr.cgi?pr=139059

From owner-freebsd-fs@FreeBSD.ORG Wed Sep 23 09:20:19 2009 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 121BF106566B; Wed, 23 Sep 2009 09:20:19 +0000 (UTC) (envelope-from pjd@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id DEC088FC2C; Wed, 23 Sep 2009 09:20:18 +0000 (UTC) Received: from freefall.freebsd.org (pjd@localhost [127.0.0.1]) by freefall.freebsd.org (8.14.3/8.14.3) with ESMTP id n8N9KIWU005539; Wed, 23 Sep 2009 09:20:18 GMT (envelope-from pjd@freefall.freebsd.org) Received: (from pjd@localhost) by freefall.freebsd.org (8.14.3/8.14.3/Submit) id n8N9KIJ6005528; Wed, 23 Sep 2009 09:20:18 GMT (envelope-from pjd) Date: Wed, 23 Sep 2009 09:20:18 GMT Message-Id: <200909230920.n8N9KIJ6005528@freefall.freebsd.org> To: pjd@FreeBSD.org, freebsd-fs@FreeBSD.org, pjd@FreeBSD.org From: pjd@FreeBSD.org Cc: Subject: Re: kern/139072: [zfs] zfs marked as production ready but it used a deprecated checksum algorithm X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 23 Sep 2009 09:20:19 -0000

Synopsis: [zfs] zfs marked as production ready but it used a deprecated checksum algorithm

Responsible-Changed-From-To: freebsd-fs->pjd
Responsible-Changed-By: pjd
Responsible-Changed-When: Wed 23 Sep 2009 09:19:57 UTC
Responsible-Changed-Why:
I'll take this one.
http://www.freebsd.org/cgi/query-pr.cgi?pr=139072

From owner-freebsd-fs@FreeBSD.ORG Wed Sep 23 13:08:29 2009 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id EE0401065679; Wed, 23 Sep 2009 13:08:29 +0000 (UTC) (envelope-from gavin@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id C39CD8FC08; Wed, 23 Sep 2009 13:08:29 +0000 (UTC) Received: from freefall.freebsd.org (gavin@localhost [127.0.0.1]) by freefall.freebsd.org (8.14.3/8.14.3) with ESMTP id n8ND8TWY035377; Wed, 23 Sep 2009 13:08:29 GMT (envelope-from gavin@freefall.freebsd.org) Received: (from gavin@localhost) by freefall.freebsd.org (8.14.3/8.14.3/Submit) id n8ND8T8l035372; Wed, 23 Sep 2009 13:08:29 GMT (envelope-from gavin) Date: Wed, 23 Sep 2009 13:08:29 GMT Message-Id: <200909231308.n8ND8T8l035372@freefall.freebsd.org> To: klaas@kite.ping.de, gavin@FreeBSD.org, freebsd-fs@FreeBSD.org From: gavin@FreeBSD.org Cc: Subject: Re: usb/112640: [ext2fs] [hang] Kernel freezes when writing a file to an ex2fs filesystem on a usb disk X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 23 Sep 2009 13:08:30 -0000

Synopsis: [ext2fs] [hang] Kernel freezes when writing a file to an ex2fs filesystem on a usb disk

State-Changed-From-To: open->feedback
State-Changed-By: gavin
State-Changed-When: Wed Sep 23 13:07:06 UTC 2009
State-Changed-Why:
Submitter was asked for feedback

http://www.freebsd.org/cgi/query-pr.cgi?pr=112640

From owner-freebsd-fs@FreeBSD.ORG Fri Sep 25 18:28:56 2009 Return-Path: Delivered-To: freebsd-fs@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 4DA33106568B; Fri, 25 Sep 2009 18:28:56 +0000 (UTC) (envelope-from pjd@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 24ABF8FC08; Fri, 25 Sep 2009 18:28:56 +0000 (UTC) Received: from freefall.freebsd.org (pjd@localhost [127.0.0.1]) by freefall.freebsd.org (8.14.3/8.14.3) with ESMTP id n8PISuIR031846; Fri, 25 Sep 2009 18:28:56 GMT (envelope-from pjd@freefall.freebsd.org) Received: (from pjd@localhost) by freefall.freebsd.org (8.14.3/8.14.3/Submit) id n8PISu6Z031842; Fri, 25 Sep 2009 18:28:56 GMT (envelope-from pjd) Date: Fri, 25 Sep 2009 18:28:56 GMT Message-Id: <200909251828.n8PISu6Z031842@freefall.freebsd.org> To: nwf@cs.jhu.edu, pjd@FreeBSD.org, freebsd-fs@FreeBSD.org, pjd@FreeBSD.org From: pjd@FreeBSD.org Cc: Subject: Re: kern/139039: [zfs] zpool scrub makes system unbearably slow X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 25 Sep 2009 18:28:56 -0000

Synopsis: [zfs] zpool scrub makes system unbearably slow

State-Changed-From-To: open->feedback
State-Changed-By: pjd
State-Changed-When: Fri 25 Sep 2009 18:27:48 UTC
State-Changed-Why:
Could you tell which threads are consuming most CPU time?
Pasting the first few lines from 'top -SH' should be enough.
Responsible-Changed-From-To: freebsd-fs->pjd
Responsible-Changed-By: pjd
Responsible-Changed-When: Fri 25 Sep 2009 18:27:48 UTC
Responsible-Changed-Why:
I'll take this one.

http://www.freebsd.org/cgi/query-pr.cgi?pr=139039

From owner-freebsd-fs@FreeBSD.ORG Fri Sep 25 19:28:16 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 0054F10656AC for ; Fri, 25 Sep 2009 19:28:15 +0000 (UTC) (envelope-from sullrich@gmail.com) Received: from ey-out-2122.google.com (ey-out-2122.google.com [74.125.78.25]) by mx1.freebsd.org (Postfix) with ESMTP id 45A808FC1A for ; Fri, 25 Sep 2009 19:28:15 +0000 (UTC) Received: by ey-out-2122.google.com with SMTP id 4so638406eyf.9 for ; Fri, 25 Sep 2009 12:28:14 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:from:date:message-id :subject:to:content-type; bh=184vxCTxgfkBGujMtbKHAN41RmZHsg3mN5ZEiWG8N5I=; b=bP6GDq3t6gBzFZ+MW8+I3rNbi6tYFD/XJZ+j72O4UU8M6jrqYlL4GWgGuJ5MG3MKes U4LS3d482xSxw0Rpzho18V+1xsj9ijoGJArL/PfArJlwfcvQwUn39WhbB+Ju/yK5YAgR DWmmxJ8s2tGSeSKdug+67tgOKExH9F3HaNwzQ= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:from:date:message-id:subject:to:content-type; b=Znrs28dF6GWziX1BlPcO66avLN/n+QmWf02ER8hCMGgmqpuS+ihKms1Nx6nO0eNZO5 txDKbEp36AkWxcDXRAKM7S8QNoFPHl3p/aYgqAYtfoWwfSYnYVtwnXOmUQzm8cKbewnQ LJRbzbSvNxevnLRsNsKgP2rk9+endjal7f0wA= MIME-Version: 1.0 Received: by 10.210.156.7 with SMTP id d7mr9007ebe.16.1253905580197; Fri, 25 Sep 2009 12:06:20 -0700 (PDT) From: Scott Ullrich Date: Fri, 25 Sep 2009 15:06:00 -0400 Message-ID: To: freebsd-fs@freebsd.org Content-Type: text/plain; charset=ISO-8859-1 Subject: Slow disk write IO with ZFS / NFS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 25 Sep 2009 19:28:16 -0000

Hello,

Now that ZFS has been declared "production ready", I have been playing around with RC1 + NFS + ZFS. It seems that write speeds when using ZFS + NFS are pretty poor (on average 5 megabytes a second). When I bench the ZFS pool without NFS, I can see up to 175 megabytes a second write speed.

The system in question is a Dell 2850 with 6 ULTRA 300 SCSI 10K RPM drives (dual 3.4 GHz Xeon) running RC1/AMD64:

FreeBSD freebsd8.cre8.com 8.0-BETA4 FreeBSD 8.0-BETA4 #2: Fri Sep 25 10:55:46 UTC 2009 sullrich@freebsd8.cre8.com:/usr/obj/usr/src/sys/GENERIC amd64

NFS is set up like this:

/etc/rc.conf:

    rpcbind_enable="YES"
    rpc_lockd_enable="YES"
    rpc_statd_enable="YES"
    nfs_server_enable="YES"

/etc/exports:

    /vmfs -maproot=root -network 10.0.250.0 -mask 255.255.255.

Does anyone have any pointers on how to speed this up without putting the data in jeopardy during a power failure, etc. (i.e., leaving the ZIL on)?
Thanks in advance,

Scott

From owner-freebsd-fs@FreeBSD.ORG Fri Sep 25 19:56:51 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id CB560106566C for ; Fri, 25 Sep 2009 19:56:51 +0000 (UTC) (envelope-from nwf@cs.jhu.edu) Received: from blaze.cs.jhu.edu (blaze.cs.jhu.edu [128.220.13.50]) by mx1.freebsd.org (Postfix) with ESMTP id 9D68C8FC13 for ; Fri, 25 Sep 2009 19:56:51 +0000 (UTC) Received: from gradx.cs.jhu.edu (gradx.cs.jhu.edu [128.220.13.52]) by blaze.cs.jhu.edu (8.14.3/8.14.3) with ESMTP id n8PJLuhm002406 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NOT) for ; Fri, 25 Sep 2009 15:21:56 -0400 (EDT) Received: from gradx.cs.jhu.edu (localhost.localdomain [127.0.0.1]) by gradx.cs.jhu.edu (8.14.2/8.13.1) with ESMTP id n8PJLuWK008614 for ; Fri, 25 Sep 2009 15:21:56 -0400 Received: (from nwf@localhost) by gradx.cs.jhu.edu (8.14.2/8.13.8/Submit) id n8PJLuBD008613 for freebsd-fs@freebsd.org; Fri, 25 Sep 2009 15:21:56 -0400 Date: Fri, 25 Sep 2009 15:21:56 -0400 From: Nathaniel W Filardo To: freebsd-fs@freebsd.org Message-ID: <20090925192156.GF22220@gradx.cs.jhu.edu> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="c+qGut8k13HZAIeS" Content-Disposition: inline In-Reply-To: <200909251828.n8PISu6Z031842@freefall.freebsd.org> User-Agent: Mutt/1.5.18 (2008-05-17) Subject: Re: kern/139039: [zfs] zpool scrub makes system unbearably slow X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 25 Sep 2009 19:56:52 -0000

--c+qGut8k13HZAIeS
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

On Fri, Sep 25, 2009 at 06:28:56PM +0000, pjd@freebsd.org wrote:
> Synopsis: [zfs] zpool scrub makes system unbearably slow
>
> State-Changed-From-To: open->feedback
> State-Changed-By: pjd
> State-Changed-When: Fri 25 Sep 2009 18:27:48 UTC
> State-Changed-Why:
> Could you tell which threads are consuming most CPU time?
> Pasting the first few lines from 'top -SH' should be enough.
>
>
> Responsible-Changed-From-To: freebsd-fs->pjd
> Responsible-Changed-By: pjd
> Responsible-Changed-When: Fri 25 Sep 2009 18:27:48 UTC
> Responsible-Changed-Why:
> I'll take this one.
>
> http://www.freebsd.org/cgi/query-pr.cgi?pr=139039

Thanks for looking at this. The system here is trying to build OpenLDAP in a jail, but that build isn't usually what shows up at the top. Typical output is...
hydra# top -jSHP
267 processes: 15 running, 236 sleeping, 16 waiting
CPU 0:  0.3% user, 0.0% nice, 97.4% system, 2.3% interrupt,  0.0% idle
CPU 1: 10.7% user, 0.0% nice, 44.4% system, 1.8% interrupt, 43.0% idle
Mem: 147M Active, 242M Inact, 926M Wired, 4008K Cache, 213M Buf, 672M Free
Swap: 4096M Total, 4096M Free

  PID JID USERNAME PRI NICE   SIZE    RES STATE  C   TIME   WCPU COMMAND
   11   0 root     171 ki31     0K    64K RUN    1 376:41 57.18% {idle: cpu1}
    0   0 root     -16    0     0K  3520K CPU0   0   6:04 13.96% {spa_zio_7}
    0   0 root     -16    0     0K  3520K -      0   5:58 13.33% {spa_zio_0}
    0   0 root     -16    0     0K  3520K -      0   5:58 13.13% {spa_zio_3}
    0   0 root     -16    0     0K  3520K -      0   6:01 13.09% {spa_zio_5}
    0   0 root     -16    0     0K  3520K RUN    0   6:01 13.04% {spa_zio_2}
    0   0 root     -16    0     0K  3520K RUN    0   6:00 13.04% {spa_zio_6}
    0   0 root     -16    0     0K  3520K -      0   5:59 12.65% {spa_zio_1}
    0   0 root     -16    0     0K  3520K -      1   6:00 12.11% {spa_zio_4}
   42   0 root      -8    -     0K   480K spa->s 0   4:50  8.54% {txg_thread_enter}
    4   0 root      -8    -     0K    32K -      0   2:13  1.95% g_down
   12   0 root     -40    -     0K   544K WAIT   0   1:25  0.98% {swi2: cambio}
    0   0 root     -16    0     0K  3520K -      0   0:24  0.20% {spa_zio_7}
    0   0 root     -16    0     0K  3520K -      1   0:23  0.20% {spa_zio_3}
   12   0 root     -64    -     0K   544K RUN    0   0:45  0.15% {vec1860: mpt0}
    0   0 root     -16    0     0K  3520K -      1   0:58  0.10% {spa_zio}
   12   0 root     -32    -     0K   544K WAIT   0   1:58  0.05% {swi4: clock}
   42   0 root      -8    -     0K   480K tx->tx 1   0:31  0.05% {txg_thread_enter}
   11   0 root     171 ki31     0K    64K RUN    0 774:48  0.00% {idle: cpu0}

The only thing that seems odd to me is that CPU1 is sitting essentially idle (I have never seen CPU0 be idle while the system is scrubbing). The spa_zio_* threads do in fact run on CPU1, but seemingly rarely.

--nwf;

--c+qGut8k13HZAIeS
Content-Type: application/pgp-signature
Content-Disposition: inline

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.9 (GNU/Linux)

iEYEARECAAYFAkq9GFQACgkQTeQabvr9Tc/2eQCgiBjhY1ELCuRCm5dxGuuNVTHR
6r4AnirXBp0M0nGdAWynt76opn56eG60
=rjj6
-----END PGP SIGNATURE-----

--c+qGut8k13HZAIeS--

From owner-freebsd-fs@FreeBSD.ORG Fri Sep 25 23:51:39 2009 Return-Path: Delivered-To: freebsd-fs@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 3DCE31065679 for ; Fri, 25 Sep 2009 23:51:39 +0000 (UTC) (envelope-from andrew@modulus.org) Received: from email.octopus.com.au (email.octopus.com.au [122.100.2.232]) by mx1.freebsd.org (Postfix) with ESMTP id 00CF98FC18 for ; Fri, 25 Sep 2009 23:51:38 +0000 (UTC) Received: by email.octopus.com.au (Postfix, from userid 1002) id E8CD017DA9; Sat, 26 Sep 2009 09:53:13 +1000 (EST) X-Spam-Checker-Version: SpamAssassin 3.2.3 (2007-08-08) on email.octopus.com.au X-Spam-Level: X-Spam-Status: No, score=-1.4 required=10.0 tests=ALL_TRUSTED autolearn=failed version=3.2.3 Received: from [10.20.30.102] (60.218.233.220.static.exetel.com.au [220.233.218.60]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) (Authenticated sender: admin@email.octopus.com.au) by email.octopus.com.au (Postfix) with ESMTP id 020461723B; Sat, 26 Sep 2009 09:53:09 +1000 (EST) Message-ID: <4ABD56D6.50301@modulus.org> Date: Sat, 26 Sep 2009 09:48:38 +1000 From: Andrew Snow User-Agent: Thunderbird 2.0.0.6 (X11/20070926) MIME-Version: 1.0 To: Scott Ullrich References: In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Cc: freebsd-fs@freebsd.org Subject: Re: Slow disk write IO with ZFS / NFS X-BeenThere: freebsd-fs@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Filesystems
List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 25 Sep 2009 23:51:39 -0000

Scott Ullrich wrote:
> Does anyone have any pointers on how to speed this up without putting
> the data in jeopardy during a power failure, etc. (i.e., leaving the ZIL on)?

NFS writes are synchronous, so ZFS is constantly syncing small data blocks to disk. By default the transaction log is stored on the same disks as the rest of the data.

You might like to try an async NFS mount, but this isn't much different from just turning off the ZIL.

Failing that, add a log device to the pool. An SSD or a ramdisk is ideal, but a separate HDD spindle outside your data zpool also works pretty well.

- Andrew
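[Editor's note: a minimal sketch of the log-device suggestion above. The pool name vmfs is inferred from the exports line earlier in the thread, and da1 stands in for whatever spare SSD or disk is available -- both are assumptions:

    zpool add vmfs log da1     # dedicate da1 to the ZIL (separate log device)
    zpool status vmfs          # the "logs" section should now list da1

Synchronous NFS writes then commit to the dedicated device instead of competing with the data spindles, which is usually what lifts the kind of 5 MB/s figure reported above.]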