From owner-freebsd-fs@FreeBSD.ORG Sun Nov 30 21:00:09 2014
From: bugzilla-noreply@FreeBSD.org
To: freebsd-fs@FreeBSD.org
Date: Sun, 30 Nov 2014 21:00:09 +0000
Subject: Problem reports for freebsd-fs@FreeBSD.org that need special attention

To view an individual PR, use:
  https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=(Bug Id).

The following is a listing of current problems submitted by FreeBSD users
which need special attention. These represent problem reports covering all
versions, including experimental development code and obsolete releases.

Status      |    Bug Id | Description
------------+-----------+---------------------------------------------------
Open        |    136470 | [nfs] Cannot mount / in read-only, over NFS
Open        |    139651 | [nfs] mount(8): read-only remount of NFS volume d
Open        |    144447 | [zfs] sharenfs fsunshare() & fsshare_main() non f

3 problems total for which you should take action.
From owner-freebsd-fs@FreeBSD.ORG Mon Dec 1 13:37:19 2014
From: Dmitriy Makarov
To: freebsd-fs@freebsd.org
Date: Mon, 01 Dec 2014 15:21:33 +0200
Subject: Irregular disk IO and poor performance (possibly after reading a lot of data from pool)

We have a big ZFS pool (16TiB) with 36 disks that are grouped into 18 mirror
devices.

This weekend we were maintaining data on the pool: for two days straight,
16 processes were busy reading files (to calculate checksums and things
like that).

Starting Monday morning, a few hours after the maintenance was finished, we
started to observe abnormal ZFS behaviour, accompanied by very, very poor
pool performance (many processes were blocked in zio->i).

But the strangest thing is how IO is distributed between the mirror devices.
Normally, our 'iostat -x 1' looks like:

device r/s w/s kr/s kw/s qlen svc_t %b
md0 0.0 5.9 0.0 0.0 0 0.0 0
da0 28.7 178.2 799.6 6748.3 1 3.8 58
da1 23.8 180.2 617.9 6748.3 1 3.4 56
da2 44.6 168.3 681.3 6733.9 1 5.2 72
da3 38.6 164.4 650.6 6240.3 1 4.9 65
da4 29.7 176.3 471.3 5935.3 0 4.1 58
da5 27.7 180.2 546.1 6391.3 1 3.9 57
da6 27.7 238.6 555.0 6714.6 0 3.7 68
da7 28.7 239.6 656.0 6714.6 0 3.3 58
da8 26.7 318.8 738.7 8304.4 0 2.5 54
da9 27.7 315.9 725.3 7769.7 0 3.0 77
da10 23.8 268.3 510.0 7663.7 0 2.6 56
da11 32.7 276.3 905.5 7697.9 0 3.4 70
da12 24.8 293.1 559.0 6222.0 2 2.3 53
da13 27.7 285.2 279.7 6058.1 1 2.9 62
da14 29.7 226.8 374.3 5733.3 0 3.2 57
da15 32.7 220.8 532.2 5538.7 1 3.3 65
da16 30.7 165.4 638.2 4537.6 1 3.8 51
da17 39.6 173.3 819.9 4884.2 1 3.2 46
da18 28.7 221.8 765.4 5659.1 1 2.6 42
da19 30.7 214.9 464.4 5417.4 0 4.6 78
da20 32.7 177.2 725.3 4732.7 1 4.0 63
da21 29.7 177.2 448.6 4722.8 0 5.3 66
da22 19.8 153.5 398.6 4168.3 0 2.5 35
da23 16.8 151.5 291.1 4243.6 1 2.9 39
da24 26.7 186.2 547.1 5018.4 1 4.4 68
da25 30.7 190.1 709.0 5096.6 1 5.0 71
da26 28.7 222.8 690.7 5251.1 0 3.0 55
da27 21.8 213.9 572.3 5248.6 0 2.8 49
da28 34.7 177.2 1096.2 5027.8 1 4.9 65
da29 36.6 175.3 1172.9 5012.0 2 4.9 63
da30 22.8 197.1 462.9 5906.6 0 2.8 51
da31 25.7 204.0 445.6 6138.3 0 3.4 62
da32 31.7 170.3 557.0 5600.6 1 4.6 58
da33 33.7 161.4 698.1 5509.5 1 4.8 60
da34 28.7 269.3 473.8 6661.6 1 5.2 77
da35 27.7 268.3 424.3 6440.8 0 5.6 75

kw/s is always distributed pretty much evenly. Now it looks mostly like
this:

device r/s w/s kr/s kw/s qlen svc_t %b
md0 0.0 18.8 0.0 0.0 0 0.0 0
da0 35.7 0.0 1070.9 0.0 0 13.3 37
da1 38.7 0.0 1227.0 0.0 0 12.7 40
da2 25.8 0.0 920.2 0.0 0 12.0 26
da3 26.8 0.0 778.0 0.0 0 10.9 23
da4 22.8 0.0 792.4 0.0 0 14.4 25
da5 26.8 0.0 1050.5 0.0 0 13.4 27
da6 32.7 0.0 1359.3 0.0 0 17.0 41
da7 23.8 229.9 870.7 17318.1 0 3.0 55
da8 58.5 0.0 1813.7 0.0 1 12.9 56
da9 63.4 0.0 1615.0 0.0 0 12.4 61
da10 48.6 0.0 1448.0 0.0 0 16.7 55
da11 49.6 0.0 1148.2 0.0 1 16.7 60
da12 47.6 0.0 1508.4 0.0 0 14.8 46
da13 47.6 0.0 1417.7 0.0 0 17.9 55
da14 44.6 0.0 1997.5 0.0 1 15.6 49
da15 48.6 0.0 2061.4 0.0 1 14.2 47
da16 44.6 0.0 1587.7 0.0 1 16.9 51
da17 45.6 0.0 1326.1 0.0 2 15.7 55
da18 50.5 0.0 1433.6 0.0 2 16.7 57
da19 57.5 0.0 2415.8 0.0 3 20.4 70
da20 52.5 222.0 2097.1 10613.0 5 12.8 100
da21 52.5 256.7 1967.8 11498.5 3 10.6 100
da22 37.7 433.1 1342.4 12880.1 4 5.5 99
da23 42.6 359.8 2304.3 13073.8 5 7.2 101
da24 33.7 0.0 1256.7 0.0 1 15.4 40
da25 26.8 0.0 853.8 0.0 2 15.1 32
da26 23.8 0.0 343.9 0.0 1 12.4 28
da27 26.8 0.0 400.4 0.0 0 12.4 31
da28 15.9 0.0 575.3 0.0 1 11.4 17
da29 20.8 0.0 750.7 0.0 0 14.4 24
da30 37.7 0.0 952.4 0.0 0 12.6 37
da31 29.7 0.0 777.0 0.0 0 13.6 37
da32 54.5 121.9 1824.6 6514.4 7 27.7 100
da33 56.5 116.9 2017.3 6213.6 6 29.7 99
da34 42.6 0.0 1303.3 0.0 1 14.9 43
da35 45.6 0.0 1400.9 0.0 2 14.8 45

Some devices have 0.0 kw/s for long periods of time, then others do, and so
on and so on.
Here are some more results:

device r/s w/s kr/s kw/s qlen svc_t %b
md0 0.0 37.9 0.0 0.0 0 0.0 0
da0 58.9 173.7 1983.5 4585.3 3 11.2 87
da1 49.9 162.7 1656.2 4548.4 3 14.0 95
da2 40.9 187.6 1476.5 3466.6 1 4.8 58
da3 42.9 188.6 1646.7 3466.6 0 5.3 64
da4 54.9 33.9 2222.6 1778.4 1 13.3 63
da5 53.9 37.9 2429.6 1778.4 2 12.9 68
da6 42.9 33.9 1445.1 444.6 0 10.3 45
da7 40.9 28.9 2045.9 444.6 0 12.3 43
da8 53.9 0.0 959.6 0.0 1 22.7 62
da9 29.9 0.0 665.2 0.0 1 52.1 64
da10 52.9 83.8 1845.3 2084.8 2 8.2 64
da11 44.9 103.8 1654.2 4895.2 1 8.8 71
da12 50.9 60.9 1273.0 2078.3 1 10.3 69
da13 39.9 57.9 940.1 2078.3 0 15.4 75
da14 45.9 72.9 977.0 3178.6 0 8.5 63
da15 48.9 72.9 1000.5 3178.6 0 9.6 72
da16 42.9 74.9 1187.6 2118.8 1 6.7 51
da17 48.9 82.8 1651.7 3013.0 0 5.7 52
da18 67.9 78.8 2735.5 2456.1 0 11.5 75
da19 52.9 79.8 2436.6 2456.1 0 13.1 82
da20 48.9 91.8 2623.8 1682.6 1 7.2 60
da21 52.9 92.8 1893.2 1682.6 0 7.1 61
da22 67.9 20.0 2518.0 701.1 0 13.5 79
da23 68.9 23.0 3331.8 701.1 1 13.6 77
da24 45.9 17.0 2148.7 369.8 1 11.6 47
da25 36.9 18.0 1747.5 369.8 1 12.6 46
da26 46.9 1.0 1873.3 0.5 0 21.3 55
da27 38.9 1.0 1395.7 0.5 0 34.6 58
da28 34.9 9.0 1523.5 53.9 0 14.1 39
da29 26.9 10.0 1124.8 53.9 1 13.8 28
da30 44.9 0.0 1887.2 0.0 0 18.8 50
da31 47.9 0.0 2273.0 0.0 0 20.2 49
da32 65.9 90.8 2221.6 1730.5 3 9.7 77
da33 79.8 90.8 3304.9 1730.5 1 9.9 88
da34 75.8 134.7 3638.7 3938.1 2 10.2 90
da35 49.9 209.6 1792.4 5756.0 2 8.1 85

md0 0.0 19.0 0.0 0.0 0 0.0 0
da0 38.0 194.8 1416.1 1175.8 1 10.6 100
da1 40.0 190.8 1424.6 1072.9 2 10.4 100
da2 37.0 0.0 1562.4 0.0 0 14.9 40
da3 31.0 0.0 1169.8 0.0 0 14.0 33
da4 44.0 0.0 2632.4 0.0 0 18.0 45
da5 41.0 0.0 1944.6 0.0 0 19.0 45
da6 38.0 0.0 1786.2 0.0 1 18.4 44
da7 45.0 0.0 2275.7 0.0 0 16.0 48
da8 80.9 0.0 4151.3 0.0 2 24.1 85
da9 83.9 0.0 3256.2 0.0 3 21.2 83
da10 61.9 0.0 3657.3 0.0 1 18.9 65
da11 53.9 0.0 2532.5 0.0 1 18.7 56
da12 54.9 0.0 2650.8 0.0 0 18.9 60
da13 48.0 0.0 1975.5 0.0 0 19.6 53
da14 43.0 0.0 1802.7 0.0 2 14.1 43
da15 49.0 0.0 2455.5 0.0 0 14.0 48
da16 45.0 0.0 1521.5 0.0 1 16.0 50
da17 45.0 0.0 1650.8 0.0 4 13.7 47
da18 48.0 0.0 1618.9 0.0 1 15.0 54
da19 47.0 0.0 1982.0 0.0 0 16.5 55
da20 52.9 0.0 2186.3 0.0 0 19.8 65
da21 61.9 0.0 3020.5 0.0 0 16.3 61
da22 70.9 0.0 3309.7 0.0 1 15.5 67
da23 67.9 0.0 2742.3 0.0 2 16.5 73
da24 38.0 0.0 1426.1 0.0 1 15.5 40
da25 41.0 0.0 1905.6 0.0 1 14.0 39
da26 43.0 0.0 2371.1 0.0 0 14.2 40
da27 46.0 0.0 2178.3 0.0 0 15.2 45
da28 44.0 0.0 2092.9 0.0 0 12.4 43
da29 41.0 0.0 1442.1 0.0 1 13.4 37
da30 42.0 37.0 1171.3 645.9 1 17.5 62
da31 27.0 67.9 713.8 290.7 0 16.7 64
da32 47.0 0.0 1043.5 0.0 0 13.3 43
da33 50.0 0.0 1741.3 0.0 1 15.7 57
da34 42.0 0.0 1119.9 0.0 0 18.2 55
da35 45.0 0.0 1071.4 0.0 0 15.7 55

The first thing we did was try a reboot. It took the system more than 5
minutes to import the pool (normally it takes a fraction of a second).
Needless to say, the reboot did not help a bit.

What can we do about this problem?
System info:
FreeBSD 11.0-CURRENT #5 r260625

zpool get all disk1
NAME  PROPERTY  VALUE  SOURCE
disk1  size  16,3T  -
disk1  capacity  59%  -
disk1  altroot  -  default
disk1  health  ONLINE  -
disk1  guid  4909337477172007488  default
disk1  version  -  default
disk1  bootfs  -  default
disk1  delegation  on  default
disk1  autoreplace  off  default
disk1  cachefile  -  default
disk1  failmode  wait  default
disk1  listsnapshots  off  default
disk1  autoexpand  off  default
disk1  dedupditto  0  default
disk1  dedupratio  1.00x  -
disk1  free  6,56T  -
disk1  allocated  9,76T  -
disk1  readonly  off  -
disk1  comment  -  default
disk1  expandsize  0  -
disk1  freeing  0  default
disk1  feature@async_destroy  enabled  local
disk1  feature@empty_bpobj  active  local
disk1  feature@lz4_compress  active  local
disk1  feature@multi_vdev_crash_dump  enabled  local
disk1  feature@spacemap_histogram  active  local
disk1  feature@enabled_txg  active  local
disk1  feature@hole_birth  active  local
disk1  feature@extensible_dataset  enabled  local
disk1  feature@bookmarks  enabled  local

zfs get all disk1
NAME  PROPERTY  VALUE  SOURCE
disk1  type  filesystem  -
disk1  creation  Wed Sep 18 11:47 2013  -
disk1  used  9,75T  -
disk1  available  6,30T  -
disk1  referenced  9,74T  -
disk1  compressratio  1.63x  -
disk1  mounted  yes  -
disk1  quota  none  default
disk1  reservation  none  default
disk1  recordsize  128K  default
disk1  mountpoint  /.........  local
disk1  sharenfs  off  default
disk1  checksum  on  default
disk1  compression  lz4  local
disk1  atime  off  local
disk1  devices  on  default
disk1  exec  off  local
disk1  setuid  off  local
disk1  readonly  off  default
disk1  jailed  off  default
disk1  snapdir  hidden  default
disk1  aclmode  discard  default
disk1  aclinherit  restricted  default
disk1  canmount  on  default
disk1  xattr  off  temporary
disk1  copies  1  default
disk1  version  5  -
disk1  utf8only  off  -
disk1  normalization  none  -
disk1  casesensitivity  sensitive  -
disk1  vscan  off  default
disk1  nbmand  off  default
disk1  sharesmb  off  default
disk1  refquota  none  default
disk1  refreservation  none  default
disk1  primarycache  all  default
disk1  secondarycache  none  local
disk1  usedbysnapshots  0  -
disk1  usedbydataset  9,74T  -
disk1  usedbychildren  9,71G  -
disk1  usedbyrefreservation  0  -
disk1  logbias  latency  default
disk1  dedup  off  default
disk1  mlslabel  -
disk1  sync  standard  local
disk1  refcompressratio  1.63x  -
disk1  written  9,74T  -
disk1  logicalused  15,8T  -
disk1  logicalreferenced  15,8T  -

This is very severe, thanks.
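[A minimal sh sketch for eyeballing the pattern described above: it sums
kw/s per mirror from one iostat pass, assuming consecutive da devices form
the 18 mirrors (da0/da1, da2/da3, ...) as the listings suggest; the real
vdev membership should be taken from 'zpool status'. Idle mirrors then
stand out as pairs that sum to 0.0:]

  #!/bin/sh
  # Take two reports one second apart; the first is the since-boot
  # average, so only the second report (pass 2) is summed.
  iostat -x -w 1 -c 2 | awk '
      /^device/ { pass++ }              # a header line precedes each report
      pass == 2 && $1 ~ /^da[0-9]+$/ {
          n = substr($1, 3) + 0         # "da17" -> 17
          kw[int(n / 2)] += $5          # kw/s is column 5 of iostat -x
      }
      END {
          for (p = 0; p < 18; p++)
              printf "mirror %2d (da%d/da%d): %9.1f kw/s\n",
                     p, 2 * p, 2 * p + 1, kw[p]
      }'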
From owner-freebsd-fs@FreeBSD.ORG Mon Dec 1 13:46:52 2014
From: InterNetX - Juergen Gotteswinter
Reply-To: jg@internetx.com
To: Dmitriy Makarov, freebsd-fs@freebsd.org
Date: Mon, 01 Dec 2014 14:38:33 +0100
Subject: Re: Irregular disk IO and poor performance (possibly after reading a lot of data from pool)

One/more disks broken or tend to die soon?

On 01.12.2014 14:21, Dmitriy Makarov wrote:
> We have a big ZFS pool (16TiB) with 36 disks that are grouped into 18
> mirror devices.
> [...]
From owner-freebsd-fs@FreeBSD.ORG Mon Dec 1 17:28:29 2014
From: Steven Hartland
To: freebsd-fs@freebsd.org
Date: Mon, 01 Dec 2014 17:30:18 +0000
Subject: Re: Irregular disk IO and poor performance (possibly after reading a lot of data from pool)

What disks?

On 01/12/2014 13:21, Dmitriy Makarov wrote:
> We have a big ZFS pool (16TiB) with 36 disks that are grouped into 18
> mirror devices.
> [...]
From owner-freebsd-fs@FreeBSD.ORG Mon Dec 1 17:49:01 2014
From: InterNetX - Juergen Gotteswinter
Reply-To: juergen.gotteswinter@internetx.com
To: freebsd-fs@freebsd.org
Date: Mon, 01 Dec 2014 18:48:37 +0100
Subject: Re: Irregular disk IO and poor performance (possibly after reading a lot of data from pool)

That mountpoint looks kinda strange, too? Could you add the output of

  zpool status -vv

gstat -c could be helpful, too, as well as good old smartctl -a.

On 01.12.2014 18:30, Steven Hartland wrote:
> disk1  mountpoint  /.........  local
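[All three can be captured non-interactively; a sketch, with 'disk1' being
the pool name from the original post, and gstat's -b (batch: print one
report and exit) and -f (regex filter on provider names) per gstat(8):]

  zpool status -v disk1
  gstat -b -f '^da[0-9]+$'
  smartctl -a /dev/da0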
From owner-freebsd-fs@FreeBSD.ORG Mon Dec 1 17:56:51 2014
From: Karl Denninger
To: freebsd-fs@freebsd.org
Date: Mon, 01 Dec 2014 12:56:39 -0500
Subject: Re: Irregular disk IO and poor performance (possibly after reading a lot of data from pool)

You may have one or more drives taking seek errors and retrying; I've seen
similar behavior. Smartctl *might* disclose this -- the bad news is that
some drives do not log these events!

-- Karl (On Passport PDA)

  Original Message
From: InterNetX - Juergen Gotteswinter
Sent: Monday, December 1, 2014 12:49
To: freebsd-fs@freebsd.org
Subject: Re: Irregular disk IO and poor performance (possibly after reading a lot of data from pool)

[...]
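[A sketch of the sweep Karl is suggesting, assuming smartmontools is
installed and the pool members are da0..da35 as in the iostat listings;
the egrep keeps the health verdict, the reallocation counters and the ATA
error log count, which are what usually betray a drive quietly retrying:]

  #!/bin/sh
  for n in $(jot 36 0); do              # 0..35
      echo "=== da${n} ==="
      smartctl -a /dev/da${n} | egrep -i \
          'overall-health|Reallocated_Sector|Current_Pending|Offline_Uncorrectable|UDMA_CRC|ATA Error Count'
  done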
From owner-freebsd-fs@FreeBSD.ORG Mon Dec 1 20:25:28 2014
From: Dmitriy Makarov
To: freebsd-fs@freebsd.org
Date: Mon, 1 Dec 2014 08:36:20 -0700 (MST)
Subject: Re: Irregular disk IO and poor performance (possibly after reading a lot of data from pool)

InterNetX - Juergen Gotteswinter-2 wrote:
> one/more disks broken or tend to die soon?

Nope, at least it does not seem like that. SMART is OK for all drives:

OK - HDD S.M.A.R.T health: src=0, rsc=0, rec=0, cps=0, ou=0,
HEALTH_STATUS=PASSED

The drives are not so old either, ~1 year. I doubt it has anything to do
with the drives, for two reasons:

* paired devices in a mirror behave pretty much identically
* different pairs have 0 write I/O at different points in time, and it's
  not a matter of seconds, it is a matter of minutes

It seems as if ZFS decides to ignore some mirror devices for an unknown
reason. Randomly. Sometimes just 2-3 of 18 are idle. Sometimes it's 17 of
18! Needless to say, at that moment the one poor mirror left is doing all
the hard work by itself, at which point the system almost freezes and all
I/O-bound processes are blocked waiting for disk.
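[This mirror-level reading can also be checked directly rather than
inferred from da-device pairs: 'zpool iostat -v' (a standard command)
breaks the same counters down per vdev, so an ignored mirror shows up as a
whole row of zeros in the write columns:]

  zpool iostat -v disk1 1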
From owner-freebsd-fs@FreeBSD.ORG Mon Dec 1 21:05:59 2014
From: Dmitriy Makarov
To: freebsd-fs@freebsd.org
Date: Mon, 1 Dec 2014 14:05:58 -0700 (MST)
Subject: Re: Irregular disk IO and poor performance (possibly after reading a lot of data from pool)

Steven Hartland wrote:
> What disks?

Western Digital RE4 series, WD1003FBYX.

From owner-freebsd-fs@FreeBSD.ORG Mon Dec 1 22:49:50 2014
From: Dmitry Morozovsky <marck@rinet.ru>
To: Dmitriy Makarov
Cc: freebsd-fs@freebsd.org
Date: Tue, 2 Dec 2014 01:05:39 +0300 (MSK)
Subject: Re: Irregular disk IO and poor performance (possibly after reading a lot of data from pool)

On Mon, 1 Dec 2014, Dmitriy Makarov wrote:

> Steven Hartland wrote
>> What disks?
> Western Digital RE4 series
> WD1003FBYX
Maybe controller issues are involved as well. ``zpool status'' and the
appropriate portions of /var/run/dmesg.boot could be useful.

-- 
Sincerely, D.Marck [DM5020, MCK-RIPE, DM3-RIPN]
[ FreeBSD committer: marck@FreeBSD.org ]
*** Dmitry Morozovsky --- D.Marck --- Wild Woozle --- marck@rinet.ru ***
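[A sketch of pulling the portions being asked for; the driver names in the
pattern (mps/mpt/ahci/ciss) are guesses, since the thread never names the
controller:]

  zpool status disk1
  egrep '^(da[0-9]+|mps|mpt|ahci|ciss)' /var/run/dmesg.boot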
From owner-freebsd-fs@FreeBSD.ORG Tue Dec 2 14:10:54 2014
From: Mark Schouten
To: freebsd-fs@freebsd.org
Date: Tue, 2 Dec 2014 14:40:39 +0100
Subject: 'Read only' (L2)Arc?

Hi,

Something fishy seems to be going on on my storage box. It has been running
10.1-RELEASE for about two weeks. Before, I was used to memory being fully
used by the ARC (as it should be). But since the upgrade I see a different
pattern: the ARC first filled up to 45GB, then reduced itself to 8GB, the
minimum, and has been there for a week. Over the same period the L2ARC did
the same thing and has been shrinking since.

I've got no clue why this is; I'm hoping you guys do. I have three
attachments:

1: Graph of L2ARC size
2: Graph of ARC size
3: Output of sysctl -a | grep zfs

Thanks in advance.

Kind regards,

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten | Tuxis Internet Engineering
KvK: 61527076 | http://www.tuxis.nl/
T: 0318 200208 | info@tuxis.nl

[Attachment: sysctl.txt]

vfs.zfs.arc_max: 65782300672
vfs.zfs.arc_min: 8222787584
vfs.zfs.arc_average_blocksize: 8192
vfs.zfs.arc_meta_used: 1409694752
vfs.zfs.arc_meta_limit: 16445575168
vfs.zfs.l2arc_write_max: 268435456
vfs.zfs.l2arc_write_boost: 268435456
vfs.zfs.l2arc_headroom: 2
vfs.zfs.l2arc_feed_secs: 1
vfs.zfs.l2arc_feed_min_ms: 200
vfs.zfs.l2arc_noprefetch: 0
vfs.zfs.l2arc_feed_again: 1
vfs.zfs.l2arc_norw: 1
vfs.zfs.anon_size: 64661504
vfs.zfs.anon_metadata_lsize: 0
vfs.zfs.anon_data_lsize: 0
vfs.zfs.mru_size: 492498944
vfs.zfs.mru_metadata_lsize: 133989376
vfs.zfs.mru_data_lsize: 250498048
vfs.zfs.mru_ghost_size: 7699614208
vfs.zfs.mru_ghost_metadata_lsize: 1445447168
vfs.zfs.mru_ghost_data_lsize: 6254167040
vfs.zfs.mfu_size: 283578880
vfs.zfs.mfu_metadata_lsize: 211456
vfs.zfs.mfu_data_lsize: 49172480
vfs.zfs.mfu_ghost_size: 522598912
vfs.zfs.mfu_ghost_metadata_lsize: 285305856
vfs.zfs.mfu_ghost_data_lsize: 237293056
vfs.zfs.l2c_only_size: 146356476928
vfs.zfs.dedup.prefetch: 1
vfs.zfs.nopwrite_enabled: 1
vfs.zfs.mdcomp_disable: 0
vfs.zfs.dirty_data_max: 4294967296
vfs.zfs.dirty_data_max_max: 4294967296
vfs.zfs.dirty_data_max_percent: 10
vfs.zfs.dirty_data_sync: 67108864
vfs.zfs.delay_min_dirty_percent: 60
vfs.zfs.delay_scale: 500000
vfs.zfs.prefetch_disable: 0
vfs.zfs.zfetch.max_streams: 8
vfs.zfs.zfetch.min_sec_reap: 2
vfs.zfs.zfetch.block_cap: 256
vfs.zfs.zfetch.array_rd_sz: 1048576
vfs.zfs.top_maxinflight: 32
vfs.zfs.resilver_delay: 2
vfs.zfs.scrub_delay: 4
vfs.zfs.scan_idle: 50
vfs.zfs.scan_min_time_ms: 1000
vfs.zfs.free_min_time_ms: 1000
vfs.zfs.resilver_min_time_ms: 3000
vfs.zfs.no_scrub_io: 0
vfs.zfs.no_scrub_prefetch: 0
vfs.zfs.metaslab.gang_bang: 131073
vfs.zfs.metaslab.fragmentation_threshold: 70
vfs.zfs.metaslab.debug_load: 0
vfs.zfs.metaslab.debug_unload: 0
vfs.zfs.metaslab.df_alloc_threshold: 131072
vfs.zfs.metaslab.df_free_pct: 4
vfs.zfs.metaslab.min_alloc_size: 10485760
vfs.zfs.metaslab.load_pct: 50
vfs.zfs.metaslab.unload_delay: 8
vfs.zfs.metaslab.preload_limit: 3
vfs.zfs.metaslab.preload_enabled: 1
vfs.zfs.metaslab.fragmentation_factor_enabled: 1
vfs.zfs.metaslab.lba_weighting_enabled: 1
vfs.zfs.metaslab.bias_enabled: 1
vfs.zfs.condense_pct: 200
vfs.zfs.mg_noalloc_threshold: 0
vfs.zfs.mg_fragmentation_threshold: 85
vfs.zfs.check_hostid: 1
vfs.zfs.spa_load_verify_maxinflight: 10000
vfs.zfs.spa_load_verify_metadata: 1
vfs.zfs.spa_load_verify_data: 1
vfs.zfs.recover: 0
vfs.zfs.deadman_synctime_ms: 1000000
vfs.zfs.deadman_checktime_ms: 5000
vfs.zfs.deadman_enabled: 1
vfs.zfs.spa_asize_inflation: 24
vfs.zfs.txg.timeout: 5
vfs.zfs.vdev.cache.max: 16384
vfs.zfs.vdev.cache.size: 0
vfs.zfs.vdev.cache.bshift: 16
vfs.zfs.vdev.trim_on_init: 1
vfs.zfs.vdev.mirror.rotating_inc: 0
vfs.zfs.vdev.mirror.rotating_seek_inc: 5
vfs.zfs.vdev.mirror.rotating_seek_offset: 1048576
vfs.zfs.vdev.mirror.non_rotating_inc: 0
vfs.zfs.vdev.mirror.non_rotating_seek_inc: 1
vfs.zfs.vdev.max_active: 1000
vfs.zfs.vdev.sync_read_min_active: 10
vfs.zfs.vdev.sync_read_max_active: 10
vfs.zfs.vdev.sync_write_min_active: 10
vfs.zfs.vdev.sync_write_max_active: 10
vfs.zfs.vdev.async_read_min_active: 1
vfs.zfs.vdev.async_read_max_active: 3
vfs.zfs.vdev.async_write_min_active: 1
vfs.zfs.vdev.async_write_max_active: 10
vfs.zfs.vdev.scrub_min_active: 1
vfs.zfs.vdev.scrub_max_active: 2
vfs.zfs.vdev.trim_min_active: 1
vfs.zfs.vdev.trim_max_active: 64
vfs.zfs.vdev.aggregation_limit: 131072
vfs.zfs.vdev.read_gap_limit: 32768
vfs.zfs.vdev.write_gap_limit: 4096
vfs.zfs.vdev.bio_flush_disable: 0
vfs.zfs.vdev.bio_delete_disable: 0
vfs.zfs.vdev.trim_max_bytes: 2147483648
vfs.zfs.vdev.trim_max_pending: 64
vfs.zfs.max_auto_ashift: 13
vfs.zfs.min_auto_ashift: 9
vfs.zfs.zil_replay_disable: 0
vfs.zfs.cache_flush_disable: 0
vfs.zfs.zio.use_uma: 1
vfs.zfs.zio.exclude_metadata: 0
vfs.zfs.sync_pass_deferred_free: 2
vfs.zfs.sync_pass_dont_compress: 5
vfs.zfs.sync_pass_rewrite: 2
vfs.zfs.snapshot_list_prefetch: 0
vfs.zfs.super_owner: 0
vfs.zfs.debug: 0
vfs.zfs.version.ioctl: 4
vfs.zfs.version.acl: 1
vfs.zfs.version.spa: 5000
vfs.zfs.version.zpl: 5
vfs.zfs.vol.mode: 1
vfs.zfs.trim.enabled: 1
vfs.zfs.trim.txg_delay: 32
vfs.zfs.trim.timeout: 30
vfs.zfs.trim.max_interval: 1
debug.zfs_flags: 0
security.jail.mount_zfs_allowed: 0
security.jail.param.allow.mount.zfs: 0
kstat.zfs.misc.zio_trim.bytes: 28368633909248
kstat.zfs.misc.zio_trim.success: 10610317
kstat.zfs.misc.zio_trim.unsupported: 2244
kstat.zfs.misc.zio_trim.failed: 0
kstat.zfs.misc.xuio_stats.onloan_read_buf: 0
kstat.zfs.misc.xuio_stats.onloan_write_buf: 0
kstat.zfs.misc.xuio_stats.read_buf_copied: 0
kstat.zfs.misc.xuio_stats.read_buf_nocopy: 0
kstat.zfs.misc.xuio_stats.write_buf_copied: 0
kstat.zfs.misc.xuio_stats.write_buf_nocopy: 0
kstat.zfs.misc.zfetchstats.hits: 5650597782
kstat.zfs.misc.zfetchstats.misses: 849031569
kstat.zfs.misc.zfetchstats.colinear_hits: 769765
kstat.zfs.misc.zfetchstats.colinear_misses: 848261804
kstat.zfs.misc.zfetchstats.stride_hits: 5563183638
kstat.zfs.misc.zfetchstats.stride_misses: 2727991
kstat.zfs.misc.zfetchstats.reclaim_successes: 11376434
kstat.zfs.misc.zfetchstats.reclaim_failures: 836885370
kstat.zfs.misc.zfetchstats.streams_resets: 413558
kstat.zfs.misc.zfetchstats.streams_noresets: 87388981
kstat.zfs.misc.zfetchstats.bogus_streams: 0
kstat.zfs.misc.zcompstats.attempts: 269231227
kstat.zfs.misc.zcompstats.empty: 1104450
kstat.zfs.misc.zcompstats.skipped_insufficient_gain: 8491217
kstat.zfs.misc.arcstats.hits: 1666766580
kstat.zfs.misc.arcstats.misses: 140208837
kstat.zfs.misc.arcstats.demand_data_hits: 1072854567
kstat.zfs.misc.arcstats.demand_data_misses: 49203909
kstat.zfs.misc.arcstats.demand_metadata_hits: 324180068
kstat.zfs.misc.arcstats.demand_metadata_misses: 9682528
kstat.zfs.misc.arcstats.prefetch_data_hits: 89947524
kstat.zfs.misc.arcstats.prefetch_data_misses: 65500472
kstat.zfs.misc.arcstats.prefetch_metadata_hits: 179784421
kstat.zfs.misc.arcstats.prefetch_metadata_misses: 15821928
kstat.zfs.misc.arcstats.mru_hits: 204368154
kstat.zfs.misc.arcstats.mru_ghost_hits: 41243944
kstat.zfs.misc.arcstats.mfu_hits: 1220469772
kstat.zfs.misc.arcstats.mfu_ghost_hits: 21110088
kstat.zfs.misc.arcstats.allocated: 182120274
kstat.zfs.misc.arcstats.deleted: 80838028
kstat.zfs.misc.arcstats.stolen: 110356127
kstat.zfs.misc.arcstats.recycle_miss: 30441978
kstat.zfs.misc.arcstats.mutex_miss: 6648251
kstat.zfs.misc.arcstats.evict_skip: 1376465560
kstat.zfs.misc.arcstats.evict_l2_cached: 7222279177216
kstat.zfs.misc.arcstats.evict_l2_eligible: 5461995128832
kstat.zfs.misc.arcstats.evict_l2_ineligible: 2521983162368
kstat.zfs.misc.arcstats.hash_elements: 1741070
kstat.zfs.misc.arcstats.hash_elements_max: 5657099
kstat.zfs.misc.arcstats.hash_collisions: 79156362
kstat.zfs.misc.arcstats.hash_chains: 157646
kstat.zfs.misc.arcstats.hash_chain_max: 9
kstat.zfs.misc.arcstats.p: 556315264
kstat.zfs.misc.arcstats.c: 8222787584
kstat.zfs.misc.arcstats.c_min: 8222787584
kstat.zfs.misc.arcstats.c_max: 65782300672
kstat.zfs.misc.arcstats.size: 1773760728
kstat.zfs.misc.arcstats.hdr_size: 51633000
kstat.zfs.misc.arcstats.data_size: 840611328
kstat.zfs.misc.arcstats.other_size: 554611632
kstat.zfs.misc.arcstats.l2_hits: 19690249
kstat.zfs.misc.arcstats.l2_misses: 120518565
kstat.zfs.misc.arcstats.l2_feeds: 656530
kstat.zfs.misc.arcstats.l2_rw_clash: 12694
kstat.zfs.misc.arcstats.l2_read_bytes: 827995341312
kstat.zfs.misc.arcstats.l2_write_bytes: 1680951642112
kstat.zfs.misc.arcstats.l2_writes_sent: 451965
kstat.zfs.misc.arcstats.l2_writes_done: 451965
kstat.zfs.misc.arcstats.l2_writes_error: 0
kstat.zfs.misc.arcstats.l2_writes_hdr_miss: 3231
kstat.zfs.misc.arcstats.l2_evict_lock_retry: 2093
kstat.zfs.misc.arcstats.l2_evict_reading: 0
kstat.zfs.misc.arcstats.l2_free_on_write: 153566
kstat.zfs.misc.arcstats.l2_abort_lowmem: 486872
kstat.zfs.misc.arcstats.l2_cksum_bad: 0
kstat.zfs.misc.arcstats.l2_io_error: 0
kstat.zfs.misc.arcstats.l2_size: 148253359616
kstat.zfs.misc.arcstats.l2_asize: 91068383232
kstat.zfs.misc.arcstats.l2_hdr_size: 375335104
kstat.zfs.misc.arcstats.l2_compress_successes: 27450275
kstat.zfs.misc.arcstats.l2_compress_zeros: 0
kstat.zfs.misc.arcstats.l2_compress_failures: 5123211
kstat.zfs.misc.arcstats.l2_write_trylock_fail: 631401417
kstat.zfs.misc.arcstats.l2_write_passed_headroom: 9703416
kstat.zfs.misc.arcstats.l2_write_spa_mismatch: 4021
kstat.zfs.misc.arcstats.l2_write_in_l2: 226140870690
kstat.zfs.misc.arcstats.l2_write_io_in_progress: 980
kstat.zfs.misc.arcstats.l2_write_not_cacheable: 24646969868
kstat.zfs.misc.arcstats.l2_write_full: 711
kstat.zfs.misc.arcstats.l2_write_buffer_iter: 656530
kstat.zfs.misc.arcstats.l2_write_pios: 451965
kstat.zfs.misc.arcstats.l2_write_buffer_bytes_scanned: 14023487461520384
kstat.zfs.misc.arcstats.l2_write_buffer_list_iter: 42007617
kstat.zfs.misc.arcstats.l2_write_buffer_list_null_iter: 79792
kstat.zfs.misc.arcstats.memory_throttle_count: 0
kstat.zfs.misc.arcstats.duplicate_buffers: 0
kstat.zfs.misc.arcstats.duplicate_buffers_size: 0
kstat.zfs.misc.arcstats.duplicate_reads: 0
kstat.zfs.misc.vdev_cache_stats.delegations: 0
kstat.zfs.misc.vdev_cache_stats.hits: 0
kstat.zfs.misc.vdev_cache_stats.misses: 0

--=-pGdjF7uNB2Iur/NfB2Hr--
From owner-freebsd-fs@FreeBSD.ORG Tue Dec 2 21:25:11 2014
Return-Path: Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 0A51929B for ; Tue, 2 Dec 2014 21:25:11 +0000 (UTC)
Received: from kerio.tuxis.nl (alcyone.saas.tuxis.net [31.3.111.19]) (using TLSv1.1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 7DA70608 for ; Tue, 2 Dec 2014 21:25:09 +0000 (UTC)
Received: from [87.212.163.171] ([87.212.163.171]) by kerio.tuxis.nl (Kerio Connect 8.4.0) for freebsd-fs@freebsd.org; Tue, 2 Dec 2014 22:25:05 +0100
Date: Tue, 2 Dec 2014 22:25:05 +0100
Subject: Re: 'Read only' (L2)Arc?
X-Mailer: Kerio Connect 8.4.0/Kerio Connect client
Message-ID: <215276906-618@kerio.tuxis.nl>
In-Reply-To: <188117622-17103@kerio.tuxis.nl>
MIME-Version: 1.0
From: Mark Schouten
To: freebsd-fs@freebsd.org
Content-Type: multipart/signed; protocol="application/pkcs7-signature"; micalg="sha1"; boundary="=-deQTP1Hd9ILuP1KfNbXA"

--=-deQTP1Hd9ILuP1KfNbXA
Content-Type: text/plain; charset="utf-8"

Hi,

Mark Schouten, 2-12-2014 15:11:

> At the same time, the L2ARC did the same thing, and has been shrinking since.
>
> I've got no clue why this is, I'm hoping you guys do. I have three attachments:

So I've been looking some more into this issue. I noticed that 'Wired' memory is high (53G), so I started wondering what used that memory. I stumbled upon this thread: http://lists.freebsd.org/pipermail/freebsd-current/2014-January/047706.html

So I had a look at `vmstat -z` and saw that zio_data_buf_131072 is using about 46GB.

While writing this, I also noticed https://svnweb.freebsd.org/base?view=revision&revision=272875 which might be related?

Met vriendelijke groeten,

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/

Mark Schouten | Tuxis Internet Engineering
KvK: 61527076 | http://www.tuxis.nl/
T: 0318 200208 | info@tuxis.nl

--=-deQTP1Hd9ILuP1KfNbXA--
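For anyone who wants to repeat the check described above, a minimal sketch (assuming the stock FreeBSD vmstat; the zone name is the one quoted in the message):

  # Wired memory held by the 128K ZIO data buffer zone, in GiB.
  # vmstat -z rows look like: NAME: SIZE, LIMIT, USED, FREE, REQ, FAIL, SLEEP
  vmstat -z | awk -F'[:,]' '/^zio_data_buf_131072/ { printf "%.1f GiB\n", $2 * ($4 + $5) / (1024 ^ 3) }'

USED plus FREE items times the item size is what the zone keeps pinned; a large FREE count is memory the ARC has given back but UMA has not yet reclaimed, which matches the wired-memory symptom described.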
From owner-freebsd-fs@FreeBSD.ORG Wed Dec 3 09:31:40 2014
Return-Path: Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id EE4C8A44 for ; Wed, 3 Dec 2014 09:31:40 +0000 (UTC)
Received: from smtp.ehu.es (smtp.lg.ehu.es [158.227.0.66]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id A93DD952 for ; Wed, 3 Dec 2014 09:31:40 +0000 (UTC)
Received: from smtp.ehu.es (localhost.localdomain [127.0.0.1]) by postfix.imss71 (Postfix) with ESMTP id BBDDD77D2 for ; Wed, 3 Dec 2014 10:22:56 +0100 (CET)
Received: from ncc-1701.we.lc.ehu.es (ncc-1701.we.lc.ehu.es [158.227.6.85]) by smtp2 (Postfix) with ESMTPSA id AC84177C7 for ; Wed, 3 Dec 2014 10:22:56 +0100 (CET)
From: José María Alcaide
Content-Type: text/plain; charset=us-ascii
Subject: Using boot0 to redirect booting to another disk?
Message-Id: <27E65CD9-4C97-4528-B218-A01EDB1B8CB1@ehu.es>
Date: Wed, 3 Dec 2014 10:22:56 +0100
To: freebsd-fs@freebsd.org
Mime-Version: 1.0 (Mac OS X Mail 8.1 \(1993\))
X-Mailer: Apple Mail (2.1993)

Hi. I have an HP Proliant Microserver Gen8. Nice machine, but a picky BIOS. When its hard disk controller is configured in SATA AHCI mode, there is no way to choose the boot disk among those connected to the SATA ports. I have four HDD attached to the backplane, and another drive connected to a fifth SATA port originally intended for an optical drive. The four HDD are arranged in a RAIDZ. Currently the machine is booting from a USB flash device, but I would like to boot the FreeBSD installed on the fifth drive.
The disk controller sees and reports the five disks just fine, but as I said above, there is no way to choose the fifth disk as a boot device.

I wondered whether I could use boot0 to redirect the boot from a USB flash device (a pendrive or, still better, a microSD) to the fifth drive. The idea comes from the fact that boot0 shows an "F5 - Drive 2" option when it detects more than one drive. I tried to understand how boot0 works by reading its source code, and I experimented with boot0cfg's "-d disk" and "-o setdrv" options, to no avail. So I decided to ask for help. :)

Is that possible? Any help will be greatly appreciated.

-- 
Jose M. Alcaide

From owner-freebsd-fs@FreeBSD.ORG Wed Dec 3 09:35:00 2014
Return-Path: Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id CC821D51 for ; Wed, 3 Dec 2014 09:35:00 +0000 (UTC)
Received: from lucifer.we.lc.ehu.es (lucifer.we.lc.ehu.es [158.227.6.50]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client CN "lucifer.we.lc.ehu.es", Issuer "CA Dpto Electricidad y Electronica" (not verified)) by mx1.freebsd.org (Postfix) with ESMTPS id 51FB39A2 for ; Wed, 3 Dec 2014 09:34:59 +0000 (UTC)
Received: from ncc-1701.we.lc.ehu.es (ncc-1701.we.lc.ehu.es [158.227.6.85]) (authenticated bits=0) by lucifer.we.lc.ehu.es (8.13.1/8.13.1) with ESMTP id sB39SBvl005895 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO) for ; Wed, 3 Dec 2014 10:28:12 +0100 (CET) (envelope-from jose@we.lc.ehu.es)
From: José María Alcaide
Content-Type: text/plain; charset=us-ascii
Subject: Using boot0 to redirect booting to another disk?
Date: Wed, 3 Dec 2014 10:28:11 +0100
To: freebsd-fs@freebsd.org
Mime-Version: 1.0 (Mac OS X Mail 8.1 \(1993\))
X-Mailer: Apple Mail (2.1993)

(I previously sent this post from the wrong email address, so I'm sending it again. My apologies if it shows twice.)

---

Hi. I have an HP Proliant Microserver Gen8. Nice machine, but a picky BIOS. When its hard disk controller is configured in SATA AHCI mode, there is no way to choose the boot disk among those connected to the SATA ports. I have four HDD attached to the backplane, and another drive connected to a fifth SATA port originally intended for an optical drive. The four HDD are arranged in a RAIDZ. Currently the machine is booting from a USB flash device, but I would like to boot the FreeBSD installed on the fifth drive. The disk controller sees and reports the five disks just fine, but as I said above, there is no way to choose the fifth disk as a boot device.

I wondered whether I could use boot0 to redirect the boot from a USB flash device (a pendrive or, still better, a microSD) to the fifth drive. The idea comes from the fact that boot0 shows an "F5 - Drive 2" option when it detects more than one drive.
I tried to understand how boot0 works by reading its source code, and I experimented with boot0cfg's "-d disk" and "-o setdrv" options, to no avail. So I decided to ask for help. :)

Is that possible? Any help will be greatly appreciated.

-- 
Jose M. Alcaide
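Neither posting shows the exact invocation attempted, so for concreteness this is the shape of the boot0cfg experiment being described (a sketch only; 0x80 is the first BIOS hard disk, so 0x84 is a guess at a fifth fixed disk, and da0 stands in for the USB stick):

  # install boot0 on the USB stick and force the BIOS drive number it assumes for itself
  boot0cfg -B -b /boot/boot0 -d 0x84 -o setdrv da0

As Bruce explains in the reply that follows, -d together with -o setdrv only overrides the drive number boot0 uses for the disk it lives on; it does not make boot0 jump straight to an arbitrary other disk.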
From owner-freebsd-fs@FreeBSD.ORG Wed Dec 3 12:31:26 2014
Return-Path: Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id C69D3B97 for ; Wed, 3 Dec 2014 12:31:26 +0000 (UTC)
Received: from mail105.syd.optusnet.com.au (mail105.syd.optusnet.com.au [211.29.132.249]) by mx1.freebsd.org (Postfix) with ESMTP id 719DDE73 for ; Wed, 3 Dec 2014 12:31:25 +0000 (UTC)
Received: from c122-106-147-133.carlnfd1.nsw.optusnet.com.au (c122-106-147-133.carlnfd1.nsw.optusnet.com.au [122.106.147.133]) by mail105.syd.optusnet.com.au (Postfix) with ESMTPS id D6E0C1042AAF; Wed, 3 Dec 2014 23:31:16 +1100 (AEDT)
Date: Wed, 3 Dec 2014 23:31:14 +1100 (EST)
From: Bruce Evans
X-X-Sender: bde@besplex.bde.org
To: José María Alcaide
Subject: Re: Using boot0 to redirect booting to another disk?
In-Reply-To: <27E65CD9-4C97-4528-B218-A01EDB1B8CB1@ehu.es>
Message-ID: <20141203223116.J43917@besplex.bde.org>
References: <27E65CD9-4C97-4528-B218-A01EDB1B8CB1@ehu.es>
MIME-Version: 1.0
Cc: freebsd-fs@freebsd.org

On Wed, 3 Dec 2014, José María Alcaide wrote:

> I have an HP Proliant Microserver Gen8. Nice machine, but a picky BIOS. [...]
>
> I wondered whether I could use boot0 to redirect the boot from a USB flash device (a pendrive or, still better, a microSD) to the fifth drive. [...]
>
> Is that possible? Any help will be greatly appreciated.

boot0 wants to chain to the next drive by loading the boot block (which contains the new boot program and partition table). Loading a new boot program is usually exactly what is not wanted. It means that to boot from the fifth drive (Drive 4 (?)) starting from the first drive (Drive 0 (?)), you not only have to hit F5 4 times, but you must put a FreeBSD boot0 on all disks chained through. Even when you know this, it is easy to forget it and chain to nowhere (a data disk with a dummy boot block on it), or better yet, to a disk with a Windows boot loader on it that forcibly boots Windows. The BIOS must support all the disks chained through, and the final boot disk of course.

Your best chance using boot0 is to get the BIOS to boot from the 4th drive, so that the 5th drive is only 1 chaining step away. Removable drives probably wouldn't work well for this. They might be numbered after the 5 fixed drives, so they would be even further away. Perhaps the BIOS renumbers all the drives, especially when you don't want it to.

I don't see how -d drive can work for removable or renumbered drives. It just allows (when -o setdrv is also configured) overriding the default drive number. There is no standard way to determine the current drive number, and it is sometimes misguessed. -d drive -o setdrv gives a way to force a fixed number. But when the drive is removable or renumbered, no fixed number can work. -d is also useless for booting from the 5th drive starting from another drive, except possibly using hacks like giving the drives the same partition table -- then the current partition table is correct for the new drive, and lying to the BIOS about the current drive number makes it load the next boot block (normally boot2) from the new drive. This bug is partly due to boot0 being optimized for space. It uses almost identical code to chain to the next drive as it does to chain to a boot block on the current drive.

More practically, don't use boot0 for this. Boot as far as boot2 from some drive supported for booting by the BIOS, then go from there to the final drive. This requires a tiny FreeBSD file system on the boot drive to hold /boot.config.

Bruce
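To make the last suggestion concrete: /boot.config takes the boot2 syntax bios_drive:interface(unit,[slice,]part)filename (quoted again in the follow-up below), so a one-line file on the tiny UFS boot partition might read (a sketch; whether the fifth disk really appears as BIOS drive 4, unit 4 depends entirely on this BIOS):

  # /boot.config on the boot drive's small UFS 'a' partition
  4:ad(4,a)/boot/loader

boot2 would then load /boot/loader from partition 'a' of that disk and continue the boot from there.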
From owner-freebsd-fs@FreeBSD.ORG Wed Dec 3 13:37:33 2014
Return-Path: Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 7F7682CB for ; Wed, 3 Dec 2014 13:37:33 +0000 (UTC)
Received: from smtp.ehu.es (smtp.lg.ehu.es [158.227.0.66]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 05B118B9 for ; Wed, 3 Dec 2014 13:37:32 +0000 (UTC)
Received: from smtp.ehu.es (localhost.localdomain [127.0.0.1]) by postfix.imss71 (Postfix) with ESMTP id 037A88B8A; Wed, 3 Dec 2014 14:37:29 +0100 (CET)
Received: from ncc-1701.we.lc.ehu.es (ncc-1701.we.lc.ehu.es [158.227.6.85]) by smtp2 (Postfix) with ESMTPSA id D74708B55; Wed, 3 Dec 2014 14:37:28 +0100 (CET)
Content-Type: text/plain; charset=utf-8
Mime-Version: 1.0 (Mac OS X Mail 8.1 \(1993\))
Subject: Re: Using boot0 to redirect booting to another disk?
From: José María Alcaide
In-Reply-To: <20141203223116.J43917@besplex.bde.org>
Date: Wed, 3 Dec 2014 14:37:28 +0100
Message-Id:
References: <27E65CD9-4C97-4528-B218-A01EDB1B8CB1@ehu.es> <20141203223116.J43917@besplex.bde.org>
To: Bruce Evans
X-Mailer: Apple Mail (2.1993)
Cc: freebsd-fs@freebsd.org

El 3/12/2014, a las 13:31, Bruce Evans escribió:

> On Wed, 3 Dec 2014, José María Alcaide wrote:
> [...]
>
> boot0 wants to chain to the next drive by loading the boot block (which contains the new boot program and partition table). Loading a new boot program is usually exactly what is not wanted. [...]

Yes, after reading boot0.S I suspected that it worked in that way. I was even thinking about modifying boot0.S in order to go straight to the fifth drive.

> Your best chance using boot0 is to get the BIOS to boot from the 4th drive, so that the 5th drive is only 1 chaining step away. [...] Perhaps the BIOS renumbers all the drives, especially when you don't want it to.

The BIOS does not permit that. If the SATA AHCI controller is selected as the boot device, it reads the first sector only from the first SATA disk.
> More practically, don't use boot0 for this. Boot as far as boot2 from some drive supported for booting by the BIOS, then go from there to the final drive. This requires a tiny FreeBSD file system on the boot drive to hold /boot.config.

In fact that was my first approach to this problem: I created an MBR table (with an MBR boot sector), an active slice, a BSD label inside, a BSD partition and the boot1+boot2 code, and a tiny file system with /boot.config. However, I was not able to redirect the boot using the syntax

  bios_drive:interface(unit,[slice,]part)filename

Surely I was using it in the wrong way. Also, ideally I would like to use GPT on the fifth disk, and I'm afraid that boot2 doesn't support booting from GPT partitions (or does it?).

Thank you very much!

-- 
Jose M. Alcaide

From owner-freebsd-fs@FreeBSD.ORG Wed Dec 3 15:02:18 2014
Return-Path: Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 7BD04716 for ; Wed, 3 Dec 2014 15:02:18 +0000 (UTC)
Received: from wonkity.com (wonkity.com [67.158.26.137]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "wonkity.com", Issuer "wonkity.com" (not verified)) by mx1.freebsd.org (Postfix) with ESMTPS id 0EC7D606 for ; Wed, 3 Dec 2014 15:02:17 +0000 (UTC)
Received: from wonkity.com (localhost [127.0.0.1]) by wonkity.com (8.14.9/8.14.9) with ESMTP id sB3Eabe0090802 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=NO); Wed, 3 Dec 2014 07:36:37 -0700 (MST) (envelope-from wblock@wonkity.com)
Received: from localhost (wblock@localhost) by wonkity.com (8.14.9/8.14.9/Submit) with ESMTP id sB3Eaan4090799; Wed, 3 Dec 2014 07:36:37 -0700 (MST) (envelope-from wblock@wonkity.com)
Date: Wed, 3 Dec 2014 07:36:36 -0700 (MST)
From: Warren Block
To: José María Alcaide
Subject: Re: Using boot0 to redirect booting to another disk?
In-Reply-To: <27E65CD9-4C97-4528-B218-A01EDB1B8CB1@ehu.es>
Message-ID:
References: <27E65CD9-4C97-4528-B218-A01EDB1B8CB1@ehu.es>
User-Agent: Alpine 2.11 (BSF 23 2013-08-11)
MIME-Version: 1.0
Cc: freebsd-fs@freebsd.org

On Wed, 3 Dec 2014, José María Alcaide wrote:

> I have an HP Proliant Microserver Gen8. Nice machine, but a picky BIOS. [...] The disk controller sees and reports the five disks just fine, but as I said above, there is no way to choose the fifth disk as a boot device.

Because the BIOS assumes that fifth drive is always a CD?
And presumably that fifth connection is lower bandwidth, so swapping one of the ZFS drives with the boot drive could impact array performance.

> I wondered whether I could use boot0 to redirect the boot from a USB flash device (a pendrive or, still better, a microSD) to the fifth drive. [...] So I decided to ask for help. :)

boot0 is very limited, and you are not required to stick with only FreeBSD utilities on the USB drive, since it is just loading the boot manager. Consider one of the more capable boot managers like Plop, SYSLINUX, or Grub2.

http://www.plop.at/en/bootmanager/index.html
http://www.syslinux.org/wiki/index.php/The_Syslinux_Project or sysutils/syslinux
https://www.gnu.org/software/grub/ or sysutils/grub2

From owner-freebsd-fs@FreeBSD.ORG Wed Dec 3 17:44:24 2014
Return-Path: Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id A59B9DD for ; Wed, 3 Dec 2014 17:44:24 +0000 (UTC)
Received: from smtp.ehu.es (smtp.lg.ehu.es [158.227.0.66]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 5B5351C9 for ; Wed, 3 Dec 2014 17:44:23 +0000 (UTC)
Received: from smtp.ehu.es (localhost.localdomain [127.0.0.1]) by postfix.imss71 (Postfix) with ESMTP id 89B4025C6E; Wed, 3 Dec 2014 18:44:18 +0100 (CET)
Received: from [10.0.1.17] (229.83-213-67.dynamic.clientes.euskaltel.es [83.213.67.229]) by smtp1 (Postfix) with ESMTPSA id B8F7124977; Wed, 3 Dec 2014 18:44:17 +0100 (CET)
Content-Type: text/plain; charset=utf-8
Mime-Version: 1.0 (1.0)
Subject: Re: Using boot0 to redirect booting to another disk?
From: José María Alcaide
X-Mailer: iPad Mail (12B435)
In-Reply-To:
Date: Wed, 3 Dec 2014 18:44:18 +0100
Message-Id: <9F8C856D-40DF-44EF-9FA9-6AC7D513C5D7@ehu.es>
References: <27E65CD9-4C97-4528-B218-A01EDB1B8CB1@ehu.es>
To: Warren Block
Cc: "freebsd-fs@freebsd.org"

El 3/12/2014, a las 15:36, Warren Block escribió:

> On Wed, 3 Dec 2014, José María Alcaide wrote:
>
>> I have an HP Proliant Microserver Gen8. Nice machine, but a picky BIOS. [...]
> Because the BIOS assumes that fifth drive is always a CD? And presumably that fifth connection is lower bandwidth, so swapping one of the ZFS drives with the boot drive could impact array performance.

No, the BIOS does not offer any choice among drives connected to the SATA controller in AHCI mode, period. When it tries to boot from the SATA controller, it reads the boot sector solely from the drive installed in the first bay. As I said, it's a picky BIOS.

>> I wondered whether I could use boot0 to redirect the boot from a USB flash device (a pendrive or, still better, a microSD) to the fifth drive. [...]
>
> boot0 is very limited, and you are not required to stick with only FreeBSD utilities on the USB drive, since it is just loading the boot manager. Consider one of the more capable boot managers like Plop, SYSLINUX, or Grub2.

I considered GRUB2, but shortly after I began to read the docs I suffered a "brain fried" exception (and dumped core). I didn't know those other boot managers; thanks for the references.

Cheers,

-- 
Jose M. Alcaide

From owner-freebsd-fs@FreeBSD.ORG Wed Dec 3 19:55:24 2014
Return-Path: Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 1974B9A9 for ; Wed, 3 Dec 2014 19:55:24 +0000 (UTC)
Received: from mail-pa0-f49.google.com (mail-pa0-f49.google.com [209.85.220.49]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id E35D4310 for ; Wed, 3 Dec 2014 19:55:23 +0000 (UTC)
Received: by mail-pa0-f49.google.com with SMTP id eu11so16154202pac.8 for ; Wed, 03 Dec 2014 11:55:17 -0800 (PST)
Received: from Michaels-MacBook-Pro.local (c-98-246-202-204.hsd1.or.comcast.net.
 [98.246.202.204]) by mx.google.com with ESMTPSA id kj3sm7714041pdb.85.2014.12.03.11.55.15 for (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Wed, 03 Dec 2014 11:55:16 -0800 (PST)
Message-ID: <547F6AA2.2050404@callfortesting.org>
Date: Wed, 03 Dec 2014 11:55:14 -0800
From: Michael Dexter
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:24.0) Gecko/20100101 Thunderbird/24.6.0
MIME-Version: 1.0
To: freebsd-fs@freebsd.org
Subject: Strange ACL issue on ZFS
Content-Type: text/plain; charset=UTF-8; format=flowed

Hello all,

I have a v5000 pool under FreeNAS onto which I have copied data from both HFS and NTFS. As such, the data does not appear to have ACLs and is throwing errors:

# ls -l 4.4BSD*
ls: 4.4BSD-Lite2.tar.gz: No such file or directory
-rw-r--r--  1 501  staff  46375814 Sep 18  2004 4.4BSD-Lite2.tar.gz

"Not there, yet there."

# chmod o+r 4.4BSD-Lite2.tar.gz
chmod: 4.4BSD-Lite2.tar.gz: No such file or directory

# getfacl 4.4BSD-Lite2.tar.gz
# file: 4.4BSD-Lite2.tar.gz
# owner: dexter
# group: staff
getfacl: 4.4BSD-Lite2.tar.gz: No such file or directory

Based on what I've read, to set an ACL:

# setfacl -m owner@:rwxpdDaARWcCos:fd:allow 4.4BSD-Lite2.tar.gz
setfacl: 4.4BSD-Lite2.tar.gz: acl_get_file() failed: No such file or directory

Perhaps I cannot set an ACL because I do not have an ACL? I have found that I can copy the data to correct this, but I would prefer not to:

# getfacl 4.4BSD-Lite2.tar.gz.copy
# file: 4.4BSD-Lite2.tar.gz.copy
# owner: root
# group: staff
     owner@:rw-p--aARWcCos:------:allow
     group@:r-----a-R-c--s:------:allow
  everyone@:r-----a-R-c--s:------:allow

(No complaint about it being missing.)

Any suggestions on how I can correct this for a large collection of data?

Thank you,

Michael
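No bulk remedy was posted in the thread, but Michael's own observation that a fresh copy comes back with a sane ACL suggests one (an untested, hypothetical sketch -- the path is made up; it breaks hard links and needs free space, so try it on a test directory first):

  # Re-create every regular file in place so it picks up fresh ZFS attributes.
  find /mnt/tank/data -type f -exec sh -c 'cp -p "$1" "$1.aclfix" && mv "$1.aclfix" "$1"' sh {} \;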
From owner-freebsd-fs@FreeBSD.ORG Wed Dec 3 20:16:52 2014
Return-Path: Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id DD827FC0; Wed, 3 Dec 2014 20:16:52 +0000 (UTC)
Received: from mail.takwa.de (antares.takwa.de [5.9.72.166]) by mx1.freebsd.org (Postfix) with ESMTP id A076D7D1; Wed, 3 Dec 2014 20:16:51 +0000 (UTC)
Received: by mail.takwa.de (Postfix, from userid 65534) id 6D215107; Wed, 3 Dec 2014 21:10:59 +0100 (CET)
Received: from [192.168.10.5] (unknown [62.246.110.10]) by mail.takwa.de (Postfix) with ESMTPSA id 9D9B8105; Wed, 3 Dec 2014 21:10:58 +0100 (CET)
Message-ID: <547F6E54.5020706@takwa.de>
Date: Wed, 03 Dec 2014 21:11:00 +0100
From: Michael Schmiedgen
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:31.0) Gecko/20100101 Thunderbird/31.3.0
MIME-Version: 1.0
To: "stable@freebsd.org" , freebsd-fs@freebsd.org
Subject: ZFS 'mount error 5' hits production server
Content-Type: text/plain; charset=utf-8; format=flowed

Hi list,

today I upgraded one of our servers from 10.0 to 10.1 with freebsd-update. GENERIC kernel, ZFS 2-way mirror, GPT, nothing special. After booting the kernel, when it tries to mount the root file system I get 'cannot mount, error 5' or something similar. Strangely, I got this error a few weeks ago at home, running CURRENT with a custom kernel:

https://lists.freebsd.org/pipermail/freebsd-current/2014-October/052900.html

I created the pools and datasets manually via the console. Has anything changed -- with the legacy mountpoint, or does one now need a boot dataset, or something?
Thanks,
Michael

Configuration is:

gpart show:

=>        34  5860533101  ada0  GPT  (2.7T)
          34         256     1  freebsd-boot  (128K)
         290           6        - free -  (3.0K)
         296     8388608     2  freebsd-swap  (4.0G)
     8388904  5662310400     3  freebsd-zfs  (2.6T)
  5670699304   189833831        - free -  (91G)

=>        34  5860533101  ada1  GPT  (2.7T)
          34         256     1  freebsd-boot  (128K)
         290           6        - free -  (3.0K)
         296     8388608     2  freebsd-swap  (4.0G)
     8388904  5662310400     3  freebsd-zfs  (2.6T)
  5670699304   189833831        - free -  (91G)

zpool get all tank (the output below is pool properties, not zpool status):

NAME  PROPERTY                       VALUE     SOURCE
tank  size                           2.62T     -
tank  capacity                       3%        -
tank  altroot                        -         default
tank  health                         ONLINE    -
tank  guid                           XXX       default
tank  version                        -         default
tank  bootfs                         tank      local
tank  delegation                     on        default
tank  autoreplace                    off       default
tank  cachefile                      -         default
tank  failmode                       wait      default
tank  listsnapshots                  off       default
tank  autoexpand                     off       default
tank  dedupditto                     0         default
tank  dedupratio                     1.00x     -
tank  free                           2.53T     -
tank  allocated                      99.0G     -
tank  readonly                       off       -
tank  comment                        -         default
tank  expandsize                     0         -
tank  freeing                        0         default
tank  fragmentation                  0%        default
tank  leaked                         0         default
tank  feature@async_destroy          enabled   local
tank  feature@empty_bpobj            active    local
tank  feature@lz4_compress           enabled   local
tank  feature@multi_vdev_crash_dump  enabled   local
tank  feature@spacemap_histogram     disabled  local
tank  feature@enabled_txg            disabled  local
tank  feature@hole_birth             disabled  local
tank  feature@extensible_dataset     disabled  local
tank  feature@embedded_data          disabled  local
tank  feature@bookmarks              disabled  local
tank  feature@filesystem_limits      disabled  local

zfs get all tank:

NAME  PROPERTY              VALUE        SOURCE
tank  type                  filesystem   -
tank  creation              Aug 2012     -
tank  used                  98.9G        -
tank  available             2.49T        -
tank  referenced            8.34G        -
tank  compressratio         1.00x        -
tank  mounted               yes          -
tank  quota                 none         default
tank  reservation           none         default
tank  recordsize            128K         default
tank  mountpoint            legacy       local
tank  sharenfs              off          default
tank  checksum              on           default
tank  compression           off          default
tank  atime                 off          local
tank  devices               on           default
tank  exec                  on           default
tank  setuid                on           default
tank  readonly              off          default
tank  jailed                off          default
tank  snapdir               hidden       default
tank  aclmode               discard      default
tank  aclinherit            restricted   default
tank  canmount              on           default
tank  xattr                 off          temporary
tank  copies                1            default
tank  version               5            -
tank  utf8only              off          -
tank  normalization         none         -
tank  casesensitivity       sensitive    -
tank  vscan                 off          default
tank  nbmand                off          default
tank  sharesmb              off          default
tank  refquota              none         default
tank  refreservation        none         default
tank  primarycache          all          default
tank  secondarycache        all          default
tank  usedbysnapshots       4.53G        -
tank  usedbydataset         8.34G        -
tank  usedbychildren        86.1G        -
tank  usedbyrefreservation  0            -
tank  logbias               latency      default
tank  dedup                 off          default
tank  mlslabel              -
tank  sync                  standard     default
tank  refcompressratio      1.00x        -
tank  written               3.10G        -
tank  logicalused           91.7G        -
tank  logicalreferenced     6.71G        -
tank  volmode               default      default
tank  filesystem_limit      none         default
tank  snapshot_limit        none         default
tank  filesystem_count      none         default
tank  snapshot_count        none         default
tank  redundant_metadata    all          default

-- 
___________________________
Michael Schmiedgen, BSc
Senior Software Engineer
Takwa GmbH
Friedrich-List-Str. 36
99096 Erfurt GERMANY
Tel +49 361 6534096
Fax +49 361 6534097
Mail schmiedgen@takwa.de
Web http://www.takwa.de/
___________________________
Amtsgericht Jena HRB 112964
Geschäftsführung: Ingo Buchholz
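A first thing to rule out for this class of root-mount failures (a sketch, not a diagnosis of this particular box; the pool name is taken from the output above): with mountpoint=legacy on the root dataset, the kernel only finds root through the pool's bootfs property or an explicit vfs.root.mountfrom, so /boot/loader.conf should contain something like

  zfs_load="YES"
  vfs.root.mountfrom="zfs:tank"

and the pool side can be checked with

  zpool get bootfs tank

which in the output above is indeed set to 'tank'.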
From owner-freebsd-fs@FreeBSD.ORG Wed Dec 3 20:54:14 2014
Return-Path: Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id 1B85571A for ; Wed, 3 Dec 2014 20:54:14 +0000 (UTC)
Received: from mail-oi0-x22c.google.com (mail-oi0-x22c.google.com [IPv6:2607:f8b0:4003:c06::22c]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id D49DFBB7 for ; Wed, 3 Dec 2014 20:54:13 +0000 (UTC)
Received: by mail-oi0-f44.google.com with SMTP id e131so11526655oig.31 for ; Wed, 03 Dec 2014 12:54:13 -0800 (PST)
Received: by 10.76.0.138 with HTTP; Wed, 3 Dec 2014 12:54:13 -0800 (PST)
Date: Wed, 3 Dec 2014 15:54:13 -0500
Subject: zdb -R broken.
From: Zaphod Beeblebrox
To: freebsd-fs
Content-Type: text/plain; charset=UTF-8
MIME-Version: 1.0

Since zdb -R was crashing on my broken ZFS filesystem, I created a brand-new zfs filesystem to test: new zpool, new zfs filesystem (not the root of the pool), and a single file.
Where zdb -dddd gives:

[1:40:340]root@virtual:/vr1/tmp/diag> less vr1-test-8-dddddddd.txt
Dataset vr1/test [ZPL], ID 381, cr_txg 265793, 2.72M, 8 objects, rootbp DVA[0]=<0:294eccbc000:3000> DVA[1]=<0:6a4e3c07000:3000> [L0 DMU objset] fletcher4 lz4 LE contiguous unique double size=800L/200P birth=265800L/265800P fill=8 cksum=111b51647a:6d5cbdc2e81:1680f8b437c3f:32c54b0caa57b7

    Object  lvl   iblk   dblk  dsize  lsize   %full  type
         8    2    16K   128K  2.52M  2.50M  100.00  ZFS plain file (K=inherit) (Z=inherit)
                                        168   bonus  System attributes
        dnode flags: USED_BYTES USERUSED_ACCOUNTED
        dnode maxblkid: 19
        path    /words
        uid     0
        gid     0
        atime   Mon Nov 24 15:20:15 2014
        mtime   Mon Nov 24 15:20:15 2014
        ctime   Mon Nov 24 15:20:15 2014
        crtime  Mon Nov 24 15:20:15 2014
        gen     265800
        mode    100444
        size    2493514
        parent  4
        links   1
        pflags  40800000004
Indirect blocks:
               0 L1  0:294ecc86000:3000 0:6a4e3bd1000:3000 4000L/400P F=20 B=265800/265800
               0  L0 0:294ec902000:2d000 20000L/20000P F=1 B=265800/265800
           20000  L0 0:294ec92f000:2d000 20000L/20000P F=1 B=265800/265800

(and so on), zdb -R does:

[1:43:343]root@virtual:/vr1/tmp/diag> zdb -AAA -R vr1 0:294ec902000:2d000:g
Found vdev type: raidz
Assertion failed: (zio->io_error == 0 || (zio->io_flags & ZIO_FLAG_CANFAIL)), file /usr/src/cddl/lib/libzpool/../../../sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c, line 3297.
Abort trap (core dumped)

... and specifically, it's the 'g' flag that's bad, but I lack insight as to how to compile all these libraries with debug information:

(gdb) bt
#0  0x0000000801cb26ca in thr_kill () from /lib/libc.so.7
#1  0x0000000801d87149 in abort () from /lib/libc.so.7
#2  0x0000000801920e21 in zio_init () from /lib/libzpool.so.2
#3  0x0000000801927e0e in zbookmark_is_before () from /lib/libzpool.so.2
#4  0x0000000801922df7 in zio_execute () from /lib/libzpool.so.2
#5  0x0000000801927f11 in zbookmark_is_before () from /lib/libzpool.so.2
#6  0x0000000801922df7 in zio_execute () from /lib/libzpool.so.2
#7  0x0000000801927f11 in zbookmark_is_before () from /lib/libzpool.so.2
#8  0x0000000801922df7 in zio_execute () from /lib/libzpool.so.2
#9  0x0000000801927f11 in zbookmark_is_before () from /lib/libzpool.so.2
#10 0x0000000801922df7 in zio_execute () from /lib/libzpool.so.2
#11 0x0000000801927f11 in zbookmark_is_before () from /lib/libzpool.so.2
#12 0x0000000801922df7 in zio_execute () from /lib/libzpool.so.2
#13 0x000000080191b8d9 in taskq_create () from /lib/libzpool.so.2
#14 0x0000000800e814f5 in pthread_create () from /lib/libthr.so.3
#15 0x00007ffff75bc000 in ?? ()
Cannot access memory at address 0x7ffff77bc000

help?
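On the debug-information point: since the abort happens inside libzpool.so, rebuilding just that library with symbols may be enough (a sketch, assuming /usr/src matches the running system; DEBUG_FLAGS is the standard bsd.lib.mk knob, which also disables stripping):

  cd /usr/src/cddl/lib/libzpool
  make DEBUG_FLAGS="-g -O0" clean all install

gdb should then resolve the zio_execute() recursion above to real static function names instead of the nearest exported symbols.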
From owner-freebsd-fs@FreeBSD.ORG Wed Dec 3 22:39:25 2014
Return-Path: Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id ACB8DF91 for ; Wed, 3 Dec 2014 22:39:25 +0000 (UTC)
Received: from mail-oi0-x22a.google.com (mail-oi0-x22a.google.com [IPv6:2607:f8b0:4003:c06::22a]) (using TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 713EB8D6 for ; Wed, 3 Dec 2014 22:39:25 +0000 (UTC)
Received: by mail-oi0-f42.google.com with SMTP id v63so11546693oia.15 for ; Wed, 03 Dec 2014 14:39:24 -0800 (PST)
Received: by 10.76.0.138 with HTTP; Wed, 3 Dec 2014 14:39:24 -0800 (PST)
Date: Wed, 3 Dec 2014 17:39:24 -0500
Subject: Lies that ZFS/ZDB tell us.
From: Zaphod Beeblebrox
To: freebsd-fs
Content-Type: text/plain; charset=UTF-8
MIME-Version: 1.0

So... I'm trying to get down to the ZFS on-disk format and read something. Since I've been having trouble with that, I've made a test zpool, populated it with one zfs filesystem and one file (a copy of usr/dict/words). The zdb -dddd starts off:

       0 L1  0:434000:4000 0:780114000:4000 4000L/600P F=20 B=12/12
       0  L0 0:114000:28000 20000L/20000P F=1 B=12/12
   20000  L0 0:13c000:28000 20000L/20000P F=1 B=12/12

(you can see the whole file at http://pastebin.ca/2881508)

So... the L1 bit is the indirect block, and the L0 entries (I've shown two) say that in txg 12 this file was laid down in 20000 (hex -- 128K) blocks. I'll upload the first two blocks above for your perusal, but the weird thing is that they contain more than the given 128K (20000 hex) of data. Both blocks have text data from 0 to FFFF, then 0's from 10000 to 11FFF, then text from 12000 to 1BFFF, then random bytes (I assume parity) from 1C000 to 1DFFF, then text again from 1E000 to 28FFF.

block1: https://uk.eicat.ca/owncloud/public.php?service=files&t=f91b4abc1debb2b3d240d450b4b8c426
block2: https://uk.eicat.ca/owncloud/public.php?service=files&t=0e82fa1d1635bd26853e018fb0b36189

... so that's 24000 (hex) bytes, not 20000 (the 20000L/20000P bit says it should be 20000 bytes of content). There's also only 2000 (hex) of parity... which (with 8 disks) is only enough to protect E000 bytes of payload.

What gives? Is zdb -R completely broken?
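One piece of on-disk background that explains part of the size arithmetic here (general ZFS layout, not specific to this pool): in a DVA <vdev:offset:size>, the third field is the allocated size (asize), which on raidz includes the parity and padding sectors, while 20000L/20000P are the logical and physical sizes of the data alone -- so a 0x20000 block legitimately occupying a 0x2d000 allocation is expected. And to dump the raw allocated bytes without asking zdb to interpret a gang-block header (the documented meaning of the g flag), the r (raw) flag is the one to try:

  zdb -AAA -R vr1 0:294ec902000:2d000:r > /tmp/block.raw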
From owner-freebsd-fs@FreeBSD.ORG Thu Dec 4 19:05:13 2014
Date: Thu, 4 Dec 2014 14:05:13 -0500
From: Zaphod Beeblebrox
To: freebsd-fs
Subject: ASHIFT=13 by default.

So... I think I answered at least a small part of my confusion with the test ZFS datasets. I created them with 'zpool create vrx raidz /dev/zvol/*' ... and it would appear that the default behavior for this is to assign ASHIFT to be 13 ... which makes these test pools rather unrepresentative of ordinary zfs pools.

That means that many people using test ZFS pools are testing with 8k blocks (not even just 4k blocks). While I can immediately understand what I need to do to get a representative test, I'm posting because other people may have been similarly confused.
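(Two quick checks for anyone wondering what their own test pool got; a sketch, with the zvol path a placeholder:)

# what ashift did the vdevs end up with?
zdb -C vrx | grep ashift
# what logical sector size does a backing zvol advertise to GEOM?
diskinfo -v /dev/zvol/tank/testvol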
From owner-freebsd-fs@FreeBSD.ORG Thu Dec 4 20:23:35 2014
Message-ID: <5480C336.8020901@multiplay.co.uk>
Date: Thu, 04 Dec 2014 20:25:26 +0000
From: Steven Hartland
To: freebsd-fs@freebsd.org
Subject: Re: ASHIFT=13 by default.

If that's the case the underlying storage will be reporting a logical block size of 8k.

Assuming that's the case then this is expected / correct behavior.

On 04/12/2014 19:05, Zaphod Beeblebrox wrote:
> So... I think I answered at least a small part of my confusion with the
> test ZFS datasets. I created them with 'zpool create vrx raidz
> /dev/zvol/*' ... and it would appear that the default behavior for this is
> to assign ASHIFT to be 13 ... which makes these test pools rather
> unrepresentative of ordinary zfs pools.
>
> That means that many people using test ZFS pools are testing with 8k blocks
> (not even just 4k blocks). While I can immediately understand what I need
> to do to get a representative test, I'm posting because other people may
> have been similarly confused.
From owner-freebsd-fs@FreeBSD.ORG Thu Dec 4 22:47:41 2014
Message-ID: <5480E48C.6040801@delphij.net>
Date: Thu, 04 Dec 2014 14:47:40 -0800
From: Xin Li
Reply-To: d@delphij.net
To: Steven Hartland, freebsd-fs@freebsd.org
Subject: Re: ASHIFT=13 by default.

On 12/04/14 12:25, Steven Hartland wrote:
> If that's the case the underlying storage will be reporting a
> logical block size of 8k.
>
> Assuming that's the case then this is expected / correct behavior.

It is: 8k is the default zvol blocksize.

BTW, why bother to create zpools over zvols? Wouldn't it mean double checksumming?

Cheers,

> On 04/12/2014 19:05, Zaphod Beeblebrox wrote:
>> So... I think I answered at least a small part of my confusion
>> with the test ZFS datasets. I created them with 'zpool create
>> vrx raidz /dev/zvol/*' ... and it would appear that the default
>> behavior for this is to assign ASHIFT to be 13 ... which makes
>> these test pools rather unrepresentative of ordinary zfs pools.
>>
>> That means that many people using test ZFS pools are testing with
>> 8k blocks (not even just 4k blocks). While I can immediately
>> understand what I need to do to get a representative test, I'm
>> posting because other people may have been similarly confused.

--
Xin LI    https://www.delphij.net/    FreeBSD - The Power to Serve!
Live free or die
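(A sketch of checking and overriding that default; dataset names are illustrative, and note volblocksize can only be set at creation time:)

# inspect the blocksize of an existing zvol
zfs get volblocksize sorbs/VirtualDisks
# create a new zvol with a different one
zfs create -V 10g -o volblocksize=4k tank/vol4k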
From owner-freebsd-fs@FreeBSD.ORG Thu Dec 4 22:53:00 2014
Date: Thu, 4 Dec 2014 14:52:51 -0800
From: Craig Yoshioka
To: freebsd-fs@freebsd.org
Subject: remove or make reserved ZFS space configurable
Message-Id: <1D872444-CF75-48FF-BFDE-51885A3BBF9B@nimgs.com>

I saw an earlier thread about an update that added (invisible) reserved space for ZFS pools.

I use ZFS as the filesystem for my backup disks. This allows me to occasionally rotate through old backup drives and scrub them to detect damage without having to implement such detection in my backup system. I can then re-backup bad files, or reconstruct a backup drive, if necessary. I configured my backup software to leave ~100GB free (3TB drives), and I was not amused when I recently mounted a set of backup drives and puzzled over why they were now "full", and whether I had messed up and lost data. And even with the reserved space, while troubleshooting, I tried a rm and it failed (so what's the point of the reserved space?).

I don't care about the performance degradation on these drives... these drives are 99% single-write. I agree that having reserved space as a default is probably good, but why isn't this implemented as a configurable option (like a default zpool/zfs quota property)? Instead it looks like I have to wait for an update to make it to release and then set some kernel option? To prevent accidental filling of a pool, I usually create a zfs filesystem on each pool with about ~10% reserved space; is this solution not workable?

Thanks,
-Craig
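(That last workaround can be spelled with stock dataset properties; a minimal sketch, with the pool name and sizes made up for illustration:)

# keep ~10% of a 3TB backup pool permanently set aside
zfs create -o refreservation=300G backup/slack
# or cap the writable dataset directly with a quota
zfs set quota=2.6T backup/data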
From owner-freebsd-fs@FreeBSD.ORG Fri Dec 5 00:09:22 2014
Message-ID: <5480F7AC.7080908@delphij.net>
Date: Thu, 04 Dec 2014 16:09:16 -0800
From: Xin Li
Reply-To: d@delphij.net
To: Bob Friesenhahn
Cc: freebsd-fs@freebsd.org
Subject: Re: ASHIFT=13 by default.

On 12/04/14 16:04, Bob Friesenhahn wrote:
> On Thu, 4 Dec 2014, Xin Li wrote:
>>
>> BTW, why bother to create zpools over zvols? Wouldn't it mean
>> double checksumming?
>
> Clients of the zvol might corrupt the filesystem stored on it.
> The double checksum would still help in that case.

Sorry, I'm not sure I follow -- are you saying that you are exposing the zvol as e.g. an iSCSI extent to a client (which, according to your creation, doesn't seem like the case, since you are creating it from the same kernel...) or to a hypervisor like BHyVe?

Cheers,

--
Xin LI    https://www.delphij.net/    FreeBSD - The Power to Serve!
Live free or die
From owner-freebsd-fs@FreeBSD.ORG Fri Dec 5 00:10:51 2014
Date: Thu, 4 Dec 2014 18:04:02 -0600 (CST)
From: Bob Friesenhahn
To: d@delphij.net
Cc: freebsd-fs@freebsd.org
Subject: Re: ASHIFT=13 by default.
In-Reply-To: <5480E48C.6040801@delphij.net>

On Thu, 4 Dec 2014, Xin Li wrote:
>
> BTW, why bother to create zpools over zvols? Wouldn't it mean double
> checksumming?

Clients of the zvol might corrupt the filesystem stored on it. The double checksum would still help in that case.
Bob
--
Bob Friesenhahn
bfriesen@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/

From owner-freebsd-fs@FreeBSD.ORG Fri Dec 5 00:16:11 2014
Date: Thu, 4 Dec 2014 19:16:10 -0500
From: Zaphod Beeblebrox
To: Steven Hartland
Cc: freebsd-fs
Subject: Re: ASHIFT=13 by default.
In-Reply-To: <5480C336.8020901@multiplay.co.uk>

Well in this case the "space" was a file created with zfs create -V 10g

On Thu, Dec 4, 2014 at 3:25 PM, Steven Hartland wrote:

> If that's the case the underlying storage will be reporting a logical block
> size of 8k.
>
> Assuming that's the case then this is expected / correct behavior.
>
> On 04/12/2014 19:05, Zaphod Beeblebrox wrote:
>
>> So... I think I answered at least a small part of my confusion with the
>> test ZFS datasets. I created them with 'zpool create vrx raidz
>> /dev/zvol/*' ... and it would appear that the default behavior for this is
>> to assign ASHIFT to be 13 ... which makes these test pools rather
>> unrepresentative of ordinary zfs pools.
>>
>> That means that many people using test ZFS pools are testing with 8k
>> blocks (not even just 4k blocks). While I can immediately understand what
>> I need to do to get a representative test, I'm posting because other
>> people may have been similarly confused.
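(So the "disks" were zvols, which default to an 8 KiB volblocksize -- hence ashift=13. For a test pool that behaves like 512-byte-sector disks, the backing zvols can be created explicitly; a sketch, dataset names illustrative:)

# eight 512-byte-sector test volumes, then a raidz pool on top
for i in 0 1 2 3 4 5 6 7; do
    zfs create -V 10g -o volblocksize=512 tank/td$i
done
zpool create vrx raidz /dev/zvol/tank/td?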
From owner-freebsd-fs@FreeBSD.ORG Fri Dec 5 08:13:59 2014
Date: Fri, 05 Dec 2014 08:13:47 +0000
From: Loïc Blot
To: freebsd-fs@freebsd.org
Subject: High Kernel Load with nfsv4

Hi,
I'm trying to create a virtualisation environment based on jails. Those jails are stored under a big ZFS pool on a FreeBSD 9.3 which exports a NFSv4 volume. This NFSv4 volume is mounted on a big hypervisor (2 Xeon E5v3 + 128GB memory and 8 network ports, but only 1 was used at this time).

The problem is simple: my hypervisor runs 6 jails (using approximately 1% CPU and 10GB RAM, and less than 1MB of bandwidth) and works fine at start, but the system slows down and after 2-3 days becomes unusable. When I look at the top command I see 80-100% on system, and commands are very, very slow. Many processes are tagged with nfs_cl*.

I saw that there are TSO issues with igb, so I tried to disable it with sysctl, but the situation wasn't solved.

Does someone have ideas?
I can give you more information if you need.

Thanks in advance.
Regards,

Loïc Blot,
UNIX Systems, Network and Security Engineer
http://www.unix-experience.fr

From owner-freebsd-fs@FreeBSD.ORG Fri Dec 5 14:15:36 2014
Date: Fri, 5 Dec 2014 09:14:26 -0500 (EST)
From: Rick Macklem
To: Loïc Blot
Cc: freebsd-fs@freebsd.org
Subject: Re: High Kernel Load with nfsv4
Message-ID: <581583623.5730217.1417788866930.JavaMail.root@uoguelph.ca>

Loic Blot wrote:
> Hi,
> I'm trying to create a virtualisation environment based on jails.
> Those jails are stored under a big ZFS pool on a FreeBSD 9.3 which
> exports a NFSv4 volume. This NFSv4 volume is mounted on a big
> hypervisor (2 Xeon E5v3 + 128GB memory and 8 network ports, but
> only 1 was used at this time).
>
> The problem is simple: my hypervisor runs 6 jails (using approximately
> 1% CPU and 10GB RAM, and less than 1MB of bandwidth) and works fine
> at start, but the system slows down and after 2-3 days becomes
> unusable. When I look at the top command I see 80-100% on system,
> and commands are very, very slow. Many processes are tagged with
> nfs_cl*.
>
To be honest, I would expect the slowness to be because of slow response from the NFSv4 server, but if you do:
# ps axHl
on a client when it is slow and post that, it would give us some more information on where the client side processes are sitting.

If you also do something like:
# nfsstat -c -w 1
and let it run for a while, that should show you how many RPCs are being done and which ones.

# nfsstat -m
will show you what your mount is actually using.

The only mount option I can suggest trying is "rsize=32768,wsize=32768", since some network environments have difficulties with 64K.

There are a few things you can try on the NFSv4 server side, if it appears that the clients are generating a large RPC load:
- disabling the DRC cache for TCP by setting vfs.nfsd.cachetcp=0
- if the server is seeing a large write RPC load, then "sync=disabled" might help, although it does run a risk of data loss when the server crashes.

Then there are a couple of other ZFS related things (I'm not a ZFS guy, but these have shown up on the mailing lists):
- make sure your volumes are 4K aligned and ashift=12 (in case a drive that uses 4K sectors is pretending to be 512byte sectored)
- never run over 70-80% full if write performance is an issue
- use a zil on an SSD with good write performance

The only NFSv4 thing I can tell you is that it is known that ZFS's algorithm for determining sequential vs random I/O fails for NFSv4 during writing and this can be a performance hit. The only workaround is to use NFSv3 mounts, since file handle affinity apparently fixes the problem and this is only done for NFSv3.

rick

> I saw that there are TSO issues with igb, so I tried to disable it
> with sysctl, but the situation wasn't solved.
>
> Does someone have ideas? I can give you more information if you need.
>
> Thanks in advance.
> Regards,
>
> Loïc Blot,
> UNIX Systems, Network and Security Engineer
> http://www.unix-experience.fr
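(A concrete sketch of two of those knobs; the server name and paths here are placeholders:)

# client: remount with 32K transfers instead of the 64K default
mount -t nfs -o nfsv4,rsize=32768,wsize=32768 nfsserver:/jails /mnt/jails
# server: disable the DRC for TCP mounts
sysctl vfs.nfsd.cachetcp=0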
From owner-freebsd-fs@FreeBSD.ORG Sat Dec 6 01:40:09 2014
Message-id: <54825E70.20900@sorbs.net>
Date: Sat, 06 Dec 2014 02:40:00 +0100
From: Michelle Sullivan
To: freebsd-fs@freebsd.org
Subject: ZFS weird issue...

Here's what happened:

LSI9260-16i, 16x3T SATA drives. 15 LDs each configured as single disk RAID0, last drive a hot spare.

All LDs configured as a ZFS RaidZ2 pool ('sorbs').

Bay/Drive 8 failed, hot spare kicked in. ZFS resilvered. (mfid15 became spare-8)

On reboot mfid9-15 were re-named automatically to mfid8-14 ... ZFS didn't seem to care...

Days later the new drive to replace the dead drive arrived and was inserted. The system refused to re-add it as there was data in the cache, so I rebooted and cleared the cache (as per many web FAQs), then reconfigured it to match the others. Can't do a zpool replace mfid8 because that's already in the pool... (was mfid9); can't use mfid15 because zpool reports it's not part of the config; can't use the uniq-id it received (can't find vdev) ... HELP!! :)

This is what I can see currently ...

Thanks, in advance.

root@colossus:~ # zpool status -v
  pool: VirtualDisks
 state: ONLINE
  scan: none requested
config:

	NAME                       STATE     READ WRITE CKSUM
	VirtualDisks               ONLINE       0     0     0
	  zvol/sorbs/VirtualDisks  ONLINE       0     0     0

errors: No known data errors

  pool: sorbs
 state: DEGRADED
status: One or more devices could not be opened. Sufficient replicas exist for
	the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://illumos.org/msg/ZFS-8000-2Q
  scan: scrub in progress since Fri Dec  5 17:11:29 2014
        2.51T scanned out of 29.9T at 89.4M/s, 89h7m to go
        0 repaired, 8.40% done
config:

	NAME              STATE     READ WRITE CKSUM
	sorbs             DEGRADED     0     0     0
	  raidz2-0        DEGRADED     0     0     0
	    mfid0         ONLINE       0     0     0
	    mfid1         ONLINE       0     0     0
	    mfid2         ONLINE       0     0     0
	    mfid3         ONLINE       0     0     0
	    mfid4         ONLINE       0     0     0
	    mfid5         ONLINE       0     0     0
	    mfid6         ONLINE       0     0     0
	    mfid7         ONLINE       0     0     0
	    spare-8       DEGRADED     0     0     0
	      1702922605  UNAVAIL      0     0     0  was /dev/mfid8
	      mfid14      ONLINE       0     0     0
	    mfid8         ONLINE       0     0     0
	    mfid9         ONLINE       0     0     0
	    mfid10        ONLINE       0     0     0
	    mfid11        ONLINE       0     0     0
	    mfid12        ONLINE       0     0     0
	    mfid13        ONLINE       0     0     0
	spares
	  933862663       INUSE     was /dev/mfid14

errors: No known data errors
root@colossus:~ # uname -a
FreeBSD colossus.sorbs.net 9.2-RELEASE FreeBSD 9.2-RELEASE #0 r255898: Thu Sep 26 22:50:31 UTC 2013 root@bake.isc.freebsd.org:/usr/obj/usr/src/sys/GENERIC amd64
root@colossus:~ # sh ./lsi.sh drives
Slot Number: 0 - Online, Spun Up
Slot Number: 1 - Online, Spun Up
Slot Number: 2 - Online, Spun Up
Slot Number: 3 - Online, Spun Up
Slot Number: 4 - Online, Spun Up
Slot Number: 5 - Online, Spun Up
Slot Number: 6 - Online, Spun Up
Slot Number: 7 - Online, Spun Up
Slot Number: 8 - Online, Spun Up
Slot Number: 9 - Online, Spun Up
Slot Number: 10 - Online, Spun Up
Slot Number: 11 - Online, Spun Up
Slot Number: 12 - Online, Spun Up
Slot Number: 13 - Online, Spun Up
Slot Number: 14 - Online, Spun Up
Slot Number: 15 - Online, Spun Up
root@colossus:~ # sh ./lsi.sh status

Adapter 0 -- Virtual Drive Information: Virtual Drive: 0 (Target Id: 0) Name : RAID Level : Primary-0, Secondary-0, RAID Level Qualifier-0 Size : 2.728 TB Sector Size : 512 Is VD emulated : Yes Parity Size : 0 State : Optimal Strip Size : 64 KB Number Of Drives : 1 Span Depth : 1 Default Cache Policy: WriteBack, ReadAdaptive, Cached, Write Cache OK if Bad BBU Current Cache Policy: WriteBack, ReadAdaptive, Cached, Write Cache OK if Bad BBU Default Access Policy: Read/Write Current Access Policy: Read/Write Disk Cache Policy : Enabled Encryption Type : None Bad Blocks Exist: No Is VD Cached: No Virtual Drive: 1 (Target Id: 1) Name : RAID Level : Primary-0, Secondary-0, RAID Level Qualifier-0 Size : 2.728 TB Sector Size : 512 Is VD emulated : Yes Parity Size : 0 State : Optimal Strip Size : 64 KB Number Of Drives : 1 Span Depth : 1 Default Cache Policy: WriteBack, ReadAdaptive, Cached, Write Cache OK if Bad BBU Current Cache Policy: WriteBack, ReadAdaptive, Cached, Write Cache OK if Bad BBU Default Access Policy: Read/Write Current Access Policy: Read/Write Disk Cache Policy : Enabled Encryption Type : None Bad Blocks Exist: No Is VD Cached: No Virtual Drive: 2 (Target Id: 2) Name : RAID Level : Primary-0, Secondary-0, RAID Level Qualifier-0 Size : 2.728 TB Sector Size : 512 Is VD emulated : Yes Parity Size : 0 State : Optimal Strip Size : 64 KB Number Of Drives : 1 Span Depth : 1 Default Cache Policy: WriteBack,
ReadAdaptive, Cached, Write Cache OK if Bad BBU Current Cache Policy: WriteBack, ReadAdaptive, Cached, Write Cache OK if Bad BBU Default Access Policy: Read/Write Current Access Policy: Read/Write Disk Cache Policy : Enabled Encryption Type : None Bad Blocks Exist: No Is VD Cached: No Virtual Drive: 3 (Target Id: 3) Name : RAID Level : Primary-0, Secondary-0, RAID Level Qualifier-0 Size : 2.728 TB Sector Size : 512 Is VD emulated : Yes Parity Size : 0 State : Optimal Strip Size : 64 KB Number Of Drives : 1 Span Depth : 1 Default Cache Policy: WriteBack, ReadAdaptive, Cached, Write Cache OK if Bad BBU Current Cache Policy: WriteBack, ReadAdaptive, Cached, Write Cache OK if Bad BBU Default Access Policy: Read/Write Current Access Policy: Read/Write Disk Cache Policy : Enabled Encryption Type : None Bad Blocks Exist: No Is VD Cached: No Virtual Drive: 4 (Target Id: 4) Name : RAID Level : Primary-0, Secondary-0, RAID Level Qualifier-0 Size : 2.728 TB Sector Size : 512 Is VD emulated : Yes Parity Size : 0 State : Optimal Strip Size : 64 KB Number Of Drives : 1 Span Depth : 1 Default Cache Policy: WriteBack, ReadAdaptive, Cached, Write Cache OK if Bad BBU Current Cache Policy: WriteBack, ReadAdaptive, Cached, Write Cache OK if Bad BBU Default Access Policy: Read/Write Current Access Policy: Read/Write Disk Cache Policy : Enabled Encryption Type : None Bad Blocks Exist: No Is VD Cached: No Virtual Drive: 5 (Target Id: 5) Name : RAID Level : Primary-0, Secondary-0, RAID Level Qualifier-0 Size : 2.728 TB Sector Size : 512 Is VD emulated : Yes Parity Size : 0 State : Optimal Strip Size : 64 KB Number Of Drives : 1 Span Depth : 1 Default Cache Policy: WriteBack, ReadAdaptive, Cached, Write Cache OK if Bad BBU Current Cache Policy: WriteBack, ReadAdaptive, Cached, Write Cache OK if Bad BBU Default Access Policy: Read/Write Current Access Policy: Read/Write Disk Cache Policy : Enabled Encryption Type : None Bad Blocks Exist: No Is VD Cached: No Virtual Drive: 6 (Target Id: 6) Name : RAID Level : Primary-0, Secondary-0, RAID Level Qualifier-0 Size : 2.728 TB Sector Size : 512 Is VD emulated : Yes Parity Size : 0 State : Optimal Strip Size : 64 KB Number Of Drives : 1 Span Depth : 1 Default Cache Policy: WriteBack, ReadAdaptive, Cached, Write Cache OK if Bad BBU Current Cache Policy: WriteBack, ReadAdaptive, Cached, Write Cache OK if Bad BBU Default Access Policy: Read/Write Current Access Policy: Read/Write Disk Cache Policy : Enabled Encryption Type : None Bad Blocks Exist: No Is VD Cached: No Virtual Drive: 7 (Target Id: 7) Name : RAID Level : Primary-0, Secondary-0, RAID Level Qualifier-0 Size : 2.728 TB Sector Size : 512 Is VD emulated : Yes Parity Size : 0 State : Optimal Strip Size : 64 KB Number Of Drives : 1 Span Depth : 1 Default Cache Policy: WriteBack, ReadAdaptive, Cached, Write Cache OK if Bad BBU Current Cache Policy: WriteBack, ReadAdaptive, Cached, Write Cache OK if Bad BBU Default Access Policy: Read/Write Current Access Policy: Read/Write Disk Cache Policy : Enabled Encryption Type : None Bad Blocks Exist: No Is VD Cached: No Virtual Drive: 8 (Target Id: 8) Name : RAID Level : Primary-0, Secondary-0, RAID Level Qualifier-0 Size : 2.728 TB Sector Size : 512 Is VD emulated : Yes Parity Size : 0 State : Optimal Strip Size : 512 KB Number Of Drives : 1 Span Depth : 1 Default Cache Policy: WriteBack, ReadAhead, Cached, Write Cache OK if Bad BBU Current Cache Policy: WriteBack, ReadAhead, Cached, Write Cache OK if Bad BBU Default Access Policy: Read/Write Current Access Policy: Read/Write 
Disk Cache Policy : Disk's Default Encryption Type : None Bad Blocks Exist: No Is VD Cached: No Virtual Drive: 9 (Target Id: 9) Name : RAID Level : Primary-0, Secondary-0, RAID Level Qualifier-0 Size : 2.728 TB Sector Size : 512 Is VD emulated : Yes Parity Size : 0 State : Optimal Strip Size : 64 KB Number Of Drives : 1 Span Depth : 1 Default Cache Policy: WriteBack, ReadAdaptive, Cached, Write Cache OK if Bad BBU Current Cache Policy: WriteBack, ReadAdaptive, Cached, Write Cache OK if Bad BBU Default Access Policy: Read/Write Current Access Policy: Read/Write Disk Cache Policy : Enabled Encryption Type : None Bad Blocks Exist: No Is VD Cached: No Virtual Drive: 10 (Target Id: 10) Name : RAID Level : Primary-0, Secondary-0, RAID Level Qualifier-0 Size : 2.728 TB Sector Size : 512 Is VD emulated : Yes Parity Size : 0 State : Optimal Strip Size : 64 KB Number Of Drives : 1 Span Depth : 1 Default Cache Policy: WriteBack, ReadAdaptive, Cached, Write Cache OK if Bad BBU Current Cache Policy: WriteBack, ReadAdaptive, Cached, Write Cache OK if Bad BBU Default Access Policy: Read/Write Current Access Policy: Read/Write Disk Cache Policy : Enabled Encryption Type : None Bad Blocks Exist: No Is VD Cached: No Virtual Drive: 11 (Target Id: 11) Name : RAID Level : Primary-0, Secondary-0, RAID Level Qualifier-0 Size : 2.728 TB Sector Size : 512 Is VD emulated : Yes Parity Size : 0 State : Optimal Strip Size : 64 KB Number Of Drives : 1 Span Depth : 1 Default Cache Policy: WriteBack, ReadAdaptive, Cached, Write Cache OK if Bad BBU Current Cache Policy: WriteBack, ReadAdaptive, Cached, Write Cache OK if Bad BBU Default Access Policy: Read/Write Current Access Policy: Read/Write Disk Cache Policy : Enabled Encryption Type : None Bad Blocks Exist: No Is VD Cached: No Virtual Drive: 12 (Target Id: 12) Name : RAID Level : Primary-0, Secondary-0, RAID Level Qualifier-0 Size : 2.728 TB Sector Size : 512 Is VD emulated : Yes Parity Size : 0 State : Optimal Strip Size : 64 KB Number Of Drives : 1 Span Depth : 1 Default Cache Policy: WriteBack, ReadAdaptive, Cached, Write Cache OK if Bad BBU Current Cache Policy: WriteBack, ReadAdaptive, Cached, Write Cache OK if Bad BBU Default Access Policy: Read/Write Current Access Policy: Read/Write Disk Cache Policy : Enabled Encryption Type : None Bad Blocks Exist: No Is VD Cached: No Virtual Drive: 13 (Target Id: 13) Name : RAID Level : Primary-0, Secondary-0, RAID Level Qualifier-0 Size : 2.728 TB Sector Size : 512 Is VD emulated : Yes Parity Size : 0 State : Optimal Strip Size : 64 KB Number Of Drives : 1 Span Depth : 1 Default Cache Policy: WriteBack, ReadAdaptive, Cached, Write Cache OK if Bad BBU Current Cache Policy: WriteBack, ReadAdaptive, Cached, Write Cache OK if Bad BBU Default Access Policy: Read/Write Current Access Policy: Read/Write Disk Cache Policy : Enabled Encryption Type : None Bad Blocks Exist: No Is VD Cached: No Virtual Drive: 14 (Target Id: 14) Name : RAID Level : Primary-0, Secondary-0, RAID Level Qualifier-0 Size : 2.728 TB Sector Size : 512 Is VD emulated : Yes Parity Size : 0 State : Optimal Strip Size : 64 KB Number Of Drives : 1 Span Depth : 1 Default Cache Policy: WriteBack, ReadAdaptive, Cached, Write Cache OK if Bad BBU Current Cache Policy: WriteBack, ReadAdaptive, Cached, Write Cache OK if Bad BBU Default Access Policy: Read/Write Current Access Policy: Read/Write Disk Cache Policy : Enabled Encryption Type : None Bad Blocks Exist: No Is VD Cached: No Virtual Drive: 15 (Target Id: 15) Name : RAID Level : Primary-0, Secondary-0, RAID Level 
Qualifier-0 Size : 2.728 TB Sector Size : 512 Is VD emulated : Yes Parity Size : 0 State : Optimal Strip Size : 256 KB Number Of Drives : 1 Span Depth : 1 Default Cache Policy: WriteBack, ReadAdaptive, Cached, Write Cache OK if Bad BBU Current Cache Policy: WriteBack, ReadAdaptive, Cached, Write Cache OK if Bad BBU Default Access Policy: Read/Write Current Access Policy: Read/Write Disk Cache Policy : Enabled Encryption Type : None Bad Blocks Exist: No Is VD Cached: No

Exit Code: 0x00
###############################################

Adapter 0: Patrol Read Information:

Patrol Read Mode: Auto
Patrol Read Execution Delay: 672 hours
Number of iterations completed: 2
Next start time: 12/13/2014, 03:00:00
Current State: Stopped
Patrol Read on SSD Devices: Disabled

Exit Code: 0x00
###############################################

Check Consistency on VD #0 is not in progress.
Check Consistency on VD #1 is not in progress.
Check Consistency on VD #2 is not in progress.
Check Consistency on VD #3 is not in progress.
Check Consistency on VD #4 is not in progress.
Check Consistency on VD #5 is not in progress.
Check Consistency on VD #6 is not in progress.
Check Consistency on VD #7 is not in progress.
Check Consistency on VD #8 is not in progress.
Check Consistency on VD #9 is not in progress.
Check Consistency on VD #10 is not in progress.
Check Consistency on VD #11 is not in progress.
Check Consistency on VD #12 is not in progress.
Check Consistency on VD #13 is not in progress.
Check Consistency on VD #14 is not in progress.
Check Consistency on VD #15 is not in progress.

Exit Code: 0x00
root@colossus:~ # ls -l /dev/mfi*
crw-r-----  1 root  operator  0x22 Dec  5 17:18 /dev/mfi0
crw-r-----  1 root  operator  0x68 Dec  5 17:18 /dev/mfid0
crw-r-----  1 root  operator  0x69 Dec  5 17:18 /dev/mfid1
crw-r-----  1 root  operator  0x78 Dec  5 17:18 /dev/mfid10
crw-r-----  1 root  operator  0x79 Dec  5 17:18 /dev/mfid11
crw-r-----  1 root  operator  0x7a Dec  5 17:18 /dev/mfid12
crw-r-----  1 root  operator  0x82 Dec  5 17:18 /dev/mfid13
crw-r-----  1 root  operator  0x83 Dec  5 17:18 /dev/mfid14
crw-r-----  1 root  operator  0x84 Dec  5 17:18 /dev/mfid15
crw-r-----  1 root  operator  0x6a Dec  5 17:18 /dev/mfid2
crw-r-----  1 root  operator  0x6b Dec  5 17:18 /dev/mfid3
crw-r-----  1 root  operator  0x6c Dec  5 17:18 /dev/mfid4
crw-r-----  1 root  operator  0x6d Dec  5 17:18 /dev/mfid5
crw-r-----  1 root  operator  0x6e Dec  5 17:18 /dev/mfid6
crw-r-----  1 root  operator  0x75 Dec  5 17:18 /dev/mfid7
crw-r-----  1 root  operator  0x76 Dec  5 17:18 /dev/mfid8
crw-r-----  1 root  operator  0x77 Dec  5 17:18 /dev/mfid9
root@colossus:~ #

--
Michelle Sullivan
http://www.mhix.org/

From owner-freebsd-fs@FreeBSD.ORG Sat Dec 6 01:54:59 2014
Message-ID: <54825F9B.6000009@multiplay.co.uk>
Date: Sat, 06 Dec 2014 01:44:59 +0000
From: Steven Hartland
To: freebsd-fs@freebsd.org
Subject: Re: ZFS weird issue...
In-Reply-To: <54825E70.20900@sorbs.net>

The formatting of your zpool output looks odd, making it not clear if it's your spare which is unavailable or one of your pool drives.

If it's the spare, try zpool remove.

On 06/12/2014 01:40, Michelle Sullivan wrote:
> Here's what happened:
>
> LSI9260-16i, 16x3T SATA drives. 15 LDs each configured as single disk
> RAID0, last drive a hot spare.
>
> All LDs configured as a ZFS RaidZ2 pool ('sorbs').
>
> Bay/Drive 8 failed, hot spare kicked in. ZFS resilvered. (mfid15 became
> spare-8)
>
> On reboot mfid9-15 were re-named automatically to mfid8-14 ... ZFS
> didn't seem to care...
>
> Days later the new drive to replace the dead drive arrived and was
> inserted. The system refused to re-add it as there was data in the
> cache, so I rebooted and cleared the cache (as per many web FAQs), then
> reconfigured it to match the others. Can't do a zpool replace mfid8
> because that's already in the pool... (was mfid9); can't use mfid15
> because zpool reports it's not part of the config; can't use the
> uniq-id it received (can't find vdev) ... HELP!! :)
>
> [... zpool status and controller output quoted in full above, snipped ...]
>
> Exit Code: 0x00
> root@colossus:~ # ls -l /dev/mfi*
> crw-r----- 1 root operator 0x22 Dec 5 17:18 /dev/mfi0
> crw-r----- 1 root operator 0x68 Dec 5 17:18 /dev/mfid0
> crw-r----- 1 root operator 0x69 Dec 5 17:18 /dev/mfid1
> crw-r----- 1 root operator 0x78 Dec 5 17:18 /dev/mfid10
> crw-r----- 1 root operator 0x79 Dec 5 17:18 /dev/mfid11
> crw-r----- 1 root operator 0x7a Dec 5 17:18 /dev/mfid12
> crw-r----- 1 root operator 0x82 Dec 5 17:18 /dev/mfid13
> crw-r----- 1 root operator 0x83 Dec 5 17:18 /dev/mfid14
> crw-r----- 1 root operator 0x84 Dec 5 17:18 /dev/mfid15
> crw-r----- 1 root operator 0x6a Dec 5 17:18 /dev/mfid2
> crw-r----- 1 root operator 0x6b Dec 5 17:18 /dev/mfid3
> crw-r----- 1 root operator 0x6c Dec 5 17:18 /dev/mfid4
> crw-r----- 1 root operator 0x6d Dec 5 17:18 /dev/mfid5
> crw-r----- 1 root operator 0x6e Dec 5 17:18 /dev/mfid6
> crw-r----- 1 root operator 0x75 Dec 5 17:18 /dev/mfid7
> crw-r----- 1 root operator 0x76 Dec 5 17:18 /dev/mfid8
> crw-r----- 1 root operator 0x77 Dec 5 17:18 /dev/mfid9
> root@colossus:~ #
>

From owner-freebsd-fs@FreeBSD.ORG Sat Dec 6 02:21:34 2014
Return-Path:
Delivered-To: freebsd-fs@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by hub.freebsd.org (Postfix) with ESMTPS id 5702AD05
 for ; Sat, 6 Dec 2014 02:21:34 +0000 (UTC)
Received: from hades.sorbs.net (hades.sorbs.net [67.231.146.201])
 by mx1.freebsd.org (Postfix) with ESMTP id 434FECA0
 for ; Sat, 6 Dec 2014 02:21:33 +0000 (UTC)
MIME-version: 1.0
Content-transfer-encoding: 7BIT
Content-type: text/plain; CHARSET=US-ASCII
Received: from isux.com (firewall.isux.com [213.165.190.213])
 by hades.sorbs.net (Oracle Communications Messaging Server 7.0.5.29.0 64bit
 (built Jul 9 2013)) with ESMTPSA id <0NG500B6B2RC1E00@hades.sorbs.net>
 for freebsd-fs@freebsd.org; Fri, 05 Dec 2014 18:26:02 -0800 (PST)
Message-id: <5482682A.7090107@sorbs.net>
Date: Sat, 06 Dec 2014 03:21:30 +0100
From: Michelle Sullivan
User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X; en-US; rv:1.8.1.24)
 Gecko/20100301 SeaMonkey/1.1.19
To: Steven Hartland
Subject: Re: ZFS weird issue...
References: <54825E70.20900@sorbs.net> <54825F9B.6000009@multiplay.co.uk>
In-reply-to: <54825F9B.6000009@multiplay.co.uk>
Cc: freebsd-fs@freebsd.org
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.18-1
Precedence: list
List-Id: Filesystems
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Sat, 06 Dec 2014 02:21:34 -0000

Steven Hartland wrote:
> Formatting for your zpool list looks odd, making it not clear if it's
> your spare which is unavailable or one of your pool drives.
>
> If it's the spare, try zpool remove.

root@colossus:~ # zpool remove sorbs mfid14
cannot remove mfid14: only inactive hot spares, cache, top-level, or log
devices can be removed

The 'UNAVAIL' is the new drive.

Michelle
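[A minimal sketch of the textbook way out of the state shown in the quoted status below, using only documented zpool(8) behaviour; the GUIDs 1702922605 (the vanished original disk) and 933862663 (the in-use spare) come from that status output, while the device name for the re-added disk is a guess, and none of this was confirmed in the thread:

  # detach the missing original from the spare-8 mirror by its guid;
  # zpool detach accepts a guid for a device that no longer exists
  zpool detach sorbs 1702922605
  # per zpool(8), detaching the faulted original makes the hot spare
  # assume the slot permanently and drops it from the spares list;
  # the new physical disk can then be added back as a fresh spare
  # (mfid15 is hypothetical -- use whatever name the new disk gets)
  zpool add sorbs spare mfid15

This is also why 'zpool remove' fails above: remove only accepts inactive hot spares, whereas detach is the operation defined for breaking up a spare-N vdev.]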
> On 06/12/2014 01:40, Michelle Sullivan wrote:
>> Here's what happened:
>>
>> LSI9260-16i, 16x3T SATA drives. 15 LDs each configured as a single-disk
>> RAID0, last drive a hot spare.
>>
>> All LDs configured as a ZFS RAIDZ2 pool ('sorbs').
>>
>> Bay/Drive 8 failed, hot spare kicked in. ZFS resilvered. (mfid15 became
>> spare-8.)
>>
>> On reboot mfid9-15 were re-named automatically to mfid8-14 ... ZFS
>> didn't seem to care...
>>
>> Days later a new drive to replace the dead drive arrived and was
>> inserted. The system refused to re-add it as there was data in the
>> cache, so I rebooted and cleared the cache (as per many web FAQs) and
>> reconfigured it to match the others. Can't do a zpool replace mfid8
>> because that's already in the pool... (was mfid9); can't use mfid15
>> because zpool reports it's not part of the config... can't use the
>> uniq-id it received (can't find vdev) ... HELP!! :)
>>
>> This is what I can see currently ...
>>
>> Thanks, in advance.
>>
>> root@colossus:~ # zpool status -v
>>   pool: VirtualDisks
>>  state: ONLINE
>>   scan: none requested
>> config:
>>
>> NAME STATE READ WRITE CKSUM
>> VirtualDisks ONLINE 0 0 0
>>   zvol/sorbs/VirtualDisks ONLINE 0 0 0
>>
>> errors: No known data errors
>>
>>   pool: sorbs
>>  state: DEGRADED
>> status: One or more devices could not be opened. Sufficient replicas
>>         exist for the pool to continue functioning in a degraded state.
>> action: Attach the missing device and online it using 'zpool online'.
>>    see: http://illumos.org/msg/ZFS-8000-2Q
>>   scan: scrub in progress since Fri Dec 5 17:11:29 2014
>>         2.51T scanned out of 29.9T at 89.4M/s, 89h7m to go
>>         0 repaired, 8.40% done
>> config:
>>
>> NAME STATE READ WRITE CKSUM
>> sorbs DEGRADED 0 0 0
>>   raidz2-0 DEGRADED 0 0 0
>>     mfid0 ONLINE 0 0 0
>>     mfid1 ONLINE 0 0 0
>>     mfid2 ONLINE 0 0 0
>>     mfid3 ONLINE 0 0 0
>>     mfid4 ONLINE 0 0 0
>>     mfid5 ONLINE 0 0 0
>>     mfid6 ONLINE 0 0 0
>>     mfid7 ONLINE 0 0 0
>>     spare-8 DEGRADED 0 0 0
>>       1702922605 UNAVAIL 0 0 0 was /dev/mfid8
>>       mfid14 ONLINE 0 0 0
>>     mfid8 ONLINE 0 0 0
>>     mfid9 ONLINE 0 0 0
>>     mfid10 ONLINE 0 0 0
>>     mfid11 ONLINE 0 0 0
>>     mfid12 ONLINE 0 0 0
>>     mfid13 ONLINE 0 0 0
>> spares
>>   933862663 INUSE was /dev/mfid14
>>
>> errors: No known data errors
>> root@colossus:~ # uname -a
>> FreeBSD colossus.sorbs.net 9.2-RELEASE FreeBSD 9.2-RELEASE #0 r255898:
>> Thu Sep 26 22:50:31 UTC 2013
>> root@bake.isc.freebsd.org:/usr/obj/usr/src/sys/GENERIC amd64
>> root@colossus:~ # sh ./lsi.sh drives
>> Slot Number: 0 - Online, Spun Up
>> Slot Number: 1 - Online, Spun Up
>> Slot Number: 2 - Online, Spun Up
>> Slot Number: 3 - Online, Spun Up
>> Slot Number: 4 - Online, Spun Up
>> Slot Number: 5 - Online, Spun Up
>> Slot Number: 6 - Online, Spun Up
>> Slot Number: 7 - Online, Spun Up
>> Slot Number: 8 - Online, Spun Up
>> Slot Number: 9 - Online, Spun Up
>> Slot Number: 10 - Online, Spun Up
>> Slot Number: 11 - Online, Spun Up
>> Slot Number: 12 - Online, Spun Up
>> Slot Number: 13 - Online, Spun Up
>> Slot Number: 14 - Online, Spun Up
>> Slot Number: 15 - Online, Spun Up
>> root@colossus:~ # sh ./lsi.sh status
>> [... 'lsi.sh status' and 'ls -l /dev/mfi*' output snipped; identical
>> to the listing quoted earlier above ...]
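[The lsi.sh helper quoted throughout this thread is one of the common MegaCli wrapper scripts; as a guess not confirmed anywhere in the thread, its 'status' subcommand corresponds roughly to the following stock MegaCli invocations, which produce the three sections shown in the listings above:

  MegaCli -LDInfo -Lall -aALL          # per-virtual-drive information
  MegaCli -AdpPR -Info -aALL           # patrol read state
  MegaCli -LDCC -ShowProg -LALL -aALL  # check-consistency progress

]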
>>
>
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"

--
Michelle Sullivan
http://www.mhix.org/
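[A closing note on the mfidN renumbering that set this problem up: the controller re-numbers /dev/mfidN when a unit vanishes, but every pool member's on-disk ZFS label still carries its vdev guid, so the mapping can be recovered per device. A minimal sketch using the standard zdb(8) tool; the device path is only an example:

  # print the ZFS label of one member; the 'guid' field identifies the
  # vdev to zpool regardless of what the controller calls the device now
  zdb -l /dev/mfid8 | grep -E 'guid|path'

Building the pool on labelled GPT partitions (gpart add ... -l somename, then /dev/gpt/somename) rather than on raw mfidN units avoids the renumbering ambiguity altogether.]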
From owner-freebsd-fs@FreeBSD.ORG Sat Dec 6 03:15:11 2014
Return-Path:
Delivered-To: freebsd-fs@FreeBSD.org
Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by hub.freebsd.org (Postfix) with ESMTPS id 632C08A3
 for ; Sat, 6 Dec 2014 03:15:11 +0000 (UTC)
Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client did not present a certificate)
 by mx1.freebsd.org (Postfix) with ESMTPS id 4AD4B2A1
 for ; Sat, 6 Dec 2014 03:15:11 +0000 (UTC)
Received: from bugs.freebsd.org ([127.0.1.118])
 by kenobi.freebsd.org (8.14.9/8.14.9) with ESMTP id sB63FBca095245
 for ; Sat, 6 Dec 2014 03:15:11 GMT
 (envelope-from bugzilla-noreply@freebsd.org)
From: bugzilla-noreply@freebsd.org
To: freebsd-fs@FreeBSD.org
Subject: [Bug 194938] [10.1-RC4-p1][panic] panic by setting sysctl
 vfs.zfs.vdev.aggregation_limit (with backtrace)
Date: Sat, 06 Dec 2014 03:15:10 +0000
X-Bugzilla-Reason: AssignedTo
X-Bugzilla-Type: changed
X-Bugzilla-Watch-Reason: None
X-Bugzilla-Product: Base System
X-Bugzilla-Component: kern
X-Bugzilla-Version: 10.1-RC2
X-Bugzilla-Keywords:
X-Bugzilla-Severity: Affects Only Me
X-Bugzilla-Who: rodrigc@FreeBSD.org
X-Bugzilla-Status: New
X-Bugzilla-Priority: ---
X-Bugzilla-Assigned-To: freebsd-fs@FreeBSD.org
X-Bugzilla-Target-Milestone: ---
X-Bugzilla-Flags:
X-Bugzilla-Changed-Fields: cc assigned_to
Message-ID:
In-Reply-To:
References:
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/
Auto-Submitted: auto-generated
MIME-Version: 1.0
X-BeenThere: freebsd-fs@freebsd.org
X-Mailman-Version: 2.1.18-1
Precedence: list
List-Id: Filesystems
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Sat, 06 Dec 2014 03:15:11 -0000

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=194938

Craig Rodrigues changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |rodrigc@FreeBSD.org
           Assignee|freebsd-bugs@FreeBSD.org    |freebsd-fs@FreeBSD.org

--
You are receiving this mail because:
You are the assignee for the bug.
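[For context on the report above: the tunable it names is written at runtime via sysctl(8). A minimal illustration with an arbitrary value, not a reproduction recipe for the panic:

  # vdev I/O aggregation limit in bytes (131072 = 128 KiB, the
  # long-standing default on FreeBSD at the time)
  sysctl vfs.zfs.vdev.aggregation_limit=131072

Per the subject line, a write of exactly this kind triggered the reported panic; the backtrace is attached to the bug.]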