From owner-freebsd-stable@freebsd.org Sun May 5 03:06:57 2019
Subject: Re: ZFS...
From: Michelle Sullivan <michelle@sorbs.net>
Date: Sun, 05 May 2019 13:06:43 +1000
To: Chris
Cc: Pete French, FreeBSD Stable
Message-id: <28BE9C83-FA53-4856-9176-52A6CB113641@sorbs.net>

Michelle Sullivan
http://www.mhix.org/

Sent from my iPad

> On 05 May 2019, at 05:36, Chris wrote:
>
> Sorry, to clarify, Michelle: I do believe your tale of events, I just
> meant that it reads like a tale as it's so unusual.

There are multiple separate instances of problems over 8 years, but the
final killer was without a doubt a catalog of disasters.

>
> I also agree that there probably at this point of time should be more
> zfs tools written for the few situations that do happen when things
> get broken.

This is my thought... though I am in agreement with the devs that a ZFS
"fsck" is not the way to go. I think we (anyone using ZFS) need to have a
"salvage what data you can to elsewhere" type tool...
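Absent such a salvage tool, the closest stock workflow is a forced read-only import, optionally with a transaction-group rewind, followed by copying off whatever mounts. This is only a sketch; the pool name ("storage") and rescue target are placeholders, not details from this thread.

```shell
# Hedged sketch of a read-only salvage attempt; "storage" and the
# rescue path are placeholders, not taken from this thread.
zpool import -o readonly=on -f -N storage      # import without mounting, no writes
zpool import -o readonly=on -f -F -N storage   # if that fails, try a txg rewind
zfs mount -a                                   # mount whatever imported
rsync -a /storage/ /mnt/rescue/                # copy what is readable elsewhere
```

A read-only import avoids making the damage worse while you evacuate data, which is the same "salvage to elsewhere" idea described above.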
I am yet to explore the one written under Windows that a dev sent me, to
see if it works (only because of the logistics of getting a Windows 7
image onto a USB drive that I can put into the server for the recovery
attempt). If it works, a command-line version would be the real answer to
my prayers (and others', I imagine).

>
> Although I still stand by my opinion, I consider ZFS a huge amount more
> robust than UFS. UFS always felt like I only had to sneeze the wrong
> way and I would get issues. There was even one occasion where simply
> installing the OS on its defaults gave me corrupted data on UFS (the
> 9.0 release had a nasty UFS journalling bug which corrupted data
> without any power cuts etc.).

Which I find interesting in itself, as I have a machine running 9.3 which
started life as a 5.x (which tells you how old it is) and it's still
running on the same *Compaq* RAID5 with UFS on it... with the original
drives, with a hot spare that still hasn't been used... and the only
thing done to it hardware-wise is I replaced the motherboard 12 months
ago, as it just stopped POSTing and I couldn't work out what failed...
never had a drive corruption barring the fscks following hard power
issues... it went with me from Brisbane to Canberra, back to Brisbane by
back of car, then to Malta, back from Malta, and is still downstairs...
it's my primary MX server and primary resolver for home and handles
around 5k email per day..

>
> In future I suggest you use mirror if the data matters. I know it
> costs more in capacity for redundancy, but in today's era of large
> drives it's the only real sensible option.

Now it is, and it was on my list of things to start just before this
happened... in fact I have already got 4*6T drives to copy everything
off, ready to rebuild the entire pool with 16*6T drives in a RAID 10
like config... the power/corruption beat me to it.
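For reference, a "RAID 10 like" ZFS pool of 16 drives is a stripe of two-way mirrors. A sketch follows; the pool name and device names (da0..da15) are placeholders, not details from this thread.

```shell
# Stripe of eight 2-way mirrors ("RAID 10 like") from 16 disks.
# Pool name "tank" and device names are placeholders.
zpool create tank \
  mirror da0 da1   mirror da2 da3   mirror da4 da5    mirror da6 da7 \
  mirror da8 da9   mirror da10 da11 mirror da12 da13  mirror da14 da15
```

Each vdev can lose one disk without data loss, and resilvering a mirror reads only one surviving disk, which is part of why mirrors are often recommended over wide raidz for large modern drives.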
>
> On the drive failures you have clearly been quite unlucky, and the
> other stuff is unusual.
>

Drive-failure wise, I think my "luck" has been normal... remember this is
an 8 year old system and drives are only certified for 3 years... getting
5 years when running 24x7 is not bad (especially considering its
workload). The problem has always been how ZFS copes, and this has been
getting better over time, but this metadata corruption is something I
have seen similar to before, and that is where I have a problem with
it... (especially when ZFS devs start making statements about how the
system is always right and everything else is because of hardware, and
if you're not running enterprise hardware you deserve what you get...
then advocating installing it on laptops etc..!)

> Best of luck

Thanks, I'll need it, as my changes to the code did not allow the mount,
though they did allow zdb to parse the drive... guess what I thought was
there in zdb is not the same code in the zfs module.

Michelle

>
>> On Sat, 4 May 2019 at 09:54, Pete French wrote:
>>
>>> On 04/05/2019 01:05, Michelle Sullivan wrote:
>>> New batteries are only $19 on eBay for most battery types...
>>
>> Indeed, my problem is actual physical access to the machine, which I
>> haven't seen in ten years :-) I even have a replacement server sitting
>> behind my desk which we never quite got around to installing. I think
>> the next move it makes will be to the cloud though, so am not too
>> worried.
>>
>> _______________________________________________
>> freebsd-stable@freebsd.org mailing list
>> https://lists.freebsd.org/mailman/listinfo/freebsd-stable
>> To unsubscribe, send any mail to "freebsd-stable-unsubscribe@freebsd.org"

From owner-freebsd-stable@freebsd.org Sun May 5 08:18:09 2019
Subject: Re: ZFS...
From: Pete French <petefrench@ingresso.co.uk>
Date: Sun, 5 May 2019 09:18:03 +0100
To: freebsd-stable@freebsd.org
In-Reply-To: <28BE9C83-FA53-4856-9176-52A6CB113641@sorbs.net>
On 05/05/2019 04:06, Michelle Sullivan wrote:
> Which I find interesting in itself, as I have a machine running 9.3
> which started life as a 5.x (which tells you how old it is) and it's
> still running on the same *Compaq* RAID5 with UFS on it... with the
> original drives, with a hot spare that still hasn't been used... [...]

Heh, OK, that's cool :-) Some of my old HP RAID systems started life as
Compaq ones - you never installed the firmware update which simply
changed the name it printed on boot, then?

My personal server with the dead battery has been going at least 12
years. I had to replace the drives (and HP SAS drives are still silly
prices, sadly), and one of the onboard ether ports has died, but
otherwise it is still going strong.

Not had the long-distance travel of yours, though. I did ship some
machines to Jersey once, by boat, and all the drives which had been on
the crossing failed one by one within a few months of arriving. Makes me
wonder how rough that crossing actually was. Those were in a Compaq RAID
pedestal too. After that I shipped the machines but took the drives in
my hand luggage on planes, always. Actually, not sure they would let me
do that these days; haven't tried in years.

-pete.
From owner-freebsd-stable@freebsd.org Sun May 5 10:50:11 2019
Subject: Re: route based ipsec
From: "Andrey V. Elsukov" <bu7cher@yandex.ru>
Date: Sun, 5 May 2019 13:48:46 +0300
To: KOT MATPOCKuH, stable@freebsd.org
On 02.05.2019 23:16, KOT MATPOCKuH wrote:
> I'm trying to make a full mesh vpn using route based ipsec between
> four hosts under FreeBSD 12.
> I'm using racoon from security/ipsec-tools (as recommended in
> https://www.freebsd.org/doc/handbook/ipsec.html)
> It seems to work, but I have some problems:
> 0. The ipsec-tools port currently does not have a maintainer (C)
> portmaster ... Is this solution really supported? Or should I switch
> to another IKE daemon?

I think it is unmaintained upstream too.

> 1. racoon crashed 3 times with a core dump (twice on one host, once on
> another host):
> (gdb) bt
> #0 0x000000000024417f in isakmp_info_recv ()
> #1 0x00000000002345f4 in isakmp_main ()
> #2 0x00000000002307d0 in isakmp_handler ()
> #3 0x000000000022f10d in session ()
> #4 0x000000000022e62a in main ()
>
> 2. racoon generated 2 SAs for each traffic direction (from hostA to
> hostB). IMHO one SA for each traffic direction should be enough.

Probably you have something wrong in your configuration. Note that
if_ipsec(4) interfaces have their own security policies, and you need
to check that racoon doesn't create additional policies.
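For context, one endpoint of such an if_ipsec(4) tunnel might be brought up as below. This is a sketch only; the addresses and the reqid value (100) are invented placeholders, not taken from this thread.

```shell
# One endpoint of an if_ipsec(4) route-based tunnel; all addresses and
# reqid 100 are placeholders, not taken from this thread.
ifconfig ipsec0 create reqid 100
ifconfig ipsec0 inet tunnel 192.0.2.1 198.51.100.1   # outer (transport) addresses
ifconfig ipsec0 inet 10.100.0.1/30 10.100.0.2        # inner point-to-point addresses
# The IKE daemon must install SAs carrying the same reqid (100) so that
# traffic routed through this interface matches its security policies.
```

With one such interface per peer, a full mesh between four hosts is three interfaces per host, each with a distinct reqid, plus routes pointing at the inner addresses.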
Also, if_ipsec(4) uses the "reqid" parameter to distinguish IPsec SAs
between interfaces. I made a patch to add a special parameter for
racoon, so it is possible to use several if_ipsec(4) interfaces. I think
it should be in the port.
https://lists.freebsd.org/pipermail/freebsd-net/2018-May/050509.html

Also, you can use strongswan; we have used it for some time with no
problems.

> 3. ping and TCP traffic works over ipsec tunnels, but, for example, ...
> I think it may be the result of two SAs for each direction: some
> traffic can be passed to the kernel using the second SA, but can't be
> associated with the proper ipsecX interface.

Yes. Each SA has its SPI, which is used to encrypt/decrypt packets. The
if_ipsec(4) interface uses security policies with a specific reqid; the
IKE daemon should install SAs with the same reqid, then packets going
through the if_ipsec(4) interface can be correctly encrypted and
decrypted.

--
WBR, Andrey V. Elsukov

From owner-freebsd-stable@freebsd.org Sun May 5 13:37:50 2019
Subject: Re: ZFS...
From: Michelle Sullivan <michelle@sorbs.net>
Date: Sun, 05 May 2019 23:37:44 +1000
To: Pete French <petefrench@ingresso.co.uk>, freebsd-stable@freebsd.org

Pete French wrote:
>
> On 05/05/2019 04:06, Michelle Sullivan wrote:
>
>> Which I find interesting in itself, as I have a machine running 9.3
>> which started life as a 5.x (which tells you how old it is) and it's
>> still running on the same *Compaq* RAID5 with UFS on it... [...]
>
> Heh, OK, that's cool :-) Some of my old HP RAID systems started life
> as Compaq ones - you never installed the firmware update which simply
> changed the name it printed on boot, then?

Umm, does it change the big startup "COMPAQ" graphic? If not then
dunno...
if it does... nope :)

> My personal server with the dead battery has been going at least 12
> years. Had to replace the drives (and HP SAS drives are still silly
> prices sadly), one of the onboard ether ports has died, but otherwise
> still going strong.

IIRC I've put 3 new clock batteries in over the years... and it's all
SCSI... 18GB (no SAS on the machine) :P ... (in fact, 32-bit and not
capable of driving a SAS card - unless you can get PCI or ISA SAS cards
:P )

> Not had the long distance travel of yours though. I did ship some
> machines to Jersey once, by boat, and all the drives which had been
> on the crossing failed one by one within a few months of arriving.
> Makes me wonder how rough that crossing actually was.

The biggest issue I had was the idiots who unloaded the container at
Customs.. not saying much except they loaded it backwards (literally)...
a 3KVA UPS (with batteries in it) was put at the top, and by the time it
got from Botany to me it had made its way to the bottom...

> Those were in a Compaq RAID pedestal too. After that I shipped the
> machines but took the drives in my hand luggage on planes, always.
> Actually, not sure they would let me do that these days; haven't tried
> in years.

Good question.
--
Michelle Sullivan
http://www.mhix.org/

From owner-freebsd-stable@freebsd.org Sun May 5 21:46:23 2019
Subject: Re: ZFS...
From: Don Lewis <truckman@FreeBSD.org>
Date: Sun, 5 May 2019 14:46:19 -0700 (PDT)
To: Michelle Sullivan
Cc: "N.J. Mann", freebsd-stable

On 3 May, Michelle Sullivan wrote:
>
> Michelle Sullivan
> http://www.mhix.org/
> Sent from my iPad
>
>> On 03 May 2019, at 03:18, N.J. Mann wrote:
>>
>> Hi,
>>
>> On Friday, May 03, 2019 03:00:05 +1000 Michelle Sullivan wrote:
>>>>> I am sorry to hear about your loss of data, but where does the
>>>>> 11kV come from? I can understand 415V, i.e. two phases in contact,
>>>>> but the type of overhead lines in the pictures you reference are
>>>>> three phase, each typically 240V to neutral and 415V between two
>>>>> phases.
>>>>>
>>>> Bottom lines on the power pole are normal 240/415 .. top lines are
>>>> the 11kV distribution network.
>>>
>>> Oh, and just so you know, it's sorta impossible to get 415 down a
>>> 240V connection.
>>
>> No it is not.
As I said, if two phases come into contact you can >> have 415v between live and neutral. >>=20 >>=20 >=20 > You=A2re not an electrician then.. the connection point on my house has > the earth connected to the return on the pole and that also connected > to the ground stake (using 16mm copper). You=A2d have to cut that link > before dropping a phase on the return to get 415 past the distribution > board... sorta impossible... cut the ground link first then it=A2s > possible... but as every connection has the same, that=A2s a lot of > ground links to cut to make it happen... unless you drop the return on > both sizes of your pole and your ground stake and then drop a phase on > that floating terminal ... A friend had a similar catastrophic UPS failure several years ago. In her case utility power was 120V single-phase, or 240V hot to hot. Neutral was bonded to ground at the meter box. Under normal circumstances, any current imbalance between the two hot legs returns to the utility distribution transformer center tap over the neutral wire. In her case, the neutral connection failed at the pole end of her power line. In that case, the imbalance current was forced to return via the ground rod outside her house and then through some combination of the ground rods at neighboring houses and the transformer ground connection at the base of the pole. Any resistance in this path will reduce the hot to neutral voltage of the heavily loaded side and increase the voltage by the same amount on the lightly loaded side. Fire code specifies a maximum 25 ohm ground resistance, but it seems this is seldom actually measured. In addition her house was old, so there is no telling what the ground resistance actually was. If we assume a 25 ohm resistance, it only takes 1 amp of imbalance current to increase the voltage on the lightly loaded side by 25V. At that rate, it doesn't require much to exceed the continuous maximum voltage rating of the protective MOVs in the UPS. 
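The floating-neutral arithmetic above is easy to sanity-check. The sketch below uses assumed figures: the 25 ohm fire-code maximum from the paragraph, a hypothetical 2 A imbalance, 120 V nominal, and a 130 V MOV continuous rating (a common spec, not one stated in the thread):

```shell
#!/bin/sh
# Back-of-envelope check of the floating-neutral scenario above.
# All figures are assumptions, not measurements from the incident.
r_ground=25       # ohms: fire-code maximum ground resistance
i_imbalance=2     # amps: assumed load imbalance between the hot legs
v_nominal=120     # volts: nominal leg voltage
v_mov_max=130     # volts: a common MOV continuous rating

v_shift=$((r_ground * i_imbalance))     # volts moved between the legs
v_light_leg=$((v_nominal + v_shift))    # lightly loaded leg rises
echo "shift=${v_shift}V light_leg=${v_light_leg}V"   # shift=50V light_leg=170V
[ "$v_light_leg" -gt "$v_mov_max" ] && echo "MOV continuous rating exceeded"
```

Even a modest imbalance blows well past the MOV's continuous rating, which matches the outcome Don describes.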
Once you get past that point, the magic smoke escapes.

The UPS was actually a spare that I had lent her.  I thought about
repairing it by replacing the MOVs after I got it back from her, but I
abandoned that plan after I opened the UPS and found its insides heavily
coated with a layer of conductive-looking soot.  Two of the MOVs were
pretty much obliterated.  The third was intact, but charred a bit by its
neighbors.

From owner-freebsd-stable@freebsd.org Mon May 6 09:24:52 2019
Date: Mon, 6 May 2019 11:24:35 +0200
From: "Patrick M. Hausen" <hausen@punkt.de>
Subject: Re: ZFS...
To: Michelle Sullivan
Cc: Karl Denninger, freebsd-stable@freebsd.org
Hi!

> Am 01.05.2019 um 02:14 schrieb Michelle Sullivan:
> And the irony is the FreeBSD policy to default to ZFS on new installs
> using the complete drive.. even when there is only one disk available
> and regardless of the CPU or RAM class... with one USB stick I have
> around here it attempted to use ZFS on one of my laptops.

But *any* filesystem other than ZFS on a single disk and non-ECC memory
is worse!  So what's gained by defaulting back to UFS in these cases?

There's the edge case of embedded/very low memory systems, but people
who build these probably know what they are doing.  And of course I use
UFS in VMs running on a host with ZFS … depending on whether I need the
snapshot/replication features in the guest or not.

Kind regards,
Patrick
-- 
punkt.de GmbH			Internet - Dienstleistungen - Beratung
Kaiserallee 13a			Tel.: 0721 9109-0 Fax: -100
76133 Karlsruhe			info@punkt.de	http://punkt.de
AG Mannheim 108285		Gf: Juergen Egeling

From owner-freebsd-stable@freebsd.org Mon May 6 09:27:24 2019
Date: Mon, 6 May 2019 11:27:18 +0200
From: "Patrick M. Hausen" <hausen@punkt.de>
Subject: Re: ZFS...
To: Walter Cramer
Cc: Michelle Sullivan, freebsd-stable@freebsd.org, Karl Denninger
Hi!

> Am 30.04.2019 um 18:07 schrieb Walter Cramer:
> With even a 1Gbit ethernet connection to your main system, savvy use
> of (say) rsync (net/rsync in Ports), and the sort of "know your data /
> divide & conquer" tactics that Karl mentions, you should be able to
> complete initial backups (on both backup servers) in <1 month.  After
> that - rsync can generally do incremental backups far, far faster.

ZFS can do incremental snapshots and send/receive much faster than
rsync can on the file level.  And e.g. FreeNAS comes with all the bells
and whistles already in place - just a matter of point and click to
replicate one set of datasets on one server to another one …

*Local* replication is a piece of cake today, if you have the hardware.
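The snapshot-plus-incremental-send workflow Patrick is referring to looks roughly like this. The pool, dataset, and host names (tank/data, backup/data, backuphost) are hypothetical examples, and the sketch only prints the zfs commands so the flow is visible rather than running them:

```shell
#!/bin/sh
# Rough sketch of periodic ZFS replication via incremental send/receive.
# All names here are made-up examples; a real script would also rotate
# the snapshot names between runs.
prev="tank/data@repl-prev"    # last snapshot already present on both sides
now="tank/data@repl-new"      # snapshot taken on this run

echo "zfs snapshot $now"
# Only the blocks changed between $prev and $now cross the wire; the very
# first replication would use a full 'zfs send' without -i.
echo "zfs send -i $prev $now | ssh backuphost zfs receive -F backup/data"
```

This is why send/receive beats file-level rsync for large, mostly-static datasets: the delta is computed at the block level from the snapshots, with no directory-tree walk.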
Kind regards,
Patrick
-- 
punkt.de GmbH			Internet - Dienstleistungen - Beratung
Kaiserallee 13a			Tel.: 0721 9109-0 Fax: -100
76133 Karlsruhe			info@punkt.de	http://punkt.de
AG Mannheim 108285		Gf: Juergen Egeling

From owner-freebsd-stable@freebsd.org Mon May 6 12:23:27 2019
Date: Mon, 6 May 2019 08:23:20 -0400 (EDT)
From: Walter Cramer <wfc@mintsol.com>
To: "Patrick M. Hausen"
Cc: Michelle Sullivan, freebsd-stable@freebsd.org, Karl Denninger
Subject: Re: ZFS...
On Mon, 6 May 2019, Patrick M. Hausen wrote:

> Hi!
>
>> Am 30.04.2019 um 18:07 schrieb Walter Cramer:
>> With even a 1Gbit ethernet connection to your main system, savvy use
>> of (say) rsync (net/rsync in Ports), and the sort of "know your data /
>> divide & conquer" tactics that Karl mentions, you should be able to
>> complete initial backups (on both backup servers) in <1 month.  After
>> that - rsync can generally do incremental backups far, far faster.
>
> ZFS can do incremental snapshots and send/receive much faster than
> rsync on the file level.  And e.g. FreeNAS comes with all the bells and
> whistles already in place - just a matter of point and click to
> replicate one set of datasets on one server to another one …
>

True.  But I was making a brief suggestion to Michelle - who does not
seem to be a trusting fan of ZFS - hoping that she might actually
implement it, or something similar.  Or at least that an
already-tediously-long mailing list thread would end.  Rsync is good
enough for her situation, and would let her use UFS on her off-site
backup servers, if she preferred that.

> *Local* replication is a piece of cake today, if you have the hardware.
>
> Kind regards,
> Patrick
> -- 
> punkt.de GmbH			Internet - Dienstleistungen - Beratung
> Kaiserallee 13a			Tel.: 0721 9109-0 Fax: -100
> 76133 Karlsruhe			info@punkt.de	http://punkt.de
> AG Mannheim 108285		Gf: Juergen Egeling

From owner-freebsd-stable@freebsd.org Mon May 6 14:14:26 2019
Date: Tue, 07 May 2019 00:14:09 +1000
From: Michelle Sullivan <michelle@sorbs.net>
Subject: Re: ZFS...
To: Walter Cramer
Cc: "Patrick M.
Hausen", freebsd-stable@freebsd.org, Karl Denninger

Michelle Sullivan
http://www.mhix.org/
Sent from my iPad

> On 06 May 2019, at 22:23, Walter Cramer wrote:
>
>> On Mon, 6 May 2019, Patrick M. Hausen wrote:
>>
>> Hi!
>>
>>> Am 30.04.2019 um 18:07 schrieb Walter Cramer:
>>> With even a 1Gbit ethernet connection to your main system, savvy use
>>> of (say) rsync (net/rsync in Ports), and the sort of "know your data
>>> / divide & conquer" tactics that Karl mentions, you should be able to
>>> complete initial backups (on both backup servers) in <1 month.  After
>>> that - rsync can generally do incremental backups far, far faster.
>>
>> ZFS can do incremental snapshots and send/receive much faster than
>> rsync on the file level.  And e.g. FreeNAS comes with all the bells
>> and whistles already in place - just a matter of point and click to
>> replicate one set of datasets on one server to another one …
>>
> True.  But I was making a brief suggestion to Michelle - who does not
> seem to be a trusting fan of ZFS - hoping that she might actually
> implement it,

I implemented it for 8 years.

It's great on enterprise hardware in enterprise DCs (except when it
isn't, but that's a rare occurrence.. as I have found).. but it is (in
my experience) an absolute f***ing disaster waiting to happen on any
consumer hardware... how many laptops do you know with more than one
drive?

My issue here (and not really what the blog is about) is that FreeBSD is
defaulting to it.  FreeBSD used to be targeted at enterprise and devs
(which is where I found it)... however the last few years have been a
big push into the consumer (compete with Linux) market.. so you have an
OS that concerns itself with the desktop, and upgrade after upgrade
after upgrade (not just patching security issues, but upgrades as well..
just like Windows and OSX)... I get it.. the money is in the keeping of
the user base.. but then you install a file system which is dangerous on
a single disk by default... dangerous because it's trusted and "can't
fail".. until it goes titsup.com and then the entire drive is lost and
all the data on it.. it's the double standard... advocate you need ECC
RAM, multiple vdevs etc, then single-drive it.. sorry.. which one is it?
Gaaaaaarrrrrrrgggghhhhhhh!

Back to installing Windows 7 (yes really!) and the ZFS file recovery
tool someone made... (yes really!)

> or something similar.  Or at least an already-tediously-long mailing
> list thread would end.  Rsync is good enough for her situation, and
> would let her use UFS on her off-site backup servers, if she preferred
> that.

Upon reflection, as most data on the drive is write once, read lots -
yes, I should have.

This machine is mostly used as a large media server; media is put on, it
is cataloged and moved around to logical places, then it never changes
until it's deleted.

I made the mistake of moving stuff onto it to reshuffle the main data
server when it died... I have no backups of some critical data - that's
why I'm p**sed.. it's not FreeBSD's or ZFS's fault, it's my own
stupidity for trusting ZFS would be good for a couple of weeks whilst I
got everything organized...

Michelle

>>
>> *Local* replication is a piece of cake today, if you have the hardware.
>>
>> Kind regards,
>> Patrick
>> -- 
>> punkt.de GmbH			Internet - Dienstleistungen - Beratung
>> Kaiserallee 13a			Tel.: 0721 9109-0 Fax: -100
>> 76133 Karlsruhe			info@punkt.de	http://punkt.de
>> AG Mannheim 108285		Gf: Juergen Egeling
> _______________________________________________
> freebsd-stable@freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-stable
> To unsubscribe, send any mail to "freebsd-stable-unsubscribe@freebsd.org"

From owner-freebsd-stable@freebsd.org Tue May 7 00:53:10 2019
Date: Mon, 6 May 2019 20:53:01 -0400
From: Paul Mather <paul@gromit.dlib.vt.edu>
Subject: Re: ZFS...
To: Michelle Sullivan
Cc: freebsd-stable
On May 6, 2019, at 10:14 AM, Michelle Sullivan wrote:

> My issue here (and not really what the blog is about) FreeBSD is
> defaulting to it.

You've said this at least twice now in this thread, so I'm assuming
you're asserting it to be true.

As of FreeBSD 12.0-RELEASE (and all earlier releases), FreeBSD does NOT
default to ZFS.

The images distributed by freebsd.org, e.g., Vagrant boxes, ARM images,
EC2 instances, etc., contain disk images where FreeBSD resides on UFS.
For example, here's what you end up with when you launch a 12.0-RELEASE
instance using defaults on AWS (us-east-1 region: ami-03b0f822e17669866):

root@freebsd:/usr/home/ec2-user # gpart show
=>       3  20971509  ada0  GPT  (10G)
         3       123     1  freebsd-boot  (62K)
       126  20971386     2  freebsd-ufs   (10G)

And this is what you get when you "vagrant up" the
freebsd/FreeBSD-12.0-RELEASE box:

root@freebsd:/home/vagrant # gpart show
=>        3  65013755  ada0  GPT  (31G)
          3       123     1  freebsd-boot  (62K)
        126   2097152     2  freebsd-swap  (1.0G)
    2097278  62914560     3  freebsd-ufs   (30G)
   65011838      1920        - free -      (960K)

When you install from the 12.0-RELEASE ISO, the first option listed
during the partitioning stage is "Auto (UFS) Guided Disk Setup".  The
last option listed---after "Open a shell and partition by hand"---is
"Auto (ZFS) Guided Root-on-ZFS".  In other words, you have to skip over
UFS and manual partitioning to select the ZFS install option.

So, I don't see what evidence there is that FreeBSD is defaulting to
ZFS.  It hasn't up to now.  Will FreeBSD 13 default to ZFS?

> FreeBSD used to be targeted at enterprise and devs (which is where I
> found it)... however the last few years have been a big push into the
> consumer (compete with Linux) market.. so you have an OS that concerns
> itself with the desktop and upgrade after upgrade after upgrade (not
> just patching security issues, but upgrades as well.. just like
> Windows and OSX)... I get it.. the money is in the keeping of the user
> base.. but then you install a file system which is dangerous on a
> single disk by default... dangerous because it's trusted and "can't
> fail".. until it goes titsup.com and then the entire drive is lost and
> all the data on it.. it's the double standard... advocate you need ECC
> RAM, multiple vdevs etc, then single drive it.. sorry.. which one is
> it?  Gaaaaaarrrrrrrgggghhhhhhh!

As people have pointed out elsewhere in this thread, it's false to claim
that ZFS is unsafe on consumer hardware.  It's no less safe than UFS on
single-disk setups.

Because anecdote is not evidence, I will refrain from saying, "I've lost
far more data on UFS than I have on ZFS (especially when SUJ was shaking
out its bugs)..."  >;-)

What I will agree with is that, probably due to its relative youth, ZFS
has fewer forensics/data recovery tools than UFS.  I'm sure this will
improve as time goes on.  (I even posted a link to an article describing
someone adding ZFS support to a forensics toolkit earlier in this
thread.)

Cheers,
Paul.
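For anyone wanting to verify what their own installer run produced, the root partition type can be pulled out of 'gpart show' output. The filter below is just an illustration, run here over the AWS example quoted above; on a live FreeBSD system you would pipe 'gpart show' itself through the same awk:

```shell
#!/bin/sh
# Classify the root filesystem from 'gpart show'-style output.
# The sample text is the AWS 12.0-RELEASE layout from the message.
gpart_output='=>       3  20971509  ada0  GPT  (10G)
         3       123     1  freebsd-boot  (62K)
       126  20971386     2  freebsd-ufs   (10G)'

# Pick the partition-type field of the first freebsd-ufs/freebsd-zfs line.
fs=$(printf '%s\n' "$gpart_output" | awk '/freebsd-(ufs|zfs)/ {print $4; exit}')
echo "root partition type: $fs"   # root partition type: freebsd-ufs
```

On the images Paul lists, this prints freebsd-ufs, matching his point that the distributed images do not use ZFS.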
From owner-freebsd-stable@freebsd.org Tue May 7 05:02:08 2019
Date: Tue, 07 May 2019 15:02:01 +1000
From: Michelle Sullivan <michelle@sorbs.net>
Subject: Re: ZFS...
To: Paul Mather
Cc: freebsd-stable

Michelle Sullivan
http://www.mhix.org/
Sent from my iPad

> On 07 May 2019, at 10:53, Paul Mather wrote:
>
>> On May 6, 2019, at 10:14 AM, Michelle Sullivan wrote:
>>
>> My issue here (and not really what the blog is about) FreeBSD is
>> defaulting to it.
>
> You've said this at least twice now in this thread, so I'm assuming
> you're asserting it to be true.
>
> As of FreeBSD 12.0-RELEASE (and all earlier releases), FreeBSD does
> NOT default to ZFS.
>
> The images distributed by freebsd.org, e.g., Vagrant boxes, ARM
> images, EC2 instances, etc., contain disk images where FreeBSD resides
> on UFS.
> For example, here's what you end up with when you launch a 12.0-RELEASE instance using defaults on AWS (us-east-1 region: ami-03b0f822e17669866):
>
> root@freebsd:/usr/home/ec2-user # gpart show
> =>       3  20971509  ada0  GPT  (10G)
>          3       123     1  freebsd-boot  (62K)
>        126  20971386     2  freebsd-ufs   (10G)
>
> And this is what you get when you "vagrant up" the freebsd/FreeBSD-12.0-RELEASE box:
>
> root@freebsd:/home/vagrant # gpart show
> =>        3  65013755  ada0  GPT  (31G)
>           3       123     1  freebsd-boot  (62K)
>         126   2097152     2  freebsd-swap  (1.0G)
>     2097278  62914560     3  freebsd-ufs   (30G)
>    65011838      1920        - free -      (960K)
>
> When you install from the 12.0-RELEASE ISO, the first option listed during the partitioning stage is "Auto (UFS) Guided Disk Setup". The last option listed---after "Open a shell and partition by hand"---is "Auto (ZFS) Guided Root-on-ZFS". In other words, you have to skip over UFS and manual partitioning to select the ZFS install option.
>
> So, I don't see what evidence there is that FreeBSD is defaulting to ZFS. It hasn't up to now. Will FreeBSD 13 default to ZFS?

Umm.. well I install by memory stick images and I had a 10.2 and an 11.0, both of which had root on ZFS as the default.. I had to manually change them. I haven’t looked at anything later... so did something change? Am I in cloud cuckoo land?

>> FreeBSD used to be targeted at enterprise and devs (which is where I found it)... however the last few years have been a big push into the consumer (compete with Linux) market.. so you have an OS that concerns itself with the desktop and upgrade after upgrade after upgrade (not just patching security issues, but upgrades as well.. just like windows and OSX)... I get it.. the money is in the keeping of the user base.. but then you install a file system which is dangerous on a single disk by default... dangerous because it’s trusted and “can’t fail” ..
>> until it goes titsup.com and then the entire drive is lost and all the data on it.. it’s the double standard... advocate you need ECC ram, multiple vdevs etc, then single-drive it.. sorry.. which one is it? Gaaaaaarrrrrrrgggghhhhhhh!
>
>
> As people have pointed out elsewhere in this thread, it's false to claim that ZFS is unsafe on consumer hardware. It's no less safe than UFS on single-disk setups.
>
> Because anecdote is not evidence, I will refrain from saying, "I've lost far more data on UFS than I have on ZFS (especially when SUJ was shaking out its bugs)..." >;-)
>
> What I will agree with is that, probably due to its relative youth, ZFS has fewer forensics/data recovery tools than UFS. I'm sure this will improve as time goes on. (I even posted a link to an article describing someone adding ZFS support to a forensics toolkit earlier in this thread.)

The problem I see with that statement is that the zfs dev mailing lists constantly and consistently follow the line of, the data is always right, there is no need for a “fsck” (which I actually get) but it’s used to shut down every thread... the irony is I’m now installing windows 7 and SP1 on a usb stick (well it’s actually installed, but sp1 isn’t finished yet) so I can install a zfs data recovery tool which reports to be able to “walk the data” to retrieve all the files... the irony eh... install windows7 on a usb stick to recover a FreeBSD-installed zfs filesystem... will let you know if the tool works, but as it was recommended by a dev I’m hopeful... have another array (with zfs I might add) loaded and ready to go... if the data recovery is successful I’ll blow away the original machine and work out what OS and drive setup will be safe for the data in the future.
I might even put FreeBSD and zfs back on it, but if I do it won’t be in the current Zraid2 config.

> Cheers,
>
> Paul.

From owner-freebsd-stable@freebsd.org Tue May 7 13:03:06 2019
Subject: Re: ZFS...
From: Paul Mather <paul@gromit.dlib.vt.edu>
Date: Tue, 7 May 2019 09:03:00 -0400
To: Michelle Sullivan
Cc: freebsd-stable
On May 7, 2019, at 1:02 AM, Michelle Sullivan wrote:

>> On 07 May 2019, at 10:53, Paul Mather wrote:
>>
>>> On May 6, 2019, at 10:14 AM, Michelle Sullivan wrote:
>>>
>>> My issue here (and not really what the blog is about) FreeBSD is defaulting to it.
>>
>> You've said this at least twice now in this thread so I'm assuming you're asserting it to be true.
>>
>> As of FreeBSD 12.0-RELEASE (and all earlier releases), FreeBSD does NOT default to ZFS.
>>
>> The images distributed by freebsd.org, e.g., Vagrant boxes, ARM images, EC2 instances, etc., contain disk images where FreeBSD resides on UFS. For example, here's what you end up with when you launch a 12.0-RELEASE instance using defaults on AWS (us-east-1 region: ami-03b0f822e17669866):
>>
>> root@freebsd:/usr/home/ec2-user # gpart show
>> =>       3  20971509  ada0  GPT  (10G)
>>          3       123     1  freebsd-boot  (62K)
>>        126  20971386     2  freebsd-ufs   (10G)
>>
>> And this is what you get when you "vagrant up" the freebsd/FreeBSD-12.0-RELEASE box:
>>
>> root@freebsd:/home/vagrant # gpart show
>> =>        3  65013755  ada0  GPT  (31G)
>>           3       123     1  freebsd-boot  (62K)
>>         126   2097152     2  freebsd-swap  (1.0G)
>>     2097278  62914560     3  freebsd-ufs   (30G)
>>    65011838      1920        - free -      (960K)
>>
>> When you install from the 12.0-RELEASE ISO, the first option listed during the partitioning stage is "Auto (UFS) Guided Disk Setup". The last option listed---after "Open a shell and partition by hand"---is "Auto (ZFS) Guided Root-on-ZFS". In other words, you have to skip over UFS and manual partitioning to select the ZFS install option.
>>
>> So, I don't see what evidence there is that FreeBSD is defaulting to ZFS. It hasn't up to now. Will FreeBSD 13 default to ZFS?
>
> Umm..
> well I install by memory stick images and I had a 10.2 and an 11.0
> both of which had root on zfs as the default.. I had to manually change
> them. I haven’t looked at anything later... so did something change? Am
> I in cloud cuckoo land?

I don't know about that, but you may well be misremembering. I just pulled down the 10.2 and 11.0 installers from http://ftp-archive.freebsd.org/pub/FreeBSD-Archive/old-releases and in both cases the choices listed in the "Partitioning" step are the same as in the current 12.0 installer: "Auto (UFS) Guided Disk Setup" is listed first and selected by default. "Auto (ZFS) Guided Root-on-ZFS" is listed last (you have to skip past other options, such as manually partitioning by hand, to select it).

I'm confident in saying that ZFS is (or was) not the default partitioning option in either 10.2 or 11.0 as officially released by FreeBSD. Did you use a custom installer you made yourself when installing 10.2 or 11.0?

>
>>> FreeBSD used to be targeted at enterprise and devs (which is where I
>>> found it)... however the last few years have been a big push into the
>>> consumer (compete with Linux) market.. so you have an OS that concerns
>>> itself with the desktop and upgrade after upgrade after upgrade (not
>>> just patching security issues, but upgrades as well.. just like windows
>>> and OSX)... I get it.. the money is in the keeping of the user base..
>>> but then you install a file system which is dangerous on a single disk
>>> by default... dangerous because it’s trusted and “can’t fail” .. until
>>> it goes titsup.com and then the entire drive is lost and all the data
>>> on it.. it’s the double standard... advocate you need ECC ram,
>>> multiple vdevs etc, then single-drive it.. sorry.. which one is it?
>>> Gaaaaaarrrrrrrgggghhhhhhh!
>>
>> As people have pointed out elsewhere in this thread, it's false to claim
>> that ZFS is unsafe on consumer hardware. It's no less safe than UFS on
>> single-disk setups.
>>
>> Because anecdote is not evidence, I will refrain from saying, "I've lost
>> far more data on UFS than I have on ZFS (especially when SUJ was shaking
>> out its bugs)..." >;-)
>>
>> What I will agree with is that, probably due to its relative youth, ZFS
>> has fewer forensics/data recovery tools than UFS. I'm sure this will
>> improve as time goes on. (I even posted a link to an article describing
>> someone adding ZFS support to a forensics toolkit earlier in this
>> thread.)
>
> The problem I see with that statement is that the zfs dev mailing lists
> constantly and consistently follow the line of, the data is always
> right, there is no need for a “fsck” (which I actually get) but it’s used
> to shut down every thread... the irony is I’m now installing windows 7
> and SP1 on a usb stick (well it’s actually installed, but sp1 isn’t
> finished yet) so I can install a zfs data recovery tool which reports to
> be able to “walk the data” to retrieve all the files... the irony eh...
> install windows7 on a usb stick to recover a FreeBSD-installed zfs
> filesystem... will let you know if the tool works, but as it was
> recommended by a dev I’m hopeful... have another array (with zfs I might
> add) loaded and ready to go... if the data recovery is successful I’ll
> blow away the original machine and work out what OS and drive setup will
> be safe for the data in the future. I might even put FreeBSD and zfs
> back on it, but if I do it won’t be in the current Zraid2 config.

There is no more irony in installing a data recovery tool to recover a trashed ZFS pool than there is in installing one to recover a trashed UFS file system. No file system is bulletproof, which is why everyone I know recommends a backup/disaster recovery strategy commensurate with the value you place on your data. There WILL be some combination of events that will lead to irretrievable data loss. Your extraordinary sequence of mishaps apparently met the threshold for ZFS on your setup.
I don't see how any of this leads to the conclusion that ZFS is "dangerous" to use as a file system. What I believe is dangerous is relying on a post-mortem crash data recovery methodology as a substitute for a backup strategy for data that, in hindsight, is considered important enough to keep. No matter how resilient ZFS or UFS may be, they are no substitute for backups when it comes to data you care about. (File system resiliency will not protect you, e.g., from ransomware or other malicious or accidental acts of data destruction.)

Cheers,

Paul.

From owner-freebsd-stable@freebsd.org Tue May 7 13:47:05 2019
Subject: Re: ZFS...
To: freebsd-stable@freebsd.org
From: Karl Denninger <karl@denninger.net>
Date: Tue, 7 May 2019 08:46:26 -0500
On 5/7/2019 00:02, Michelle Sullivan wrote:

> The problem I see with that statement is that the zfs dev mailing lists constantly and consistently follow the line of, the data is always right, there is no need for a “fsck” (which I actually get) but it’s used to shut down every thread... the irony is I’m now installing windows 7 and SP1 on a usb stick (well it’s actually installed, but sp1 isn’t finished yet) so I can install a zfs data recovery tool which reports to be able to “walk the data” to retrieve all the files... the irony eh... install windows7 on a usb stick to recover a FreeBSD-installed zfs filesystem... will let you know if the tool works, but as it was recommended by a dev I’m hopeful... have another array (with zfs I might add) loaded and ready to go... if the data recovery is successful I’ll blow away the original machine and work out what OS and drive setup will be safe for the data in the future. I might even put FreeBSD and zfs back on it, but if I do it won’t be in the current Zraid2 config.

Meh. Hardware failure is, well, hardware failure. Yes, power-related failures are hardware failures.

Never mind the potential for /software/ failures. Bugs are, well, bugs. And they're a real thing. Never had the shortcomings of UFS bite you on an "unexpected" power loss? Well, I have. Is ZFS absolutely safe against any such event? No, but it's safe*r*.
I've yet to have ZFS lose an entire pool due to something bad happening, but the same basic risk (entire filesystem being gone) has occurred more than once in my IT career with other filesystems -- including UFS, lowly MSDOS and NTFS, never mind their predecessors all the way back to floppy disks and the first 5Mb Winchesters. I learned a long time ago that two is one and one is none when it comes to data, and WHEN two becomes one you SWEAT, because that second failure CAN happen at the worst possible time.

As for RaidZ2 .vs. mirrored, it's not as simple as you might think. Mirrored vdevs can only lose one member per mirror set, unless you use three-member mirrors. That sounds insane but actually it isn't in certain circumstances, such as very-read-heavy and high-performance-read environments.

The short answer is that a 2-way mirrored set is materially faster on reads but has no acceleration on writes, and can lose one member per mirror. If the SECOND one fails before you can resilver, and that resilver takes quite a long while if the disks are large, you're dead. However, if you do six drives as a 2x3 way mirror (that is, 3 vdevs each of a 2-way mirror) you now have three parallel data paths going at once and potentially six for reads -- and performance is MUCH better. A 3-way mirror can lose two members (and could be organized as 3x2) but obviously requires lots of drive slots, 3x as much *power* per gigabyte stored (and you pay for power twice; once to buy it and again to get the heat out of the room where the machine is.)

Raidz2 can also lose 2 drives without being dead. However, it doesn't get any of the read performance improvement *and* takes a write performance penalty; Z2 has more write penalty than Z1 since it has to compute and write two parity entries instead of one, although in theory at least it can parallel those parity writes -- albeit at the cost of drive bandwidth congestion (e.g.
interfering with other accesses to the same disk at the same time.) In short, RaidZx performs about as "well" as the *slowest* disk in the set. So why use it (particularly Z2) at all? Because for "N" drives you get the protection of a 3-way mirror and *much* more storage. A six-member RaidZ2 setup returns ~4Tb of usable space, where a 2-way mirror returns 3Tb and a 3-way mirror (which provides the same protection against drive failure as Z2) gives you only *half* the storage. IMHO ordinary Raidz isn't worth the trade-offs, but Z2 frequently is.

In addition, more spindles means more failures, all other things being equal, so if you need "X" TB of storage and organize it as 3-way mirrors you now have twice as many physical spindles, which means on average you'll take twice as many faults. If performance is more important then the choice is obvious. If density is more important (that is, a lot or even most of the data is rarely accessed at all) then the choice is fairly simple too. In many workloads you have some of both, and thus the correct choice is a hybrid arrangement; that's what I do here, because I have a lot of data that is rarely-to-never accessed and read-only but also have some data that is frequently accessed and frequently written. One size does not fit all in such a workload.

MOST systems, by the way, have this sort of paradigm (a huge percentage of the data is rarely read and never written) but it doesn't become economic or sane to try to separate them until you get well into the terabytes of storage range and a half-dozen or so physical volumes. There's a very clean argument that prior to that point, but with greater than one drive, mirrored is always the better choice.

Note that if you have an *adapter* go insane (and as I've noted here I've had it happen TWICE in my IT career!) then *all* of the data on the disks served by that adapter is screwed.
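[As an aside, the capacity arithmetic in the mirror-vs-RaidZ2 comparison above can be sketched in a few lines of sh. The helper name `usable_tb` is made up for illustration, it assumes N equal drives sized in whole terabytes, and it ignores ZFS metadata/slop overhead:]

```shell
#!/bin/sh
# usable_tb NDRIVES SIZE_TB LAYOUT
# Rough usable capacity for a pool of NDRIVES equal drives of SIZE_TB each.
usable_tb() {
    n=$1; size=$2; layout=$3
    case "$layout" in
        mirror2) echo $(( n / 2 * size )) ;;     # 2-way mirrors: half the raw space
        mirror3) echo $(( n / 3 * size )) ;;     # 3-way mirrors: one third of the raw space
        raidz2)  echo $(( (n - 2) * size )) ;;   # raidz2: two drives' worth of parity
    esac
}

# Six 1Tb drives, as in the example above:
usable_tb 6 1 raidz2    # 4
usable_tb 6 1 mirror2   # 3
usable_tb 6 1 mirror3   # 2
```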
It doesn't make a bit of difference what filesystem you're using in that scenario, and thus you had better have a backup scheme and make sure it works as well, never mind software bugs or administrator stupidity ("dd" as root to the wrong target, for example, will reliably screw you every single time!)

For a single-disk machine ZFS is no *less* safe than UFS and provides a number of advantages, with arguably the most important being easily-used snapshots. Not only does this simplify backups, since coherency during the backup is never at issue and incremental backups become fast and easily done; in addition, boot environments make roll-forward and even *roll-back* reasonable to implement for software updates -- a critical capability if you ever run an OS version update and something goes seriously wrong with it. If you've never had that happen then consider yourself blessed; it's NOT fun to manage in a UFS environment and often winds up leading to a "restore from backup" scenario. (To be fair it can be with ZFS too if you're foolish enough to upgrade the pool before being sure you're happy with the new OS rev.)
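[The snapshot-based incremental backup pattern described above can be sketched as a shell fragment. Everything here is illustrative: the pool name `tank`, the snapshot labels, and the `ZFS` variable, which defaults to `echo zfs` so the fragment prints the commands it would run rather than touching a real pool:]

```shell
#!/bin/sh
# Dry-run by default: set ZFS=zfs on a real system to actually execute.
ZFS=${ZFS:-"echo zfs"}

# Take a snapshot, then emit the matching send command.
# If a previous snapshot name is given, send only the delta since it
# (incremental); otherwise send the full stream.
snapshot_and_send() {
    pool=$1; snap=$2; prev=$3
    $ZFS snapshot "${pool}@${snap}"
    if [ -n "$prev" ]; then
        $ZFS send -i "@${prev}" "${pool}@${snap}"
    else
        $ZFS send "${pool}@${snap}"
    fi
}

snapshot_and_send tank daily.1          # full stream
snapshot_and_send tank daily.2 daily.1  # incremental since daily.1
```

[On a real system the `zfs send` output would be piped into `zfs receive` on the backup pool or remote host; because the snapshot is an atomic, read-only point-in-time view, coherency during the backup is never at issue.]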
-- 
Karl Denninger
karl@denninger.net
/The Market Ticker/
/[S/MIME encrypted email preferred]/
[ ]( http://3v5ld.r.ca.d.sendibm2.com/mk/cl/f/MwTmj8avbP2PnoEjwym0VqtR0J5t= OLMjI2yBZf-lVRfNTQhxET6ub5svlJC08Yd5CgVqMyfGAlyuqTHrlmVMeLTKksEnNf7fhrWLl-i= LGlXSo-UXrK5SF839xu-kd-1wU8KryosgUB09IKGU8XVcb6J9oiUmx66W2YcR ) In an effort to include more students, preview the activity to more teacher= s, and more accurately name the SCIENCE WORLD CHAMPIONS, we are reaching ou= t to invite specific schools worldwide to participate! With 2019 being the initial competition year, there were several regions un= served or underserved by the qualifiers - including homeschools, internatio= nal schools, and regions affected by weather or logistics cancellations. Co= nsequently, we had many students worldwide who scored very well on the 2019= QUIZ, but did not get an opportunity to qualify. In an effort to welcome a= ll these various exclusions =E2=80=93 any student who did not have an oppor= tunity to participate in a 2019 science championship qualifier may attend t= he 2019 SCIENCE WORLD FAIR! Contact us for more details!=C2=A0 =C2=A0 [ www.scienceworldfair.org ]( http://3v5ld.r.ca.d.sendibm2.com/= mk/cl/f/zIl41JY75mNazrQpWJZIz5E3sVn3CdLkdNvXi7bJBgOXcFvsA2UCqsA2JJiVFtVRsjG= 5D70XUWGil0GqNdNMH5UP1y3eW-lYCb3LgoB26wqhcwq0vY4YPPV8kRz0XP2Ct1nCGaGV11cRdg= _oeS8oc4ThTCLdXsauZvU6N779VQ ) =C2=A0 =C2=A0 ACE QUIZBOWL PO Box 172501 Spartanburg, SC 29301 [ i ]( # )nfo@aceqb.com (864)-336-3235 =C2=A0 =C2=A0 =C2=A0 This email was sent to stable@freebsd.org =C2=A0 [ Unsubscribe here ]( http://3v5ld.r.ca.d.sendibm2.com/mk/un/BBiL7d9v0J_JBX= szpS_R12jbo3POyZVUd4UpSWWFXMMoasn19U9YMmMwuLOKOfUrDC9o7eG0GwjCMfLH3cShhfPP0= G4Cl1z316P50L4gcR2A1bgGfTuyVfgrKlY1dMEuTzzvWknI5oVDOA ) =C2=A0 Sent by [ ]( http://3v5ld.r.ca.d.sendibm2.com/mk/cl/f/rCKDGDY18pbuuIsaQSxr6KOlKDZE= Dxo7lRAsXglJaEOb5MEPnbg5DPOPYLSqtbnL-UdpRBnSktWjN2OrlNtE_ytoH7Bc6q2zYrFSy_1= XnhPFelxU3CXvrjbMMtibbe7_TTPXmr4pc0VHlnZKLXk7d7B0gUDvgAvKJrszlNzjudv-UDJjCU= 1osTqDyhXThon2LhnFDmtkHo7eoIyCjwVorJypuOxDcov2EKioSb-Bdo9wJ4PeIYLVRXVpshXk7= 
896lb1pNqJjqaEe3u46cRjl29VQylKEAOwHvFwVpvi08w ) =C2=A0 =C2=A0 =C2=A9 2019 ACE =C2=A0 From owner-freebsd-stable@freebsd.org Tue May 7 20:23:09 2019 Return-Path: Delivered-To: freebsd-stable@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2610:1c1:1:606c::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id F2D901592C49 for ; Tue, 7 May 2019 20:23:08 +0000 (UTC) (envelope-from matpockuh@gmail.com) Received: from mailman.ysv.freebsd.org (mailman.ysv.freebsd.org [IPv6:2001:1900:2254:206a::50:5]) by mx1.freebsd.org (Postfix) with ESMTP id 66857730B4 for ; Tue, 7 May 2019 20:23:08 +0000 (UTC) (envelope-from matpockuh@gmail.com) Received: by mailman.ysv.freebsd.org (Postfix) id 26D301592C48; Tue, 7 May 2019 20:23:08 +0000 (UTC) Delivered-To: stable@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2610:1c1:1:606c::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 0442E1592C47 for ; Tue, 7 May 2019 20:23:08 +0000 (UTC) (envelope-from matpockuh@gmail.com) Received: from mail-ot1-x342.google.com (mail-ot1-x342.google.com [IPv6:2607:f8b0:4864:20::342]) (using TLSv1.3 with cipher TLS_AES_128_GCM_SHA256 (128/128 bits) server-signature RSA-PSS (4096 bits) client-signature RSA-PSS (2048 bits) client-digest SHA256) (Client CN "smtp.gmail.com", Issuer "GTS CA 1O1" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id CB804730B2 for ; Tue, 7 May 2019 20:23:06 +0000 (UTC) (envelope-from matpockuh@gmail.com) Received: by mail-ot1-x342.google.com with SMTP id v17so8163245otp.13 for ; Tue, 07 May 2019 13:23:06 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=mime-version:references:in-reply-to:from:date:message-id:subject:to :cc; bh=gzpNFxkg1UswRBfSzu8QSr+IsyxE1eqIrGaEgC8LcjU=; b=s/U+puE4rB33KI/iBAKyo2Ag/syoObysMZ1g3QBMdAK+9eeDQaIl2xkGol2bHzVYgB c2c58YMCqjZa4qSCL42XwMGI8OcwsOo9hwhK2sTJ8hsYSvjeRwtzULQdwA+K9Hzd3JXL 
YVKhyGewWDkPINfD5G++tUTm8/ofPmCh/2u6qr4w3beFtCL979+bCLWDIkz49C1rxwNZ g2ka8OsMZxeybLFPzlx6viIlRRkTW+aViTsTUZQ83KU1S8zpHIhQDs+DJ4lku1oNk22X VU8E2lw2dGuIk0XNMJazsDua4I9rT0MAjT5Fe53qRF3Z5mzaHN8OIB4S6UParf+VIORb 9ong== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:mime-version:references:in-reply-to:from:date :message-id:subject:to:cc; bh=gzpNFxkg1UswRBfSzu8QSr+IsyxE1eqIrGaEgC8LcjU=; b=GAOIGVqIxFvOmwPqM0yvxwPZ6vK+TlKg8dWg+qLYO4FJgM6Eov4t/ui9OkhzVIEmCC tMxgZWW+svRvrLBdJ7XEt97EzSMuyrPMFlz6h1q7wBe9XKdw4bRbn2TqjVkrcedgHgq5 87IsMH15sG/F7ZZbJL2KlyLq2xOhpFB03fUbhMbUhRscTIbkVJoBKKhHTFTQcYgsigvF ygMPd7CMp2tkzlP4Wa2RfFtcI50cZXYQEPAA4akFmsVSlAUWCYNtThIRhiDj79OQE8yW 8kZwdL7/cW8xu2LjjKoIvBTv4JbA2G/3rdFN5dJOA979EA7ecLIiJdV5E5vcQh8+JCTB HwDQ== X-Gm-Message-State: APjAAAXGTLLR4ndcGEOMbSVxbNi3UR39ycyAEHkcWYvVjaIW6kSdckic ciTfYq0i71CyRnLUxqEny0XTKo8zT+aVIUrjbbc05YSyW0M= X-Google-Smtp-Source: APXvYqxIdTXeAfHvHut8Z3nqCAd4FaNYQ6QJXPu0FlZ1fRUxctqzqrYRfzfhxPzNQ6oQ3EkrRUBjUXg9exZeAKgsLFk= X-Received: by 2002:a9d:7f99:: with SMTP id t25mr23358197otp.303.1557260586067; Tue, 07 May 2019 13:23:06 -0700 (PDT) MIME-Version: 1.0 References: In-Reply-To: From: KOT MATPOCKuH Date: Tue, 7 May 2019 23:23:22 +0300 Message-ID: Subject: Re: route based ipsec To: "Andrey V. 
Elsukov" Cc: stable@freebsd.org Content-Type: multipart/mixed; boundary="0000000000000ee3ad058851fc07" X-Rspamd-Queue-Id: CB804730B2 X-Spamd-Bar: ---- Authentication-Results: mx1.freebsd.org; dkim=pass header.d=gmail.com header.s=20161025 header.b=s/U+puE4; dmarc=pass (policy=none) header.from=gmail.com; spf=pass (mx1.freebsd.org: domain of matpockuh@gmail.com designates 2607:f8b0:4864:20::342 as permitted sender) smtp.mailfrom=matpockuh@gmail.com X-Spamd-Result: default: False [-4.54 / 15.00]; TO_DN_SOME(0.00)[]; FREEMAIL_FROM(0.00)[gmail.com]; R_SPF_ALLOW(-0.20)[+ip6:2607:f8b0:4000::/36]; HAS_ATTACHMENT(0.00)[]; DKIM_TRACE(0.00)[gmail.com:+]; RCPT_COUNT_TWO(0.00)[2]; DMARC_POLICY_ALLOW(-0.50)[gmail.com,none]; MX_GOOD(-0.01)[cached: alt3.gmail-smtp-in.l.google.com]; FREEMAIL_TO(0.00)[yandex.ru]; FROM_EQ_ENVFROM(0.00)[]; RCVD_TLS_LAST(0.00)[]; MIME_TRACE(0.00)[0:+,1:+,2:+]; FREEMAIL_ENVFROM(0.00)[gmail.com]; ASN(0.00)[asn:15169, ipnet:2607:f8b0::/32, country:US]; DWL_DNSWL_NONE(0.00)[gmail.com.dwl.dnswl.org : 127.0.5.0]; ARC_NA(0.00)[]; NEURAL_HAM_MEDIUM(-1.00)[-1.000,0]; R_DKIM_ALLOW(-0.20)[gmail.com:s=20161025]; FROM_HAS_DN(0.00)[]; NEURAL_HAM_SHORT(-0.64)[-0.636,0]; NEURAL_HAM_LONG(-1.00)[-1.000,0]; MIME_GOOD(-0.10)[multipart/mixed,multipart/alternative,text/plain]; PREVIOUSLY_DELIVERED(0.00)[stable@freebsd.org]; TO_MATCH_ENVRCPT_SOME(0.00)[]; RCVD_IN_DNSWL_NONE(0.00)[2.4.3.0.0.0.0.0.0.0.0.0.0.0.0.0.0.2.0.0.4.6.8.4.0.b.8.f.7.0.6.2.list.dnswl.org : 127.0.5.0]; IP_SCORE(-0.89)[ip: (1.07), ipnet: 2607:f8b0::/32(-3.22), asn: 15169(-2.26), country: US(-0.06)]; RCVD_COUNT_TWO(0.00)[2] X-Content-Filtered-By: Mailman/MimeDel 2.1.29 X-BeenThere: freebsd-stable@freebsd.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Production branch of FreeBSD source code List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 07 May 2019 20:23:09 -0000 --0000000000000ee3ad058851fc07 Content-Type: text/plain; charset="UTF-8" 
Hello!

On Sun, 5 May 2019 at 13:50, Andrey V. Elsukov wrote:

>> 0. The ipsec-tools port currently does not have a maintainer (C) portmaster
>> ... Is this solution really supported? Or should I switch to another
>> IKE daemon?
> I think it is unmaintained upstream too.

But why is it still recommended in the FreeBSD handbook?

>> 1. racoon crashed 3 times with a core dump (2 times on one host, 1 time
>> on another host):
>> (gdb) bt
>> #0 0x000000000024417f in isakmp_info_recv ()
>> #1 0x00000000002345f4 in isakmp_main ()
>> #2 0x00000000002307d0 in isakmp_handler ()
>> #3 0x000000000022f10d in session ()
>> #4 0x000000000022e62a in main ()
>>
>> 2. racoon generated 2 SAs for each traffic direction (from hostA to hostB).
>> IMHO one SA for each traffic direction should be enough.
> Probably you have something wrong in your configuration.

I don't understand what in my configuration could make a running daemon
dump core... I've attached a sample racoon.conf. Can you check it for
possible problems?

Also on one host I got a crash in another function:

(gdb) bt
#0 0x000000000024717f in privsep_init ()
#1 0x00000000002375f4 in inscontacted ()
#2 0x00000000002337d0 in isakmp_plist_set_all ()
#3 0x000000000023210d in isakmp_ph2expire ()
#4 0x000000000023162a in isakmp_ph1delete ()
#5 0x000000000023110b in isakmp_ph2resend ()
#6 0x00000008002aa000 in ?? ()
#7 0x0000000000000000 in ?? ()

> Note that if_ipsec(4) interfaces have their own security policies and you
> need to check that racoon doesn't create additional policies. Also,
> if_ipsec(4) uses the "reqid" parameter to distinguish IPsec SAs between
> interfaces. I made a patch to add a special parameter for racoon, so it is
> possible to use several if_ipsec(4) interfaces. I think it should be in
> the port.
> https://lists.freebsd.org/pipermail/freebsd-net/2018-May/050509.html
> This patch is already applied to the ports tree.

But it's not enough in my case :(

> Also you can use strongswan; we have used it for some time and have no
> problems.

Okay. Thank you! I will try strongswan.

I tried to replace rsasig authentication with psk, but without luck.
I again got two IPsec SAs for each direction....

--
MATPOCKuH

[Attachment: racoon.conf, decoded from base64]

path certificate "/etc/ssl/new";

# "log" specifies logging level. It is followed by either "notify", "debug"
# or "debug2".
#log debug;

# "padding" defines some padding parameters. You should not touch these.
padding {
	maximum_length	20;	# maximum padding length.
	randomize	off;	# enable randomize length.
	strict_check	off;	# enable strict check.
	exclusive_tail	off;	# extract last one octet.
}

listen
{
	isakmp		aaa.bbb.ccc.ddd [500];
}

# Specify various default timers.
timer {
	# These value can be changed per remote node.
	counter		5;		# maximum trying count to send.
	interval	20 sec;		# maximum interval to resend.
	persend		1;		# the number of packets per send.

	# maximum time to wait for completing each phase.
	phase1 30 sec;
	phase2 15 sec;
}

remote aaa.bbb.ccc.ddd [500] {
	exchange_mode		main;
	doi			ipsec_doi;

	my_identifier		asn1dn;
	peers_identifier	asn1dn;
	verify_identifier	on;
	certificate_type	x509 "host1.ru.crt" "host1.ru.key";
	ca_type			x509 "ca.crt";
	dpd_delay		10;

	lifetime time		12 hour; # sec,min,hour
	passive			off;
	proposal_check		strict; # obey, strict, or claim
	nat_traversal		off;

	proposal {
		encryption_algorithm	aes 256;
		hash_algorithm		sha256;
		authentication_method	rsasig;
		lifetime time		30 sec;
		dh_group		16;
	}
}

[The identical "remote aaa.bbb.ccc.ddd [500]" block appears two more times
in the attachment.]

sainfo anonymous {
	pfs_group			16;
	lifetime time			12 hour;
	encryption_algorithm		aes 256;
	authentication_algorithm	hmac_sha256;
	compression_algorithm		deflate;
}

From owner-freebsd-stable@freebsd.org Wed May 8 00:26:16 2019
From: Michelle Sullivan <michelle@sorbs.net>
To: Paul Mather
Cc: freebsd-stable
Subject: Re: ZFS...
Date: Wed, 08 May 2019 10:25:57 +1000

Paul Mather wrote:
> On May 7, 2019, at 1:02 AM, Michelle Sullivan wrote:
>
>>> On 07 May 2019, at 10:53, Paul Mather wrote:
>>>
>>>> On May 6, 2019, at 10:14 AM, Michelle Sullivan wrote:
>>>>
>>>> My issue here (and not really what the blog is about): FreeBSD is
>>>> defaulting to it.
>>>
>>> You've said this at least twice now in this thread, so I'm assuming
>>> you're asserting it to be true.
>>>
>>> As of FreeBSD 12.0-RELEASE (and all earlier releases), FreeBSD does
>>> NOT default to ZFS.
>>>
>>> The images distributed by freebsd.org, e.g., Vagrant boxes, ARM
>>> images, EC2 instances, etc., contain disk images where FreeBSD
>>> resides on UFS. For example, here's what you end up with when you
>>> launch a 12.0-RELEASE instance using defaults on AWS (us-east-1
>>> region: ami-03b0f822e17669866):
>>>
>>> root@freebsd:/usr/home/ec2-user # gpart show
>>> =>       3  20971509  ada0  GPT  (10G)
>>>          3       123     1  freebsd-boot  (62K)
>>>        126  20971386     2  freebsd-ufs  (10G)
>>>
>>> And this is what you get when you "vagrant up" the
>>> freebsd/FreeBSD-12.0-RELEASE box:
>>>
>>> root@freebsd:/home/vagrant # gpart show
>>> =>        3  65013755  ada0  GPT  (31G)
>>>           3       123     1  freebsd-boot  (62K)
>>>         126   2097152     2  freebsd-swap  (1.0G)
>>>     2097278  62914560     3  freebsd-ufs  (30G)
>>>    65011838      1920        - free -  (960K)
>>>
>>> When you install from the 12.0-RELEASE ISO, the first option listed
>>> during the partitioning stage is "Auto (UFS) Guided Disk Setup".
>>> The last option listed---after "Open a shell and partition by hand"
>>> is "Auto (ZFS) Guided Root-on-ZFS". In other words, you have to
>>> skip over UFS and manual partitioning to select the ZFS install option.
>>>
>>> So, I don't see what evidence there is that FreeBSD is defaulting to
>>> ZFS. It hasn't up to now. Will FreeBSD 13 default to ZFS?
>>
>> Umm.. well I install by memory stick images and I had a 10.2 and an
>> 11.0 both of which had root on zfs as the default.. I had to manually
>> change them. I haven't looked at anything later... so did something
>> change? Am I in cloud cuckoo land?
>
> I don't know about that, but you may well be misremembering. I just
> pulled down the 10.2 and 11.0 installers from
> http://ftp-archive.freebsd.org/pub/FreeBSD-Archive/old-releases and in
> both cases the choices listed in the "Partitioning" step are the same
> as in the current 12.0 installer: "Auto (UFS) Guided Disk Setup" is
> listed first and selected by default.
"Auto (ZFS) Guided Root-on-ZFS" > is listed last (you have to skip past other options such as manually > partitioning by hand to select it). > > I'm confident in saying that ZFS is (or was) not the default > partitioning option in either 10.2 or 11.0 as officially released by > FreeBSD. > > Did you use a custom installer you made yourself when installing 10.2 > or 11.0? it was an emergency USB stick.. so downloaded straight from the website. My process is boot, select "manual" (so I can set single partition and a swap partition as historically it's done other things) select the whole disk and create partition - this is where I saw it... 'freebsd-zfs' as the default. Second 'create' defaults to 'freebsd-swap' which is always correct. Interestingly the -CURRENT installer just says, "freebsd" and not either -ufs or -zfs ... what ever that defaults to I don't know. > > >> >>>> FreeBSD used to be targeted at enterprise and devs (which is where >>>> I found it)... however the last few years have been a big push into >>>> the consumer (compete with Linux) market.. so you have an OS that >>>> concerns itself with the desktop and upgrade after upgrade after >>>> upgrade (not just patching security issues, but upgrades as well.. >>>> just like windows and OSX)... I get it.. the money is in the >>>> keeping of the user base.. but then you install a file system which >>>> is dangerous on a single disk by default... dangerous because it’s >>>> trusted and “can’t fail” .. until it goes titsup.com and then the >>>> entire drive is lost and all the data on it.. it’s the double >>>> standard... advocate you need ECC ram, multiple vdevs etc, then >>>> single drive it.. sorry.. which one is it? Gaaaaaarrrrrrrgggghhhhhhh! >>> >>> >>> As people have pointed out elsewhere in this thread, it's false to >>> claim that ZFS is unsafe on consumer hardware. It's no less safe >>> than UFS on single-disk setups. 
>>>
>>> Because anecdote is not evidence, I will refrain from saying, "I've
>>> lost far more data on UFS than I have on ZFS (especially when SUJ
>>> was shaking out its bugs)..." >;-)
>>>
>>> What I will agree with is that, probably due to its relative youth,
>>> ZFS has fewer forensics/data recovery tools than UFS. I'm sure this
>>> will improve as time goes on. (I even posted a link to an article
>>> describing someone adding ZFS support to a forensics toolkit earlier
>>> in this thread.)
>>
>> The problem I see with that statement is that the zfs dev mailing
>> lists constantly and consistently follow the line that the data is
>> always right, there is no need for a "fsck" (which I actually get),
>> but it's used to shut down every thread... the irony is I'm now
>> installing windows 7 and SP1 on a usb stick (well it's actually
>> installed, but sp1 isn't finished yet) so I can install a zfs data
>> recovery tool which reports to be able to "walk the data" to retrieve
>> all the files... the irony eh... install windows7 on a usb stick to
>> recover a FreeBSD installed zfs filesystem... will let you know if
>> the tool works, but as it was recommended by a dev I'm hopeful...
>> have another array (with zfs I might add) loaded and ready to go...
>> if the data recovery is successful I'll blow away the original
>> machine and work out what OS and drive setup will be safe for the
>> data in the future. I might even put FreeBSD and zfs back on it, but
>> if I do it won't be in the current Zraid2 config.
>
> There is no more irony in installing a data recovery tool to recover a
> trashed ZFS pool than there is in installing one to recover a trashed
> UFS file system. No file system is bulletproof, which is why everyone
> I know recommends a backup/disaster recovery strategy commensurate
> with the value you place on your data. There WILL be some combination
> of events that will lead to irretrievable data loss.
> Your extraordinary sequence of mishaps apparently met the threshold
> for ZFS on your setup.
>
> I don't see how any of this leads to the conclusion that ZFS is
> "dangerous" to use as a file system.

For me the 'dangerous' threshold is when it comes to 'all or nothing'.
UFS - even when trashed (and I might add I've never had it completely
trashed on a production image) - has tools to recover what is left of
the data. There are no such tools for zfs (barring the one I'm about to
test - which will be interesting to see if it works... but even then,
installing windows to recover freebsd :D )

> What I believe is dangerous is relying on a post-mortem crash data
> recovery methodology as a substitute for a backup strategy for data
> that, in hindsight, is considered important enough to keep. No matter
> how resilient ZFS or UFS may be, they are no substitute for backups
> when it comes to data you care about. (File system resiliency will
> not protect you, e.g., from ransomware or other malicious or
> accidental acts of data destruction.)

True, but nothing is perfect, even backups (how many times have we seen
or heard of stories where backups didn't actually work - and the problem
was only identified when trying to recover from one?). My situation has
been made worse by the fact I was reorganising everything when it went
down - so my backups (of the important stuff) were not there, and that
was a direct consequence of me throwing caution to the wind years before
and no longer keeping the full mirror of the data... due to lack of
space.

Interestingly, I have had another drive die in the array - and it
doesn't just have one or two bad sectors, it has a *lot* - which was not
noticed by the original machine. I moved the drive to a byte copier,
which is where it's reporting hundreds of damaged sectors... could this
be compounded by the zfs/mfi driver/hba not picking up errors like it
should?
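[Editor's note: the closing question - whether the driver/HBA stack was
silently swallowing disk errors ZFS should have seen - can be checked
mechanically by watching the READ/WRITE/CKSUM counters in `zpool status`
and comparing against SMART data. A minimal sketch follows; the sample
output embedded below is invented for illustration, and a real check
would shell out to `zpool status` (and `smartctl`) instead.]

```python
import re

def device_errors(zpool_status: str):
    """Parse the config table of `zpool status` output and return
    {device: (read, write, cksum)} for every vdev row showing a
    non-zero error counter. ZFS only counts errors it actually sees;
    a flaky HBA can hide problems, so pair this with SMART checks."""
    errors = {}
    for line in zpool_status.splitlines():
        # Rows look like: "            da2     FAULTED     12     3     0"
        m = re.match(r"\s+(\S+)\s+(ONLINE|DEGRADED|FAULTED)"
                     r"\s+(\d+)\s+(\d+)\s+(\d+)", line)
        if m:
            name, _state, r, w, c = m.groups()
            if int(r) or int(w) or int(c):
                errors[name] = (int(r), int(w), int(c))
    return errors

# Invented sample output, for illustration only:
SAMPLE = """\
  pool: storage
 state: DEGRADED
config:
        NAME        STATE     READ WRITE CKSUM
        storage     DEGRADED     0     0     0
          raidz2-0  DEGRADED     0     0     0
            da0     ONLINE       0     0     0
            da1     ONLINE       0     0   137
            da2     FAULTED     12     3     0
"""
print(device_errors(SAMPLE))  # {'da1': (0, 0, 137), 'da2': (12, 3, 0)}
```

Run periodically, a non-zero CKSUM count on an ONLINE disk is exactly the
kind of early warning that would have flagged the failing drive before it
was moved to the byte copier.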
Michelle -- Michelle Sullivan http://www.mhix.org/ From owner-freebsd-stable@freebsd.org Wed May 8 01:01:24 2019 Return-Path: Delivered-To: freebsd-stable@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2610:1c1:1:606c::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 65D051598C28 for ; Wed, 8 May 2019 01:01:24 +0000 (UTC) (envelope-from michelle@sorbs.net) Received: from hades.sorbs.net (hades.sorbs.net [72.12.213.40]) by mx1.freebsd.org (Postfix) with ESMTP id 5542784323 for ; Wed, 8 May 2019 01:01:23 +0000 (UTC) (envelope-from michelle@sorbs.net) MIME-version: 1.0 Content-type: text/plain; charset=UTF-8; format=flowed Received: from isux.com (gate.mhix.org [203.206.128.220]) by hades.sorbs.net (Oracle Communications Messaging Server 7.0.5.29.0 64bit (built Jul 9 2013)) with ESMTPSA id <0PR50014SVHKZI00@hades.sorbs.net> for freebsd-stable@freebsd.org; Tue, 07 May 2019 18:15:23 -0700 (PDT) Subject: Re: ZFS... To: Karl Denninger , freebsd-stable@freebsd.org References: <30506b3d-64fb-b327-94ae-d9da522f3a48@sorbs.net> <56833732-2945-4BD3-95A6-7AF55AB87674@sorbs.net> <3d0f6436-f3d7-6fee-ed81-a24d44223f2f@netfence.it> <17B373DA-4AFC-4D25-B776-0D0DED98B320@sorbs.net> <70fac2fe3f23f85dd442d93ffea368e1@ultra-secure.de> <70C87D93-D1F9-458E-9723-19F9777E6F12@sorbs.net> <5ED8BADE-7B2C-4B73-93BC-70739911C5E3@sorbs.net> <2e4941bf-999a-7f16-f4fe-1a520f2187c0@sorbs.net> <20190430102024.E84286@mulder.mintsol.com> <41FA461B-40AE-4D34-B280-214B5C5868B5@punkt.de> <20190506080804.Y87441@mulder.mintsol.com> <08E46EBF-154F-4670-B411-482DCE6F395D@sorbs.net> <33D7EFC4-5C15-4FE0-970B-E6034EF80BEF@gromit.dlib.vt.edu> From: Michelle Sullivan Message-id: Date: Wed, 08 May 2019 11:01:19 +1000 User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.11; rv:51.0) Gecko/20100101 Firefox/51.0 SeaMonkey/2.48 In-reply-to: Content-transfer-encoding: quoted-printable X-Rspamd-Queue-Id: 5542784323 X-Spamd-Bar: - Authentication-Results: mx1.freebsd.org; spf=pass 
(mx1.freebsd.org: domain of michelle@sorbs.net designates 72.12.213.40 as permitted sender) smtp.mailfrom=michelle@sorbs.net X-Spamd-Result: default: False [-1.88 / 15.00]; ARC_NA(0.00)[]; RCVD_VIA_SMTP_AUTH(0.00)[]; NEURAL_HAM_MEDIUM(-0.97)[-0.972,0]; FROM_HAS_DN(0.00)[]; TO_DN_SOME(0.00)[]; R_SPF_ALLOW(-0.20)[+a:hades.sorbs.net]; NEURAL_HAM_LONG(-1.00)[-1.000,0]; MIME_GOOD(-0.10)[text/plain]; DMARC_NA(0.00)[sorbs.net]; NEURAL_SPAM_SHORT(0.21)[0.211,0]; TO_MATCH_ENVRCPT_SOME(0.00)[]; MX_GOOD(-0.01)[cached: battlestar.sorbs.net]; RCPT_COUNT_TWO(0.00)[2]; RCVD_IN_DNSWL_NONE(0.00)[40.213.12.72.list.dnswl.org : 127.0.10.0]; SUBJ_ALL_CAPS(0.45)[6]; IP_SCORE(-0.36)[ip: (-0.91), ipnet: 72.12.192.0/19(-0.47), asn: 11114(-0.37), country: US(-0.06)]; RCVD_NO_TLS_LAST(0.10)[]; FROM_EQ_ENVFROM(0.00)[]; R_DKIM_NA(0.00)[]; MIME_TRACE(0.00)[0:+]; ASN(0.00)[asn:11114, ipnet:72.12.192.0/19, country:US]; MID_RHS_MATCH_FROM(0.00)[]; RCVD_COUNT_TWO(0.00)[2] X-BeenThere: freebsd-stable@freebsd.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Production branch of FreeBSD source code List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 08 May 2019 01:01:24 -0000 Karl Denninger wrote: > On 5/7/2019 00:02, Michelle Sullivan wrote: >> The problem I see with that statement is that the zfs dev mailing list= s constantly and consistently following the line of, the data is always r= ight there is no need for a =E2=80=9Cfsck=E2=80=9D (which I actually get)= but it=E2=80=99s used to shut down every thread... the irony is I=E2=80=99= m now installing windows 7 and SP1 on a usb stick (well it=E2=80=99s actu= ally installed, but sp1 isn=E2=80=99t finished yet) so I can install a zf= s data recovery tool which reports to be able to =E2=80=9Cwalk the data=E2= =80=9D to retrieve all the files... the irony eh... install windows7 on = a usb stick to recover a FreeBSD installed zfs filesystem... 
>> will let you know if the tool works, but as it was recommended by a dev
>> I’m hopeful... have another array (with ZFS, I might add) loaded and
>> ready to go... if the data recovery is successful I’ll blow away the
>> original machine and work out what OS and drive setup will be safe for
>> the data in the future. I might even put FreeBSD and ZFS back on it,
>> but if I do it won’t be in the current Zraid2 config.

> Meh.
>
> Hardware failure is, well, hardware failure.  Yes, power-related
> failures are hardware failures.
>
> Never mind the potential for /software/ failures.  Bugs are, well,
> bugs.  And they're a real thing.  Never had the shortcomings of UFS bite
> you on an "unexpected" power loss?  Well, I have.  Is ZFS absolutely
> safe against any such event?  No, but it's safe*r*.

Yes and no ... I'll explain...

> I've yet to have ZFS lose an entire pool due to something bad happening,
> but the same basic risk (entire filesystem being gone)

Every time I have seen this issue (and it's been more than once - though
until now recoverable, even if extremely painful) it has always been
during a resilver of a failed drive with something else happening: a
panic, another drive failure, a power loss, etc. At any other time it's
rock solid, which is the "yes and no"... under normal circumstances ZFS
is very, very good and seems as safe as or safer than UFS. But in my
experience ZFS has one really bad flaw: if there is a corruption in the
metadata - even if the stored data is 100% correct - it will fault the
pool, and that's it, it's gone, barring some luck and painful recovery
(backups aside). Other filesystems also suffer this, but they have tools
that, the majority of the time, will get you out of the s**t with little
pain.  Barring this Windows-based tool I haven't been able to run yet,
ZFS appears to have nothing.
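For what it's worth, the usual "salvage what you can" escalation before declaring a pool dead can be sketched as follows. This is a dry-run sketch, not a tested procedure: the pool name `tank` is a placeholder, and the script only prints each command rather than executing it:

```shell
#!/bin/sh
# Dry-run sketch: print, rather than execute, the usual non-destructive
# recovery escalation for a pool that refuses to import.
# "tank" is a placeholder pool name.
recovery_steps() {
    pool="$1"
    # 1. Read-only import: no writes happen, so nothing can get worse.
    echo "zpool import -o readonly=on -f ${pool}"
    # 2. Inspect the on-disk metadata from the exported devices with zdb.
    echo "zdb -e ${pool}"
    # 3. Rewind import: roll back to an earlier transaction group,
    #    discarding the most recent writes.
    echo "zpool import -F ${pool}"
}
recovery_steps tank
```

The first two steps touch nothing on disk; the rewind import in step 3 does discard the newest transactions, which is why it comes last, just before any "destroy the pool and restore from backup" step.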
> has occurred more
> than once in my IT career with other filesystems -- including UFS, lowly
> MSDOS and NTFS, never mind their predecessors all the way back to floppy
> disks and the first 5Mb Winchesters.

Absolutely, been there, done that... and btrfs... *ouch*, still as bad.
However, with the only btrfs install I ever had (I didn't know it was
btrfs underneath - it was a Netgear NAS), I was still able to recover the
data, even though it had screwed the filesystem so badly that I vowed
never to consider or use it again on anything, ever...

> I learned a long time ago that two is one and one is none when it comes
> to data, and WHEN two becomes one you SWEAT, because that second failure
> CAN happen at the worst possible time.

and does..

> As for RaidZ2 vs. mirrored, it's not as simple as you might think.
> Mirrored vdevs can only lose one member per mirror set, unless you use
> three-member mirrors.  That sounds insane but actually it isn't in
> certain circumstances, such as very-read-heavy and high-performance-read
> environments.

I know - this is why I don't use mirrored - because wear patterns will
ensure both sides of the mirror are closely matched.

> The short answer is that a 2-way mirrored set is materially faster on
> reads but has no acceleration on writes, and can lose one member per
> mirror.  If the SECOND one fails before you can resilver, and that
> resilver takes quite a long while if the disks are large, you're dead.
> However, if you do six drives as a 2x3 way mirror (that is, 3 vdevs each
> of a 2-way mirror) you now have three parallel data paths going at once
> and potentially six for reads -- and performance is MUCH better.  A
> 3-way mirror can lose two members (and could be organized as 3x2) but
> obviously requires lots of drive slots, 3x as much *power* per gigabyte
> stored (and you pay for power twice; once to buy it and again to get the
> heat out of the room where the machine is.)
my problem (as always) is slots, not so much the power.

> Raidz2 can also lose 2 drives without being dead.  However, it doesn't
> get any of the read performance improvement *and* takes a write
> performance penalty; Z2 has more write penalty than Z1 since it has to
> compute and write two parity entries instead of one, although in theory
> at least it can parallel those parity writes -- albeit at the cost of
> drive bandwidth congestion (e.g. interfering with other accesses to the
> same disk at the same time.)  In short RaidZx performs about as "well"
> as the *slowest* disk in the set.

Which is why I built mine with identical drives (though different
production batches :) )... the majority of the data in my storage array
is write once (or twice), read many.

> So why use it (particularly Z2) at
> all?  Because for "N" drives you get the protection of a 3-way mirror
> and *much* more storage.  A six-member RaidZ2 setup returns ~4Tb of
> usable space, where a 2-way mirror returns 3Tb and a 3-way
> mirror (which provides the same protection against drive failure as Z2)
> gives you only *half* the storage.  IMHO ordinary Raidz isn't worth the
> trade-offs, but Z2 frequently is.
>
> In addition more spindles means more failures, all other things being
> equal, so if you need "X" TB of storage and organize it as 3-way mirrors
> you now have twice as many physical spindles, which means on average
> you'll take twice as many faults.  If performance is more important then
> the choice is obvious.  If density is more important (that is, a lot or
> even most of the data is rarely accessed at all) then the choice is
> fairly simple too.  In many workloads you have some of both, and thus
> the correct choice is a hybrid arrangement; that's what I do here,
> because I have a lot of data that is rarely-to-never accessed and
> read-only but also have some data that is frequently accessed and
> frequently written.  One size does not fit all in such a workload.
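Karl's capacity figures check out arithmetically. A quick sketch for six 1 TB drives (raw redundancy accounting only; real pools lose a little more to metadata and slop space):

```shell
#!/bin/sh
# Usable capacity of six 1 TB drives under the layouts discussed above.
# Raw redundancy accounting only -- real pools give slightly less.
drives=6
size_tb=1

raidz2=$(( (drives - 2) * size_tb ))   # loses two parity drives' worth
mirror2=$(( drives / 2 * size_tb ))    # 3 vdevs of 2-way mirrors
mirror3=$(( drives / 3 * size_tb ))    # 2 vdevs of 3-way mirrors

echo "raidz2:       ${raidz2} TB usable"   # ~4 TB, as above
echo "2-way mirror: ${mirror2} TB usable"  # 3 TB
echo "3-way mirror: ${mirror3} TB usable"  # 2 TB -- half of raidz2
```

The half-the-storage comparison in the quoted text is the 2 TB of the 3-way mirror against the 4 TB of raidz2, both of which tolerate two drive failures.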
This is where I came to 2 systems (with different data)... one was for
density, the other performance. Storage vs. working, etc.

> MOST systems, by the way, have this sort of paradigm (a huge percentage
> of the data is rarely read and never written) but it doesn't become
> economic or sane to try to separate them until you get well into the
> terabytes of storage range and a half-dozen or so physical volumes.
> There's a very clean argument that prior to that point, but with greater
> than one drive, mirrored is always the better choice.
>
> Note that if you have an *adapter* go insane (and as I've noted here
> I've had it happen TWICE in my IT career!) then *all* of the data on the
> disks served by that adapter is screwed.

100% with you - been there, done that... and it doesn't matter what OS or
filesystem: a hardware failure where silent data corruption happens
because of an adapter will always take you out (and ZFS will not save
you in many cases of that either.)

> It doesn't make a bit of difference what filesystem you're using in that
> scenario, and thus you had better have a backup scheme and make sure it
> works as well, never mind software bugs or administrator stupidity ("dd"
> as root to the wrong target, for example, will reliably screw you every
> single time!)
>
> For a single-disk machine ZFS is no *less* safe than UFS and provides a
> number of advantages, with arguably the most important being easily-used
> snapshots.

Depends; in normal operation I agree... but when it comes to all or
nothing, that is a matter of perspective.  Personally I prefer to have
in-place recovery options and/or multiple *possible* recovery options
rather than "destroy the pool and recreate it from scratch, hope you
have backups"...
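The snapshot and boot-environment workflow Karl alludes to maps onto a handful of stock FreeBSD commands. The dataset name `tank/home` and the boot-environment name `pre-upgrade` below are placeholders, and the sketch only prints what would be run:

```shell
#!/bin/sh
# Dry-run sketch of the snapshot / boot-environment workflow.
# "tank/home" and "pre-upgrade" are placeholder names.
be_workflow() {
    # Coherent point-in-time copy; backups never chase a moving target.
    echo "zfs snapshot tank/home@nightly"
    # Incremental send: only blocks changed since the previous snapshot.
    echo "zfs send -i @last-night tank/home@nightly"
    # Checkpoint the OS before an update...
    echo "bectl create pre-upgrade"
    # ...and switch boot environments to roll forward or back at reboot.
    echo "bectl activate pre-upgrade"
}
be_workflow
```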
> Not only does this simplify backups, since coherency during
> the backup is never at issue and incremental backups become fast and
> easily done; in addition, boot environments make roll-forward and even
> *roll-back* reasonable to implement for software updates -- a critical
> capability if you ever run an OS version update and something goes
> seriously wrong with it.  If you've never had that happen then consider
> yourself blessed;

I have been there (especially in the early days (pre-0.83 kernel) of
Linux :) )

> it's NOT fun to manage in a UFS environment and often
> winds up leading to a "restore from backup" scenario.  (To be fair it
> can be with ZFS too if you're foolish enough to upgrade the pool before
> being sure you're happy with the new OS rev.)

Actually I have a simple way with UFS (and ext2/3/4 etc.): split the
boot disk almost down the center and create 3 partitions: root, swap,
altroot.  root and altroot are almost identical; one is always active,
the new OS goes on the other, and you switch to make the other the
active (primary) one once you've tested.  It only gives one level of
roll forward/roll back, but it works for me and has never failed (boot
disk/OS wise) since I implemented it... but then I don't let anyone else
in the company have root access, so they cannot dd or "rm -r . /" or
"rm -r .*" (both of which are the only ways I have done that before -
back in 1994, and never since - it's something you learn, or you get out
of IT :P ... and for those who didn't get the latter, it should have
been 'rm -r .??*' - and why are you on '-stable' ...? :P )
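The `rm -r .*` footnote deserves spelling out: in most shells `.*` matches `..`, so the command recurses into the parent directory, while `.??*` only matches dot-names of three or more characters and therefore skips `.` and `..` (at the cost of also skipping two-character names like `.x`). A quick demonstration in a scratch directory:

```shell
#!/bin/sh
# Show what ".*" and ".??*" actually expand to in a scratch directory.
dir=$(mktemp -d)
cd "$dir" || exit 1
touch .profile .x regular-file

echo ".*   ->" .*    # includes "." and ".." -- this is the dangerous part
echo ".??* ->" .??*  # only .profile; "." ".." and ".x" are skipped

cd / && rm -rf "$dir"
```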
Regards,

--
Michelle Sullivan
http://www.mhix.org/

From owner-freebsd-stable@freebsd.org Wed May 8 01:12:54 2019
Subject: Re: ZFS...
From: Joe Maloney <jmaloney@ixsystems.com>
Date: Tue, 7 May 2019 21:12:50 -0400
Cc: Karl Denninger, freebsd-stable@freebsd.org
To: Michelle Sullivan
You might look at UFS Explorer.  It claims to have ZFS support now.  It
costs money for a license, and I think it required Windows last I used
it.  I can attest that a previous version allowed me to recover all the
data I needed from a lost UFS mirror almost a decade ago.
Sent from my iPhone

> On May 7, 2019, at 9:01 PM, Michelle Sullivan wrote:
>
> [...]
>
> Regards,
>
> --
> Michelle Sullivan
> http://www.mhix.org/

_______________________________________________
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscribe@freebsd.org"

From owner-freebsd-stable@freebsd.org Wed May 8 01:29:58 2019
Subject: Re: route based ipsec
To: KOT MATPOCKuH, "Andrey V. Elsukov"
Cc: stable@freebsd.org
From: Eugene Grosbein <eugen@grosbein.net>
Date: Wed, 8 May 2019 08:29:38 +0700

08.05.2019 3:23, KOT MATPOCKuH wrote:
> I don't understand what in my configuration can result in core dumps of
> a running daemon...
> I attached a sample racoon.conf. Can you check it for possible problems?
> Also on one host I got a crash in another function:
> (gdb) bt
> #0  0x000000000024717f in privsep_init ()
> #1  0x00000000002375f4 in inscontacted ()
> #2  0x00000000002337d0 in isakmp_plist_set_all ()
> #3  0x000000000023210d in isakmp_ph2expire ()
> #4  0x000000000023162a in isakmp_ph1delete ()
> #5  0x000000000023110b in isakmp_ph2resend ()
> #6  0x00000008002aa000 in ?? ()
> #7  0x0000000000000000 in ?? ()

I guess the configuration using certificates is not tested enough.
It works stably for me, but I use PSK only.  You need to fix the code
yourself or stop using racoon with certificates.

From owner-freebsd-stable@freebsd.org Wed May 8 03:10:06 2019
From: Walter Parker <walterp@gmail.com>
Date: Tue, 7 May 2019 20:09:49 -0700
Subject: Re: ZFS...
To: freebsd-stable@freebsd.org X-Rspamd-Queue-Id: CD2CF88447 X-Spamd-Bar: ------ Authentication-Results: mx1.freebsd.org; dkim=pass header.d=gmail.com header.s=20161025 header.b=gxKG9YpI; dmarc=pass (policy=none) header.from=gmail.com; spf=pass (mx1.freebsd.org: domain of walterp@gmail.com designates 2607:f8b0:4864:20::135 as permitted sender) smtp.mailfrom=walterp@gmail.com X-Spamd-Result: default: False [-6.09 / 15.00]; R_SPF_ALLOW(-0.20)[+ip6:2607:f8b0:4000::/36]; FREEMAIL_FROM(0.00)[gmail.com]; TO_DN_NONE(0.00)[]; DKIM_TRACE(0.00)[gmail.com:+]; DMARC_POLICY_ALLOW(-0.50)[gmail.com,none]; SUBJ_ALL_CAPS(0.45)[6]; MX_GOOD(-0.01)[cached: alt3.gmail-smtp-in.l.google.com]; FROM_EQ_ENVFROM(0.00)[]; RCVD_TLS_LAST(0.00)[]; MIME_TRACE(0.00)[0:+,1:+]; FREEMAIL_ENVFROM(0.00)[gmail.com]; ASN(0.00)[asn:15169, ipnet:2607:f8b0::/32, country:US]; DWL_DNSWL_NONE(0.00)[gmail.com.dwl.dnswl.org : 127.0.5.0]; ARC_NA(0.00)[]; NEURAL_HAM_MEDIUM(-1.00)[-1.000,0]; R_DKIM_ALLOW(-0.20)[gmail.com:s=20161025]; FROM_HAS_DN(0.00)[]; TO_MATCH_ENVRCPT_ALL(0.00)[]; NEURAL_HAM_LONG(-1.00)[-1.000,0]; MIME_GOOD(-0.10)[multipart/alternative,text/plain]; PREVIOUSLY_DELIVERED(0.00)[freebsd-stable@freebsd.org]; RCPT_COUNT_ONE(0.00)[1]; NEURAL_HAM_SHORT(-0.75)[-0.753,0]; IP_SCORE(-2.78)[ip: (-8.37), ipnet: 2607:f8b0::/32(-3.22), asn: 15169(-2.26), country: US(-0.06)]; RCVD_IN_DNSWL_NONE(0.00)[5.3.1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.2.0.0.4.6.8.4.0.b.8.f.7.0.6.2.list.dnswl.org : 127.0.5.0]; RCVD_COUNT_TWO(0.00)[2] Content-Type: text/plain; charset="UTF-8" X-Content-Filtered-By: Mailman/MimeDel 2.1.29 X-BeenThere: freebsd-stable@freebsd.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Production branch of FreeBSD source code List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 08 May 2019 03:10:06 -0000 > > > Everytime I have seen this issue (and it's been more than once - though > until now recoverable - even if extremely painful) - its always been > during 
> a resilver of a failed drive and something happening... panic, another
> drive failure, power etc.. any other time it's rock solid... which is the
> yes and no... under normal circumstances ZFS is very, very good and seems
> as safe as or safer than UFS... but my experience is ZFS has one really
> bad flaw: if there is a corruption in the metadata - even if the stored
> data is 100% correct - it will fault the pool, and that's it, it's gone,
> barring some luck and painful recovery (backups aside)... other file
> systems also suffer this, but there are tools that *the majority of the
> time* will get you out of the s**t with little pain. Barring this
> Windows-based tool I haven't been able to run yet, ZFS appears to have
> nothing.

This is the difference I see here. You keep saying that all of the data on the drive is 100% correct, and that it is only the metadata on the drive that is incorrect/corrupted. How do you know this? Especially, how do you know it before you have recovered the data from the drive? ZFS metadata is stored redundantly on the drive and never in an inconsistent form (fixing the inconsistent data that most other filesystems leave behind when they crash or have disk issues is exactly what fsck does). If the metadata is corrupted, how would ZFS know what is correct (computers don't understand things, they just follow the numbers)? If the redundant copies of the metadata are corrupt, what are the odds that the file data is also corrupt? In my experience, getting the metadata trashed without any of the file data being trashed is a rare event on a system with multi-drive redundancy.

I have a friend/business partner that doesn't want to move to ZFS because his recovery method is to wait for a single drive (no redundancy, sometimes no backup) to fail and then use ddrescue to image the broken drive to a new drive (ignoring any file corruption, because you can't really tell without ZFS).
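For reference, the ddrescue-style imaging workflow mentioned above can be sketched roughly as follows. This is a minimal sketch, not anyone's actual procedure: the pool name "tank", the device nodes, and the map-file path are all placeholders, GNU ddrescue is assumed installed, and the commands need root plus a spare drive at least as large as the failing one.

```shell
# Hypothetical sketch: image a failing pool member onto a fresh drive
# before attempting further recovery. /dev/ada1 (failing drive),
# /dev/ada5 (fresh drive), and the pool name "tank" are placeholders.

# Stop all I/O to the pool first.
zpool export tank

# First pass: copy everything readable, skipping bad areas quickly (-n),
# recording progress in a map file so the run can be resumed.
ddrescue -f -n /dev/ada1 /dev/ada5 /root/ada1.map

# Second pass: go back and retry the bad areas a few times (-r3).
ddrescue -f -r3 /dev/ada1 /dev/ada5 /root/ada1.map

# Remove the failing drive, leave the copy in its place, and re-import.
# ZFS locates pool members by the labels written on the disks themselves,
# so the copy is accepted regardless of its device name.
zpool import tank
```

The map file is what makes this safer than a plain dd: interrupted runs resume where they left off, and retries only touch the sectors that failed.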
He's been using disk rescue programs for so long that he will not move to ZFS, because it doesn't have a disk rescue program. He has systems on Linux with ext3 and no mirroring or backups. I've asked about moving them to a mirrored ZFS system and he has told me that the customer doesn't want to pay for a second drive (but will pay for hours of his time to fix the problem when it happens).

You kind of sound like him: ZFS is "risky" because there isn't a good drive rescue program. Sun's design was that the system should be redundant by default and checksum everything. If the drives fail, replace them. If they fail too much or too fast, restore from backup. Once the system has too much corruption, you can't recover from or even check for all the damage without a second, off-disk copy; and if you have that off-disk copy, then you have a backup. They didn't build for the standard use case as found in PCs, because disk recovery programs rarely get everything back and therefore can't be relied on to get your data back when your data is important. Many PC owners have brought PC-mindset ideas to the "UNIX" world. Sun's history predates Windows and Mac and comes from a mini/mainframe mindset (where people tried not to guess about data integrity).

Would a disk rescue program for ZFS be a good idea? Sure. Should the lack of a disk recovery program stop you from using ZFS? No. If you think so, I suggest that you have your data integrity priorities in the wrong order (focusing on small, rare events rather than the common base case).

Walter

--
The greatest dangers to liberty lurk in insidious encroachment by men of zeal, well-meaning but without understanding. -- Justice Louis D.
Brandeis

From owner-freebsd-stable@freebsd.org Wed May 8 06:48:26 2019
From: Borja Marcos <borjam@sarenet.es>
Date: Wed, 8 May 2019 08:48:14 +0200
To: Walter Parker
Cc: freebsd-stable@freebsd.org
Subject: Re: ZFS...

> On 8 May 2019, at 05:09, Walter Parker wrote:
> Would a disk rescue program for ZFS be a good idea? Sure. Should the lack
> of a disk recovery program stop you from using ZFS? No. If you think so, I
> suggest that you have your data integrity priorities in the wrong order
> (focusing on small, rare events rather than the common base case).

ZFS is certainly different from other filesystems. Its self-healing capabilities help it survive problems that would destroy others. But if you reach a level of damage past that "tolerable" threshold, consider yourself dead.

Is it possible at all to write an effective repair tool? It would be really complicated.

By the way, ddrescue can help in a multiple-drive failure scenario with ZFS.
If some of the drives are showing the typical problem of "flaky" sectors, with a lot of retries slowing down the whole pool, you can shut down the system or at least export the pool, copy the required drive/s to fresh ones, replace the flaky drives, and try to import the pool. I would first do the experiment to make sure it's harmless, but ZFS relies on labels written on the disks to import a pool, regardless of disk controller topology, device names, UUIDs, or whatever. So a full disk copy should work.

Michelle, were you doing periodic scrubs? I'm not sure you mentioned it.

Borja.

From owner-freebsd-stable@freebsd.org Wed May 8 11:29:26 2019
From: Michelle Sullivan <michelle@sorbs.net>
Date: Wed, 08 May 2019 21:29:18 +1000
To: Borja Marcos, Walter Parker
Cc: freebsd-stable@freebsd.org
Subject: Re: ZFS...
Message-id: <6d3274c5-130f-8398-f272-af01d9551448@sorbs.net>

Borja Marcos via freebsd-stable wrote:
>> On 8 May 2019, at 05:09, Walter Parker wrote:
>> Would a disk rescue program for ZFS be a good idea? Sure. Should the lack
>> of a disk recovery program stop you from using ZFS? No. If you think so, I
>> suggest that you have your data integrity priorities in the wrong order
>> (focusing on small, rare events rather than the common base case).
>
> ZFS is certainly different from other filesystems. Its self-healing
> capabilities help it survive problems that would destroy others. But if
> you reach a level of damage past that "tolerable" threshold, consider
> yourself dead.

bingo.

> Is it possible at all to write an effective repair tool? It would be
> really complicated.

which is why I don't think a 'repair tool' is the correct way to go.. I get the ZFS devs saying 'no' to it, I really do. A tool to scan and salvage (if possible) the data on it is what it needs, I think... copy off, rebuild the structure (reformat) and copy back. This tool is what I was pointed at: https://www.klennet.com/zfs-recovery/default.aspx ... no idea if it works yet.. but if it does what it says it does, it is the 'missing link' I'm looking for... just I am having issues getting Windows 7 with SP1 on a USB stick to get .NET 4.5 on it to run the software... :/ (only been at it 2 days though, so time yet.)

> By the way, ddrescue can help in a multiple-drive failure scenario with
> ZFS.

Been there, done that - that's how I rescued it when it was damaged in shipping.. though I think I used 'recoverdisk' rather than ddrescue ... pretty much the same thing, if not the same code. Sector-copied all three dead drives to new drives, put the three dead back in, brought them back online and then let it resilver...
the data was recovered intact, not reporting any permanent errors.

> If some of the drives are showing the typical problem of "flaky" sectors
> with a lot of retries slowing down the whole pool, you can shut down the
> system or at least export the pool, copy the required drive/s to fresh
> ones, replace the flaky drives and try to import the pool. I would first
> do the experiment to make sure it's harmless, but ZFS relies on labels
> written on the disks to import a pool regardless of disk controller
> topology, device names, UUIDs, or whatever. So a full disk copy should
> work.

Don't need to test it... been there, done that - it works.

> Michelle, were you doing periodic scrubs? I'm not sure you mentioned it.

Yes, though once a month, as it took 2 weeks to complete.

Michelle

--
Michelle Sullivan
http://www.mhix.org/

From owner-freebsd-stable@freebsd.org Wed May 8 13:17:29 2019
From: Paul Mather <paul@gromit.dlib.vt.edu>
Date: Wed, 8 May 2019 09:17:20 -0400
To: Michelle Sullivan
Cc: freebsd-stable
Subject: Re: ZFS...
Message-Id: <4A485B46-1C3F-4EE0-8193-ADEB88F322E8@gromit.dlib.vt.edu>

On May 7, 2019, at 8:25 PM, Michelle Sullivan wrote:

> Paul Mather wrote:
>> On May 7, 2019, at 1:02 AM, Michelle Sullivan wrote:

[[...]]

>>> Umm.. well I install by memory stick images and I had a 10.2 and an
>>> 11.0, both of which had root on ZFS as the default.. I had to manually
>>> change them. I haven't looked at anything later... so did something
>>> change? Am I in cloud cuckoo land?
>>
>> I don't know about that, but you may well be misremembering. I just
>> pulled down the 10.2 and 11.0 installers from
>> http://ftp-archive.freebsd.org/pub/FreeBSD-Archive/old-releases and in
>> both cases the choices listed in the "Partitioning" step are the same as
>> in the current 12.0 installer: "Auto (UFS) Guided Disk Setup" is listed
>> first and selected by default. "Auto (ZFS) Guided Root-on-ZFS" is
>> listed last (you have to skip past other options such as manually
>> partitioning by hand to select it).
>>
>> I'm confident in saying that ZFS is (or was) not the default
>> partitioning option in either 10.2 or 11.0 as officially released by
>> FreeBSD.
>>
>> Did you use a custom installer you made yourself when installing 10.2 or
>> 11.0?
>
> it was an emergency USB stick.. so downloaded straight from the website.
>
> My process is boot, select "manual" (so I can set a single partition and
> a swap partition, as historically it's done other things), select the
> whole disk and create a partition - this is where I saw it...
> 'freebsd-zfs' as the default. Second 'create' defaults to 'freebsd-swap',
> which is always correct.
> Interestingly, the -CURRENT installer just says "freebsd" and not either
> -ufs or -zfs ... whatever that defaults to I don't know.

I still fail to see from where you are getting the ZFS-default idea. Using the 10.2 installer, for example, when you select "Manual" partitioning and click through the defaults, the "Type" you are offered when creating the first file system is "freebsd-ufs". If you want to edit that, the help text says "Filesystem type (e.g. freebsd-ufs, freebsd-zfs, freebsd-swap)" (i.e., freebsd-ufs is listed preferentially to freebsd-zfs).

That is all aside from the fact that by choosing to skip past the default "Auto (UFS) Guided Disk Setup" and choose "Manual Disk Setup (experts)" you are choosing an option that assumes you are an "expert" and thus are knowledgeable about, and responsible for, the choices you make, whatever the subsequent menus may offer.

Again, I suggest there's no basis for the allegation that it's bad that FreeBSD is defaulting to ZFS, because that is NOT what it's doing (and I'm unaware of any plans for 13 to do so).

>> I don't see how any of this leads to the conclusion that ZFS is
>> "dangerous" to use as a file system.
>
> For me the 'dangerous' threshold is when it comes to 'all or nothing'.
> UFS - even when trashed (and I might add I've never had it completely
> trashed on a production image) - there are tools to recover what is left
> of the data. There are no such tools for ZFS (barring the one I'm about
> to test - which will be interesting to see if it works... but even then,
> installing Windows to recover FreeBSD :D )

You're saying that ZFS is dangerous because it has no tools for catastrophic data recovery... other than the one you are in the process of trying to use, and the ones that others on this thread have suggested to you. :-\ I'm having a hard time grappling with this logic.

>> What I believe is dangerous is relying on a post-mortem crash data
>> recovery methodology as a substitute for a backup strategy for data
>> that, in hindsight, is considered important enough to keep. No matter
>> how resilient ZFS or UFS may be, they are no substitute for backups when
>> it comes to data you care about. (File system resiliency will not
>> protect you, e.g., from ransomware or other malicious or accidental acts
>> of data destruction.)
>
> True, but nothing is perfect, even backups (how many times have we seen
> or heard of stories where backups didn't actually work - and the problem
> was only identified when trying to recover from a problem?)

This is the nature of disaster recovery and continuity planning. The solutions adopted are individualistic and commensurate with the anticipated risk/loss. I agree that backups are themselves subject to risk that must be managed. Yet I don't consider backups "dangerous". I don't know what the outcome of your risk assessment was, or what you determined to be your RPO and RTO for disaster recovery, so I can't comment on whether it was realistic. Whatever you chose was based on your situation, not mine, and it is a choice you have to live with. (Bear in mind that "not to decide is to decide.")

> My situation has been made worse by the fact I was reorganising
> everything when it went down - so my backups (of the important stuff)
> were not there, and that was a direct consequence of me throwing caution
> to the wind years before and stopping keeping the full mirror of the
> data...

I guess, at the time, "throwing caution to the wind" was a risk you were prepared to take (as well as accepting the consequences).

> due to lack of space. Interestingly, I have had another drive die in the
> array - and it doesn't just have one or two sectors down, it has a *lot*
> - which was not noticed by the original machine - I moved the drive to a
> byte copier, which is where it's reporting 100's of sectors damaged...
> could this be compounded by the zfs/mfi driver/HBA not picking up errors
> like it should?

Did you have regular pool scrubs enabled? It would have picked up silent data corruption like this. It does for me.

Cheers,
Paul.

From owner-freebsd-stable@freebsd.org Wed May 8 14:00:02 2019
From: Michelle Sullivan <michelle@sorbs.net>
Date: Wed, 08 May 2019 23:59:57 +1000
To: Paul Mather
Cc: freebsd-stable
Subject: Re: ZFS...
Message-id: <14ed4197-7af7-f049-2834-1ae6aa3b2ae3@sorbs.net>
Paul Mather wrote:
>> due to lack of space. Interestingly, I have had another drive die in
>> the array - and it doesn't just have one or two sectors down, it has a
>> *lot* - which was not noticed by the original machine - I moved the
>> drive to a byte copier, which is where it's reporting 100's of sectors
>> damaged... could this be compounded by the zfs/mfi driver/HBA not
>> picking up errors like it should?
>
> Did you have regular pool scrubs enabled? It would have picked up
> silent data corruption like this. It does for me.

Yes, every month (once a month because (1) the data doesn't change much (new data is added, old is not touched), and (2) because to complete it took 2 weeks.)
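For reference, a periodic scrub schedule like the monthly one described above is usually driven on FreeBSD by the stock periodic(8) scrub script. This is a minimal sketch: the pool name "tank" is a placeholder and the 30-day threshold is illustrative, not taken from the poster's setup.

```shell
# /etc/periodic.conf (sketch -- pool name and threshold are illustrative)
daily_scrub_zfs_enable="YES"            # enable the daily scrub check
daily_scrub_zfs_pools="tank"            # pools to scrub (empty = all pools)
daily_scrub_zfs_default_threshold="30"  # only start a scrub if the last one
                                        # finished at least 30 days ago
```

The check runs daily, but a scrub is only kicked off once the threshold has elapsed since the previous one, so this yields roughly monthly scrubs.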
Michelle

--
Michelle Sullivan
http://www.mhix.org/

From owner-freebsd-stable@freebsd.org Wed May 8 14:31:53 2019
From: Paul Mather <paul@gromit.dlib.vt.edu>
Date: Wed, 8 May 2019 10:31:48 -0400
To: Michelle Sullivan
Cc: freebsd-stable
Subject: Re: ZFS...
Message-Id: <453BCBAC-A992-4E7D-B2F8-959B5C33510E@gromit.dlib.vt.edu>
On May 8, 2019, at 9:59 AM, Michelle Sullivan wrote:

> Paul Mather wrote:
>>> due to lack of space. Interestingly have had another drive die in the
>>> array - and it doesn't just have one or two sectors down it has a
>>> *lot* - which was not noticed by the original machine - I moved the
>>> drive to a byte copier which is where it's reporting 100's of sectors
>>> damaged... could this be compounded by zfs/mfi driver/hba not picking
>>> up errors like it should?
>>
>> Did you have regular pool scrubs enabled? It would have picked up
>> silent data corruption like this. It does for me.
>
> Yes, every month (once a month because, (1) the data doesn't change
> much (new data is added, old is not touched), and (2) because to
> complete it took 2 weeks.)

Do you also run sysutils/smartmontools to monitor S.M.A.R.T. attributes?
Although imperfect, it can sometimes signal trouble brewing with a drive
(e.g., increasing Reallocated_Sector_Ct and Current_Pending_Sector
counts) that can lead to proactive remediation before catastrophe
strikes.

Unless you have been gathering periodic drive metrics, you have no way
of knowing whether these hundreds of bad sectors appeared suddenly or
accumulated slowly over a period of time.

Cheers,
Paul.
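For anyone wanting to spot-check the two attributes mentioned above by hand, here is a minimal sketch using smartctl(8) from sysutils/smartmontools. The device name and script name are placeholders, and a real deployment would use smartd for continuous monitoring rather than a one-off check:

```shell
#!/bin/sh
# Sketch: flag growing-defect indicators in smartctl(8) attribute output.
# Reads an "smartctl -A" attribute table on stdin, e.g.:
#   smartctl -A /dev/da0 | sh check_smart.sh
# In the standard -A table, column 2 is the attribute name and
# column 10 is the raw value.
awk '$2 == "Reallocated_Sector_Ct" || $2 == "Current_Pending_Sector" {
        if ($10 + 0 > 0)
            print "WARNING: " $2 " raw value is " $10
     }'
```

Saving the raw values from each run and comparing against the previous run is what turns this into the "periodic drive metrics" Paul describes: a jump between runs is the signal, not just a non-zero value.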
From owner-freebsd-stable@freebsd.org Wed May 8 15:14:15 2019
From: Michelle Sullivan <michelle@sorbs.net>
Date: Thu, 09 May 2019 01:14:08 +1000
Subject: Re: ZFS...
To: Paul Mather
Cc: freebsd-stable

Paul Mather wrote:
> On May 8, 2019, at 9:59 AM, Michelle Sullivan wrote:
>
>> Paul Mather wrote:
>>>> due to lack of space. Interestingly have had another drive die in
>>>> the array - and it doesn't just have one or two sectors down it has
>>>> a *lot* - which was not noticed by the original machine - I moved
>>>> the drive to a byte copier which is where it's reporting 100's of
>>>> sectors damaged... could this be compounded by zfs/mfi driver/hba
>>>> not picking up errors like it should?
>>>
>>> Did you have regular pool scrubs enabled? It would have picked up
>>> silent data corruption like this. It does for me.
>>
>> Yes, every month (once a month because, (1) the data doesn't change
>> much (new data is added, old is not touched), and (2) because to
>> complete it took 2 weeks.)
>
> Do you also run sysutils/smartmontools to monitor S.M.A.R.T.
> attributes? Although imperfect, it can sometimes signal trouble
> brewing with a drive (e.g., increasing Reallocated_Sector_Ct and
> Current_Pending_Sector counts) that can lead to proactive remediation
> before catastrophe strikes.

Not automatically.

> Unless you have been gathering periodic drive metrics, you have no way
> of knowing whether these hundreds of bad sectors have happened
> suddenly or slowly over a period of time.

No, it's something I have thought about but have been unable to spend
the time on.

--
Michelle Sullivan
http://www.mhix.org/

From owner-freebsd-stable@freebsd.org Wed May 8 15:55:53 2019
From: Walter Parker <walterp@gmail.com>
Date: Wed, 8 May 2019 08:55:37 -0700
Subject: Re: ZFS...
To: freebsd-stable@freebsd.org

> ZDB (unless I'm misreading it) is able to find all 34m+ files and
> verifies the checksums.
> The problem is in the zfs data structures (one definitely, two maybe,
> metaslabs fail checksums preventing the mounting (even read-only) of
> the volumes.)
>
>> Especially, how do you know before you recovered the data from the
>> drive?
>
> See above.
>
>> As ZFS meta data is stored redundantly on the drive and never in an
>> inconsistent form (that is what fsck does, it fixes the inconsistent
>> data that most other filesystems store when they crash/have disk
>> issues).
>
> The problem - unless I'm reading zdb incorrectly - is limited to the
> structure rather than the data. This fits with the fact the drive was
> isolated from user changes when the drive was being resilvered, so the
> data itself was not being altered .. that said, I am no expert so I
> could easily be completely wrong.

What it sounds like you need is a metadata fixer, not a file recovery
tool. Assuming the metadata can be fixed, that would be the easy route.
That should not be hard to write if everything else on the disk has no
issues. Didn't you say in another message that the system is now
returning 100's of drive errors? How does that square with the
statement "everything on the disk is fine except for a little bit of
corruption in the freespace map"?

>> I have a friend/business partner that doesn't want to move to ZFS
>> because his recovery method is to wait for a single drive
>> (no redundancy, sometimes no backup) to fail and then use ddrescue to
>> image the broken drive to a new drive (ignoring any file corruption,
>> because you can't really tell without ZFS). He's been using disk
>> rescue programs for so long that he will not move to ZFS, because it
>> doesn't have a disk rescue program.
>
> The first part is rather cavalier .. the second part I kinda
> understand... it's why I'm now looking at alternatives ... particularly
> having been bitten as badly as I have with an unmountable volume.

On the system I managed for him, we had a system with ZFS crap out. I
restored it from a backup. I continue to believe that people running
systems without backups are living on borrowed time. The idea of
relying on a disk recovery tool is too risky for my taste.

>> He has systems on Linux with ext3 and no mirroring or backups. I've
>> asked about moving them to a mirrored ZFS system and he has told me
>> that the customer doesn't want to pay for a second drive (but will
>> pay for hours of his time to fix the problem when it happens). You
>> kind of sound like him.
>
> Yeah..no! I'd be having that on a second (mirrored) drive... like most
> of my production servers.
>
>> ZFS is risky because there isn't a good drive rescue program.
>
> ZFS is good for some applications. ZFS is good to prevent cosmic ray
> issues. ZFS is not good when things go wrong. ZFS doesn't usually go
> wrong. Think that about sums it up.

When it does go wrong I restore from backups. Therefore my systems
don't have problems. I'm sorry you had the perfect trifecta that caused
you to lose multiple drives and all your backups at the same time.

>> Sun's design was that the system should be redundant by default and
>> checksum everything. If the drives fail, replace them. If they fail
>> too much or too fast, restore from backup. Once the system has too
>> much corruption, you can't recover/check for all the damage without a
>> second off-disk copy. If you have that off disk, then you have a
>> backup. They didn't build for the standard use case as found in PCs
>> because the disk recovery programs rarely get everything back,
>> therefore they can't be relied on to get your data back when your
>> data is important. Many PC owners have brought PC-mindset ideas to
>> the "UNIX" world. Sun's history predates Windows and Mac and comes
>> from a mini/mainframe mindset (where people tried not to guess about
>> data integrity).
>
> I came from the days of Sun.

Good, then you should understand Sun's point of view.

>> Would a disk rescue program for ZFS be a good idea? Sure. Should the
>> lack of a disk recovery program stop you from using ZFS? No. If you
>> think so, I suggest that you have your data integrity priorities in
>> the wrong order (focusing on small, rare events rather than the
>> common base case).
>
> Common case in your assessment in the email would suggest backups are
> not needed unless you have a rare event of a multi-drive failure.
> Which I know you're not advocating, but it is this same circular
> argument... ZFS is so good it's never wrong, we don't need no stinking
> recovery tools, oh but take backups if it does fail, but it won't
> because it's so good, and you have to be running consumer hardware or
> doing something wrong or be very unlucky with failures... etc.. round
> and round we go, wherever she'll stop no-one knows.

I advocate 2-3 backups of any important system (at least one different
from the other, offsite if one can afford it). I never said ZFS is so
good we don't need backups (that would be a stupid comment). As far as
a recovery tool goes, those sound risky. I'd prefer something without
so much risk. Make your own judgement, it is your time and data. I
think ZFS is a great filesystem that anyone using FreeBSD or illumos
should be using.

--
The greatest dangers to liberty lurk in insidious encroachment by men
of zeal, well-meaning but without understanding.
   -- Justice Louis D.
Brandeis

From owner-freebsd-stable@freebsd.org Wed May 8 16:29:36 2019
From: Karl Denninger <karl@denninger.net>
Date: Wed, 8 May 2019 11:28:57 -0500
Subject: Re: ZFS...
On 5/8/2019 10:14, Michelle Sullivan wrote:
> Paul Mather wrote:
>> On May 8, 2019, at 9:59 AM, Michelle Sullivan wrote:
>>
>>>> Did you have regular pool scrubs enabled? It would have picked up
>>>> silent data corruption like this. It does for me.
>>> Yes, every month (once a month because, (1) the data doesn't change
>>> much (new data is added, old is not touched), and (2) because to
>>> complete it took 2 weeks.)
>>
>> Do you also run sysutils/smartmontools to monitor S.M.A.R.T.
>> attributes? Although imperfect, it can sometimes signal trouble
>> brewing with a drive (e.g., increasing Reallocated_Sector_Ct and
>> Current_Pending_Sector counts) that can lead to proactive remediation
>> before catastrophe strikes.
> Not automatically.
>>
>> Unless you have been gathering periodic drive metrics, you have no
>> way of knowing whether these hundreds of bad sectors have happened
>> suddenly or slowly over a period of time.
> No, it's something I have thought about but been unable to spend the
> time on.

There are two issues here that would concern me greatly and IMHO you
should address.

I have a system here with about the same amount of net storage on it as
you did. It runs scrubs regularly; none of them take more than 8 hours
on *any* of the pools. The SSD-based pool is of course *much* faster,
but even the many-way RaidZ2 on spinning rust is an ~8 hour deal; it
kicks off automatically at 2:00 AM when the time comes and is complete
before noon. I run them on 14 day intervals.
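On FreeBSD, the kind of automatic scrub schedule described above can be driven by the stock periodic(8) machinery instead of a hand-rolled cron job. A minimal sketch for /etc/periodic.conf follows; the pool name is a placeholder and the 14-day threshold simply mirrors the interval mentioned above:

```shell
# /etc/periodic.conf (sketch -- "storage" is a hypothetical pool name)
# The daily periodic(8) run starts a scrub of each listed pool once its
# previous scrub is at least daily_scrub_zfs_default_threshold days old.
daily_scrub_zfs_enable="YES"
daily_scrub_zfs_pools="storage"
daily_scrub_zfs_default_threshold="14"
```

With this in place the scrub kicks off during the nightly periodic run (3:01 AM in the default /etc/crontab), and the result is summarized in the daily periodic mail.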
If you have pool(s) that are taking *two weeks* to run a scrub, IMHO
either something is badly wrong or you need to rethink the organization
of the pool structure -- that is, IMHO you likely either have a severe
performance problem with one or more members or an architectural
problem you *really* need to determine and fix. If a scrub takes two
weeks *then a resilver could conceivably take that long as well*, and
that's *extremely* bad, as the window for getting screwed is at its
worst while a resilver is being run.

Second, smartmontools/smartd isn't the be-all, end-all, but it *does*
sometimes catch incipient problems with specific units before they turn
into all-out death, and IMHO in any installation of any material size
where one cares about the data (as opposed to "if it fails just restore
it from backup") it should be running. It's very easy to set up and
there are no real downsides to using it. I have one disk that I rotate
in and out that was bought as a "refurb" and has 70 permanently
relocated sectors on it. It has never grown another one since I
acquired it, but every time it goes in the machine, within minutes I
get an alert on that. If I were ever to get *71*, or a *different*
drive grew a new one, said drive would get replaced *instantly*. Over
the years it has flagged two disks before they "hard failed", and both
were immediately taken out of service, replaced, and then destroyed and
thrown away. Maybe that's me being paranoid, but IMHO it's the correct
approach to such notifications.
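The smartd setup described above needs only a line or two of configuration. A minimal sketch (the mail address is a placeholder; when installed from the sysutils/smartmontools port the file lives at /usr/local/etc/smartd.conf):

```
# smartd.conf (sketch -- mail address is a placeholder)
# DEVICESCAN: monitor every device smartd can find.
# -a: track all SMART attributes and log changes.
# -m: mail warnings to the given address.
# -M test: send a test mail at startup so you know alerting works.
DEVICESCAN -a -m admin@example.org -M test
```

Enable it with `smartd_enable="YES"` in /etc/rc.conf; any newly reallocated or pending sector then generates a mail within one polling interval, which is exactly the "alert within minutes" behaviour described above.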
BTW that tool will *also* tell you if something else software-wise is
going on that you *might* think is drive-related. For example, recently
here on the list I ran into a really oddball thing happening with SAS
expanders that showed up with 12-STABLE and was *not* present in the
same box with 11.1. Smartmontools confirmed that while the driver was
reporting errors from the disks, *the disks themselves were not in fact
taking errors.* Had I not had that information I might well have
traveled down a road that led to a catastrophic pool failure by
attempting to replace disks that weren't actually bad. The SAS expander
wound up being taken out of service and replaced with an HBA that has
more ports -- the issues disappeared.

Finally, while you *think* you only have a metadata problem, I'm with
the other people here in expressing disbelief that the damage is
limited to that. There is enough redundancy in the metadata on ZFS that
if *all* copies are destroyed or inconsistent to the degree that
they're unusable, it's extremely likely that if you do get some sort of
"disaster recovery" tool working you're going to find out that what you
thought was a metadata problem is really a "you're hosed; the data is
also gone" sort of problem.
--
Karl Denninger
karl@denninger.net
/The Market Ticker/
/[S/MIME encrypted email preferred]/
TEMxGDAWBgNVBAsMD0N1ZGEgU3lzdGVtcyBDQTElMCMGA1UEAwwcQ3VkYSBTeXN0ZW1zIExM QyAyMDE3IEludCBDQQITAKDRa9XB6SMYWI7rZJyQXB6nPTCBpQYLKoZIhvcNAQkQAgsxgZWg gZIwezELMAkGA1UEBhMCVVMxEDAOBgNVBAgMB0Zsb3JpZGExGTAXBgNVBAoMEEN1ZGEgU3lz dGVtcyBMTEMxGDAWBgNVBAsMD0N1ZGEgU3lzdGVtcyBDQTElMCMGA1UEAwwcQ3VkYSBTeXN0 ZW1zIExMQyAyMDE3IEludCBDQQITAKDRa9XB6SMYWI7rZJyQXB6nPTANBgkqhkiG9w0BAQEF AASCAgBmARnWdtPBpqibOi9gGTyJEtf3qzIvlFp+u5f/avUs1UYJxXV4jQb0lSjvOZyXmluC UDkf820S5QqRCZ5DoAU/4VmSsbYZ1tR0Ni6sVBtfwr8TnbzaDm1xsTEV+/i20zDTsc74Eni7 ZdnOs1v7nsPzLAk3vW2uNZAm4AV0LmJWJRhNojgL87yxqvWav9pPMYOHbYhpI8A0WGHTMXzZ J0nh1s1gs2YlO1CrpE8Q7eU526t2Gosvh6taRKvmIIj3RnbkD39ARm0/ihgL3jFy4Z8gPtNw wlCBBYAA2Hk8pdR8rIv7nme6wN0V7Qlx3WXx1jlkXlDHKiDCuon8aK4PkVVgHyfTT8QaB7FR PWF/Qfpx4uVjKexSjpoDjDkgIZdyY7a9rDHENT8BIYRYn0BCQLGKuxLnquNbN/3Y7nYZyybs ps7v+cASDusq3yAaEHGsAUTseCFzPNbBk31Px35qUGaGvF7Eigfm/v047bBSGEBhkbbx0TlV 9zOBNyHiCQ4Eb6AkTNKwHNe6cE+0GKlDpaScU/PD+jS0lkgPMxc6NPPvIpoIVjol3r23X6NS vFqyjAWtFrj4PeSFkxBJoXksUxqECpZAIPNMIHppSbdEipbCQ9W+OioUr7ImAn+J2iRy2Yw+ eFOAzgxbUuNikLlV0h0YFLOY+s6B1x55AcK2cJ+lMAAAAAAAAA== --------------ms050606090205020207040608-- From owner-freebsd-stable@freebsd.org Wed May 8 16:32:01 2019 Return-Path: Delivered-To: freebsd-stable@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2610:1c1:1:606c::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 5F4DA158CC28 for ; Wed, 8 May 2019 16:32:01 +0000 (UTC) (envelope-from wfc@mintsol.com) Received: from scully.mintsol.com (scully.mintsol.com [199.182.77.206]) by mx1.freebsd.org (Postfix) with ESMTP id 7F4768459A for ; Wed, 8 May 2019 16:32:00 +0000 (UTC) (envelope-from wfc@mintsol.com) Received: from mintsol.com (officecc.mintsol.com [96.85.114.33]) by scully.mintsol.com with esmtp; Wed, 08 May 2019 12:31:54 -0400 id 00ACDC53.000000005CD3047A.000074DC Received: from localhost (localhost [127.0.0.1]) (IDENT: uid 1002) by mintsol.com with esmtp; Wed, 08 May 2019 12:31:54 -0400 id 00000839.5CD3047A.000105F4 Date: Wed, 
8 May 2019 12:31:54 -0400 (EDT) From: Walter Cramer To: Paul Mather cc: Michelle Sullivan , freebsd-stable Subject: Re: ZFS... In-Reply-To: <453BCBAC-A992-4E7D-B2F8-959B5C33510E@gromit.dlib.vt.edu> Message-ID: <20190508104026.C58567@mulder.mintsol.com> References: <30506b3d-64fb-b327-94ae-d9da522f3a48@sorbs.net> <70fac2fe3f23f85dd442d93ffea368e1@ultra-secure.de> <70C87D93-D1F9-458E-9723-19F9777E6F12@sorbs.net> <5ED8BADE-7B2C-4B73-93BC-70739911C5E3@sorbs.net> <2e4941bf-999a-7f16-f4fe-1a520f2187c0@sorbs.net> <20190430102024.E84286@mulder.mintsol.com> <41FA461B-40AE-4D34-B280-214B5C5868B5@punkt.de> <20190506080804.Y87441@mulder.mintsol.com> <08E46EBF-154F-4670-B411-482DCE6F395D@sorbs.net> <33D7EFC4-5C15-4FE0-970B-E6034EF80BEF@gromit.dlib.vt.edu> <26B407D8-3EED-47CA-81F6-A706CF424567@gromit.dlib.vt.edu> <42ba468a-2f87-453c-0c54-32edc98e83b8@sorbs.net> <4A485B46-1C3F-4EE0-8193-ADEB88F322E8@gromit.dlib.vt.edu> <14ed4197-7af7-f049-2834-1ae6aa3b2ae3@sorbs.net> <453BCBAC-A992-4E7D-B2F8-959B5C33510E@gromit.dlib.vt.edu> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit X-Rspamd-Queue-Id: 7F4768459A X-Spamd-Bar: ---- Authentication-Results: mx1.freebsd.org; spf=pass (mx1.freebsd.org: domain of wfc@mintsol.com designates 199.182.77.206 as permitted sender) smtp.mailfrom=wfc@mintsol.com X-Spamd-Result: default: False [-4.79 / 15.00]; ARC_NA(0.00)[]; NEURAL_HAM_MEDIUM(-1.00)[-1.000,0]; FROM_HAS_DN(0.00)[]; RCPT_COUNT_THREE(0.00)[3]; R_SPF_ALLOW(-0.20)[+a:scully.mintsol.com]; MV_CASE(0.50)[]; MIME_GOOD(-0.10)[text/plain]; DMARC_NA(0.00)[mintsol.com]; NEURAL_HAM_LONG(-1.00)[-1.000,0]; RCVD_COUNT_THREE(0.00)[3]; TO_MATCH_ENVRCPT_SOME(0.00)[]; TO_DN_ALL(0.00)[]; MX_GOOD(-0.01)[bmx01.pofox.com]; NEURAL_HAM_SHORT(-0.95)[-0.953,0]; SUBJ_ALL_CAPS(0.45)[6]; RCVD_NO_TLS_LAST(0.10)[]; FROM_EQ_ENVFROM(0.00)[]; R_DKIM_NA(0.00)[]; MIME_TRACE(0.00)[0:+]; ASN(0.00)[asn:22768, ipnet:199.182.77.0/24, country:US]; IP_SCORE(-2.57)[ip: 
(-6.74), ipnet: 199.182.77.0/24(-3.37), asn: 22768(-2.70), country: US(-0.06)] X-BeenThere: freebsd-stable@freebsd.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Production branch of FreeBSD source code List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 08 May 2019 16:32:01 -0000 On Wed, 8 May 2019, Paul Mather wrote: > On May 8, 2019, at 9:59 AM, Michelle Sullivan wrote: > >> Paul Mather wrote: >>>> due to lack of space. Interestingly have had another drive die in the >>>> array - and it doesn't just have one or two sectors down it has a *lot* - >>>> which was not noticed by the original machine - I moved the drive to a >>>> byte copier which is where it's reporting 100's of sectors damaged... >>>> could this be compounded by zfs/mfi driver/hba not picking up errors like >>>> it should? >>> >>> >>> Did you have regular pool scrubs enabled? It would have picked up silent >>> data corruption like this. It does for me. >> Yes, every month (once a month because, (1) the data doesn't change much >> (new data is added, old it not touched), and (2) because to complete it >> took 2 weeks.) > > > Do you also run sysutils/smartmontools to monitor S.M.A.R.T. attributes? > Although imperfect, it can sometimes signal trouble brewing with a drive > (e.g., increasing Reallocated_Sector_Ct and Current_Pending_Sector counts) > that can lead to proactive remediation before catastrophe strikes. > > Unless you have been gathering periodic drive metrics, you have no way of > knowing whether these hundreds of bad sectors have happened suddenly or > slowly over a period of time. > +1 Use `smartctl` from a cron script to do regular (say, weekly) *long* self-tests of hard drives, and also log (say, daily) all the SMART information from each drive. Then if a drive fails, you can at least check the logs for whether SMART noticed symptoms, and (if so) for other drives with symptoms. 
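A minimal sketch of the log-checking half of that cron setup (the two-attribute watch-list and the output format are illustrative assumptions, not a complete monitoring script):

```shell
#!/bin/sh
# check_smart_attrs: read `smartctl -A` output on stdin and print any
# attribute from a small watch-list whose raw value is non-zero.
check_smart_attrs() {
    awk '$2 == "Reallocated_Sector_Ct" || $2 == "Current_Pending_Sector" {
        if ($10 + 0 > 0) printf "%s = %s\n", $2, $10
    }'
}

# Hypothetical daily cron use (device name and mail target are assumptions;
# drives behind an mfi(4)-style controller may need smartctl's -d option):
#   smartctl -A /dev/ada0 | check_smart_attrs | mail -E -s "SMART warning: ada0" root
```

Pair it with a weekly `smartctl -t long /dev/ada0` entry for the self-tests themselves.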
Or enhance this with a slightly longer script, which watches the logs for symptoms and alerts you. (My experience is that SMART's *long* self-test checks the entire disk for read errors, with neither downside of `zpool scrub` - it does a fast, sequential read of the drive, including free space. That makes it a nice test for failing disk hardware, though not a replacement for `zpool scrub`.) > Cheers, > > Paul. > _______________________________________________ > freebsd-stable@freebsd.org mailing list > https://lists.freebsd.org/mailman/listinfo/freebsd-stable > To unsubscribe, send any mail to "freebsd-stable-unsubscribe@freebsd.org" From owner-freebsd-stable@freebsd.org Wed May 8 16:54:07 2019 Return-Path: Delivered-To: freebsd-stable@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2610:1c1:1:606c::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 0F64C158D62F for ; Wed, 8 May 2019 16:54:07 +0000 (UTC) (envelope-from fjwcash@gmail.com) Received: from mail-lf1-x135.google.com (mail-lf1-x135.google.com [IPv6:2a00:1450:4864:20::135]) (using TLSv1.3 with cipher TLS_AES_128_GCM_SHA256 (128/128 bits) server-signature RSA-PSS (4096 bits) client-signature RSA-PSS (2048 bits) client-digest SHA256) (Client CN "smtp.gmail.com", Issuer "GTS CA 1O1" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 1048085289 for ; Wed, 8 May 2019 16:54:06 +0000 (UTC) (envelope-from fjwcash@gmail.com) Received: by mail-lf1-x135.google.com with SMTP id u27so14792394lfg.10 for ; Wed, 08 May 2019 09:54:06 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=mime-version:references:in-reply-to:from:date:message-id:subject:to; bh=5wcIHI/N2jIUsnS/FDSo6sCjqYtyGk2n8THPYIaTXmw=; b=uFF3xKaaA8lKyPOkldtyMZ/CEz9llrLMBa1LoI7ku04UrMwEONFhJgGUXU59TbWv5a AEM8lI77lLdzTNWkBzA1qnyWxIlAiRSTkpq8Sl+r9xY6AGZZ65rCsyoctA/Qf8zYYkO0 YYmp2KjTeJvFacRUtspdyrmS4FtDq7FDTIl7PZ3CLW7+boWb8qy9FDCpZR9ce4aCAwjS
my2xbbond/HxRKxoufxu+YQ51LamUUBpNyX1W0Iu2n9wTKzTUaCsD6JusrV+KPZA0lEX p6QK+OObb5UHuBMocrti2FKByiAe7bkZX/VRYyF1Sjdufzpi90yekoyyOXryCduZ4Z20 9X4w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:mime-version:references:in-reply-to:from:date :message-id:subject:to; bh=5wcIHI/N2jIUsnS/FDSo6sCjqYtyGk2n8THPYIaTXmw=; b=FAg1iaoBVTtMgF/3yJ4pg0MAWqx9FRU4FsD3bl8N7okrdKpqd2eW52kwRAqO4iYTlo ncnNo1MWGE+tZSDliAIjieNqsSNidaVwrV9gRx1nkcz/bWoAIWhf3jrh3HofdCN73QZ3 mRBDqrTQo206LZgZCw2l4VQcXCZK1tS+BdAXEEBJGuOQct5PR+bMWbyUAL52ffuLxVnd 2YfjLIBhwvb3p12Z/gE7PXFRflBpKYfqx2xaRZRrGW4zX+XbcdgMSfhqdvrGC7WJiAB/ j6jX3bl/EnrNEtnfNTefTf9Ic1zBsWSJ9Vsb0itApmRncrChMdWN3wNdwcT9Kc6ZVb9Y f4ng== X-Gm-Message-State: APjAAAU47dLGxisjkkS90SNJCFdoVuQ7VgsDgskRbjSYlAk9WGAgZBLP K7zKm1IzL9EBdTVJOPpaQfNvQn3kW4f+cu+oZ1YvvQ== X-Google-Smtp-Source: APXvYqzfCdI9fgqCo98ZqWU0drro/3sttnKHdlScabAquFg93iXOqaun+O4vzBzMxWNPksvxZObFRzH1p1xi+aqZFRQ= X-Received: by 2002:ac2:554c:: with SMTP id l12mr14163590lfk.111.1557334442688; Wed, 08 May 2019 09:54:02 -0700 (PDT) MIME-Version: 1.0 References: <30506b3d-64fb-b327-94ae-d9da522f3a48@sorbs.net> <70fac2fe3f23f85dd442d93ffea368e1@ultra-secure.de> <70C87D93-D1F9-458E-9723-19F9777E6F12@sorbs.net> <5ED8BADE-7B2C-4B73-93BC-70739911C5E3@sorbs.net> <2e4941bf-999a-7f16-f4fe-1a520f2187c0@sorbs.net> <20190430102024.E84286@mulder.mintsol.com> <41FA461B-40AE-4D34-B280-214B5C5868B5@punkt.de> <20190506080804.Y87441@mulder.mintsol.com> <08E46EBF-154F-4670-B411-482DCE6F395D@sorbs.net> <33D7EFC4-5C15-4FE0-970B-E6034EF80BEF@gromit.dlib.vt.edu> <26B407D8-3EED-47CA-81F6-A706CF424567@gromit.dlib.vt.edu> <42ba468a-2f87-453c-0c54-32edc98e83b8@sorbs.net> <4A485B46-1C3F-4EE0-8193-ADEB88F322E8@gromit.dlib.vt.edu> <14ed4197-7af7-f049-2834-1ae6aa3b2ae3@sorbs.net> <453BCBAC-A992-4E7D-B2F8-959B5C33510E@gromit.dlib.vt.edu> <92330c95-7348-c5a2-9c13-f4cbc99bc649@sorbs.net> In-Reply-To: From: Freddie Cash Date: Wed, 8 May 2019 09:53:49 -0700 Message-ID: 
Subject: Re: ZFS... To: FreeBSD Stable X-Rspamd-Queue-Id: 1048085289 X-Spamd-Bar: ------ Authentication-Results: mx1.freebsd.org; dkim=pass header.d=gmail.com header.s=20161025 header.b=uFF3xKaa; dmarc=pass (policy=none) header.from=gmail.com; spf=pass (mx1.freebsd.org: domain of fjwcash@gmail.com designates 2a00:1450:4864:20::135 as permitted sender) smtp.mailfrom=fjwcash@gmail.com X-Spamd-Result: default: False [-6.32 / 15.00]; R_SPF_ALLOW(-0.20)[+ip6:2a00:1450:4000::/36]; FREEMAIL_FROM(0.00)[gmail.com]; TO_DN_ALL(0.00)[]; DKIM_TRACE(0.00)[gmail.com:+]; DMARC_POLICY_ALLOW(-0.50)[gmail.com,none]; SUBJ_ALL_CAPS(0.45)[6]; MX_GOOD(-0.01)[cached: alt3.gmail-smtp-in.l.google.com]; FROM_EQ_ENVFROM(0.00)[]; RCVD_TLS_LAST(0.00)[]; MIME_TRACE(0.00)[0:+,1:+]; FREEMAIL_ENVFROM(0.00)[gmail.com]; ASN(0.00)[asn:15169, ipnet:2a00:1450::/32, country:US]; DWL_DNSWL_NONE(0.00)[gmail.com.dwl.dnswl.org : 127.0.5.0]; ARC_NA(0.00)[]; NEURAL_HAM_MEDIUM(-1.00)[-1.000,0]; R_DKIM_ALLOW(-0.20)[gmail.com:s=20161025]; FROM_HAS_DN(0.00)[]; TO_MATCH_ENVRCPT_ALL(0.00)[]; NEURAL_HAM_LONG(-1.00)[-1.000,0]; MIME_GOOD(-0.10)[multipart/alternative,text/plain]; PREVIOUSLY_DELIVERED(0.00)[freebsd-stable@freebsd.org]; RCPT_COUNT_ONE(0.00)[1]; NEURAL_HAM_SHORT(-0.94)[-0.943,0]; IP_SCORE(-2.81)[ip: (-9.53), ipnet: 2a00:1450::/32(-2.23), asn: 15169(-2.25), country: US(-0.06)]; RCVD_IN_DNSWL_NONE(0.00)[5.3.1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.2.0.0.4.6.8.4.0.5.4.1.0.0.a.2.list.dnswl.org : 127.0.5.0]; RCVD_COUNT_TWO(0.00)[2] Content-Type: text/plain; charset="UTF-8" X-Content-Filtered-By: Mailman/MimeDel 2.1.29 X-BeenThere: freebsd-stable@freebsd.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Production branch of FreeBSD source code List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 08 May 2019 16:54:07 -0000 On Wed, May 8, 2019 at 9:31 AM Karl Denninger wrote: > I have a system here with about the same amount of net storage on it as > you did. 
It runs scrubs regularly; none of them take more than 8 hours > on *any* of the pools. The SSD-based pool is of course *much* faster > but even the many-way RaidZ2 on spinning rust is an ~8 hour deal; it > kicks off automatically at 2:00 AM when the time comes but is complete > before noon. I run them on 14 day intervals. > Damn, I wish our scrubs took 8 hours. :) Storage pool 1: 90 drives in 6-disk raidz2 vdevs (mix of 2 TB and 4 TB SATA). 45 hours to scrub. Storage pool 2: 90 drives in 6-disk raidz2 vdevs (mix of 2 TB and 4 TB SATA). 33 hours to scrub. Storage pool 3: 24 drives in 6-disk raidz2 vdevs (mix of 2 TB and 4 TB SATA). 134 hours to scrub. Storage pool 4: 24 drives in 6-disk raidz2 vdevs (mix of 1 TB, 2 TB, 4 TB SATA). Dedupe enabled. 256 hours to scrub. Storage pool 5: 90 drives in 6-disk raidz2 vdevs (mix of 2 TB and 4 TB SATA). Dedupe enabled. Takes about 6 weeks to resilver a drive, and it's constantly resilvering drives these days as it's the oldest pool, and all the drives are dying. :D Pools 1, 3, and 4 are in DC1. Pools 2 and 5 are in DC2 across town. Pool 1 sends snapshots to pool 2. Pools 3 and 4 send snapshots to pool 5. These pools are highly fragmented. :) > If you have pool(s) that are taking *two weeks* to run a scrub IMHO > either something is badly wrong or you need to rethink organization of > the pool structure -- that is, IMHO you likely either have a severe > performance problem with one or more members or an architectural problem > you *really* need to determine and fix. If a scrub takes two weeks > *then a resilver could conceivably take that long as well* and that's > *extremely* bad as the window for getting screwed is at its worst when a > resilver is being run. > Thankfully, ours are strictly storage for backups of other systems, so as long as the nightly backups complete successfully before 6 am, we're not worried about performance. :) And we do have plans to replace pools 2 and 5 to remove dedupe from the equation. 
There's not a lot we can do about the fragmentation issue, as these servers all run rsync backups from 200-odd other servers, and remove the oldest snapshot every night. So, while a 2-week scrub may be horrible, it all depends on the use-case. If these were direct storage systems for in-production servers, then I'd be worried. But as redundant backup systems (3 copies of everything, in 3 separate locations around the city), I'm not too worried. Yet. :D -- Freddie Cash fjwcash@gmail.com From owner-freebsd-stable@freebsd.org Wed May 8 17:04:42 2019 Return-Path: Delivered-To: freebsd-stable@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2610:1c1:1:606c::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 7CC9A158DE67 for ; Wed, 8 May 2019 17:04:42 +0000 (UTC) (envelope-from karl@denninger.net) Received: from colo1.denninger.net (colo1.denninger.net [104.236.120.189]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id B35FE85A14 for ; Wed, 8 May 2019 17:04:41 +0000 (UTC) (envelope-from karl@denninger.net) Received: from denninger.net (ip68-1-57-197.pn.at.cox.net [68.1.57.197]) by colo1.denninger.net (Postfix) with ESMTP id 4EAC52110AD for ; Wed, 8 May 2019 13:04:40 -0400 (EDT) Received: from [192.168.10.24] (D14.Denninger.Net [192.168.10.24]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by denninger.net (Postfix) with ESMTPSA id 9014BF1478 for ; Wed, 8 May 2019 12:04:39 -0500 (CDT) Subject: Re: ZFS... 
To: freebsd-stable@freebsd.org References: <30506b3d-64fb-b327-94ae-d9da522f3a48@sorbs.net> <5ED8BADE-7B2C-4B73-93BC-70739911C5E3@sorbs.net> <2e4941bf-999a-7f16-f4fe-1a520f2187c0@sorbs.net> <20190430102024.E84286@mulder.mintsol.com> <41FA461B-40AE-4D34-B280-214B5C5868B5@punkt.de> <20190506080804.Y87441@mulder.mintsol.com> <08E46EBF-154F-4670-B411-482DCE6F395D@sorbs.net> <33D7EFC4-5C15-4FE0-970B-E6034EF80BEF@gromit.dlib.vt.edu> <26B407D8-3EED-47CA-81F6-A706CF424567@gromit.dlib.vt.edu> <42ba468a-2f87-453c-0c54-32edc98e83b8@sorbs.net> <4A485B46-1C3F-4EE0-8193-ADEB88F322E8@gromit.dlib.vt.edu> <14ed4197-7af7-f049-2834-1ae6aa3b2ae3@sorbs.net> <453BCBAC-A992-4E7D-B2F8-959B5C33510E@gromit.dlib.vt.edu> <92330c95-7348-c5a2-9c13-f4cbc99bc649@sorbs.net> From: Karl Denninger Openpgp: preference=signencrypt Autocrypt: addr=karl@denninger.net; prefer-encrypt=mutual; keydata= mQINBFIX1zsBEADRcJfsQUl9oFeoMfLPJ1kql+3sIaYx0MfJAUhV9LnbWxr0fsWCskM1O4cV tHm5dqPkuPM4Ztc0jLotD1i9ubWvCHOlkLGxFOL+pFbjA+XZ7VKsC/xWmhMwJ3cM8HavK2OV SzEWQ/AEYtMi04IzGSwsxh/5/5R0mPHrsIomV5SbuiI0vjLuDj7fo6146AABI1ULzge4hBYW i/SHrqUrLORmUNBs6bxek79/B0Dzk5cIktD3LOfbT9EAa5J/osVkstMBhToJgQttaMIGv8SG CzpR/HwEokE+7DP+k2mLHnLj6H3kfugOF9pJH8Za4yFmw//s9cPXV8WwtZ2SKfVzn1unpKqf wmJ1PwJoom/d4fGvQDkgkGKRa6RGC6tPmXnqnx+YX4iCOdFfbP8L9rmk2sewDDVzHDU3I3ZZ 8hFIjMYM/QXXYszRatK0LCV0QPZuF7LCf4uQVKw1/oyJInsnH7+6a3c0h21x+CmSja9QJ+y0 yzgEN/nM89d6YTakfR+1xkYgodVmMy/bS8kmXbUUZG/CyeqCqc95RUySjKT2ECrf9GhhoQkl +D8n2MsrAUSMGB4GQSN+TIq9OBTpNuvATGSRuF9wnQcs1iSry+JNCpfRTyWp83uCNApe6oHU EET4Et6KDO3AvjvBMAX0TInTRGW2SQlJMuFKpc7Dg7tHK8zzqQARAQABtCNLYXJsIERlbm5p bmdlciA8a2FybEBkZW5uaW5nZXIubmV0PokCPAQTAQIAJgUCUhfXOwIbIwUJCWYBgAYLCQgH AwIEFQIIAwQWAgMBAh4BAheAAAoJEG6/sivc5s0PLxQP/i6x/QFx9G4Cw7C+LthhLXIm7NSH AtNbz2UjySEx2qkoQQjtsK6mcpEEaky4ky6t8gz0/SifIfJmSmyAx0UhUQ0WBv1vAXwtNrQQ jJd9Bj6l4c2083WaXyHPjt2u2Na6YFowyb4SaQb83hu/Zs25vkPQYJVVE0JX409MFVPUa6E3 zFbd1OTr3T4yNUy4gNeQZfzDqDS8slbIks2sXeoJrZ6qqXVI0ionoivOlaN4T6Q0UYyXtigj 
dQvvhMt0aNowKFjRqrmSDRpdz+o6yg7Mp7qEZ1V6EZk8KqQTH6htpCTQ8i79ttK4LG6bstSF Re6Fwq52nbrcANrcdmtZXqjo+SGbUqJ8b1ggrxAsJ5MEhRh2peKrCgI/TjQo+ZxfnqEoR4AI 46Cyiz+/lcVvlvmf2iPifS3EEdaH3Itfwt7MxFm6mQORYs6skHDw3tOYB2/AdCW6eRVYs2hB RMAG4uwApZfZDKgRoE95PJmQjeTBiGmRPcsQZtNESe7I7EjHtCDLwtJqvD4HkDDQwpzreT6W XkyIJ7ns7zDfA1E+AQhFR6rsTFGgQZRZKsVeov3SbhYKkCnVDCvb/PKQCAGkSZM9SvYG5Yax 8CMry3AefKktf9fqBFg8pWqtVxDwJr56dhi0GHXRu3jVI995rMGo1fLUG5fSxiZ8L5sAtokh 9WFmQpyl Message-ID: <4a3b65e1-f3a6-58cf-3de3-bdd11cf30b02@denninger.net> Date: Wed, 8 May 2019 12:04:39 -0500 User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:60.0) Gecko/20100101 Thunderbird/60.6.1 MIME-Version: 1.0 In-Reply-To: Content-Type: multipart/signed; protocol="application/pkcs7-signature"; micalg=sha-512; boundary="------------ms000207060908040101050000" X-Rspamd-Queue-Id: B35FE85A14 X-Spamd-Bar: ------ Authentication-Results: mx1.freebsd.org X-Spamd-Result: default: False [-6.23 / 15.00]; RCVD_VIA_SMTP_AUTH(0.00)[]; HAS_ATTACHMENT(0.00)[]; TO_DN_NONE(0.00)[]; RCVD_COUNT_THREE(0.00)[3]; MX_GOOD(-0.01)[cached: px.denninger.net]; NEURAL_HAM_SHORT(-0.93)[-0.930,0]; SUBJ_ALL_CAPS(0.45)[6]; FROM_EQ_ENVFROM(0.00)[]; IP_SCORE(-2.54)[ip: (-9.88), ipnet: 104.236.64.0/18(-4.19), asn: 14061(1.42), country: US(-0.06)]; R_DKIM_NA(0.00)[]; ASN(0.00)[asn:14061, ipnet:104.236.64.0/18, country:US]; MIME_TRACE(0.00)[0:+,1:+,2:+]; MID_RHS_MATCH_FROM(0.00)[]; RECEIVED_SPAMHAUS_PBL(0.00)[197.57.1.68.zen.spamhaus.org : 127.0.0.11]; ARC_NA(0.00)[]; NEURAL_HAM_MEDIUM(-1.00)[-1.000,0]; FROM_HAS_DN(0.00)[]; SIGNED_SMIME(-2.00)[]; TO_MATCH_ENVRCPT_ALL(0.00)[]; NEURAL_HAM_LONG(-1.00)[-1.000,0]; MIME_GOOD(-0.20)[multipart/signed,multipart/alternative,text/plain]; RCVD_TLS_LAST(0.00)[]; PREVIOUSLY_DELIVERED(0.00)[freebsd-stable@freebsd.org]; AUTH_NA(1.00)[]; RCPT_COUNT_ONE(0.00)[1]; DMARC_NA(0.00)[denninger.net]; R_SPF_NA(0.00)[] X-Content-Filtered-By: Mailman/MimeDel 2.1.29 X-BeenThere: freebsd-stable@freebsd.org X-Mailman-Version: 2.1.29 
Precedence: list List-Id: Production branch of FreeBSD source code List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 08 May 2019 17:04:42 -0000 This is a cryptographically signed message in MIME format. --------------ms000207060908040101050000 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: quoted-printable On 5/8/2019 11:53, Freddie Cash wrote: > On Wed, May 8, 2019 at 9:31 AM Karl Denninger wrote: > >> I have a system here with about the same amount of net storage on it as >> you did. It runs scrubs regularly; none of them take more than 8 hours >> on *any* of the pools. The SSD-based pool is of course *much* faster >> but even the many-way RaidZ2 on spinning rust is an ~8 hour deal; it >> kicks off automatically at 2:00 AM when the time comes but is complete >> before noon. I run them on 14 day intervals. >> > .... (description elided) That is a /lot/ bigger pool than either Michelle or I are describing. We're both in the ~20Tb of storage space area. You're running 5-10x that in usable space in some of these pools and yet seeing ~2 day scrub times on a couple of them (that is, the organization looks pretty reasonable given the size and so is the scrub time), one that's ~5 days and likely has some issues with parallelism and fragmentation, and then, well, two awfuls which are both dedup-enabled.
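(For reference, FreeBSD can drive that kind of automatic, interval-based scrub from periodic(8) rather than a hand-rolled cron job; a sketch for /etc/periodic.conf, where the 14-day threshold mirrors the interval described above and the pool name is hypothetical:)

```shell
# /etc/periodic.conf -- have the daily periodic(8) run start a scrub on
# any pool whose last scrub is older than the threshold (in days).
daily_scrub_zfs_enable="YES"
daily_scrub_zfs_default_threshold="14"
# Optional per-pool override (pool name "tank" is hypothetical):
#daily_scrub_zfs_tank_threshold="14"
```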
-- Karl Denninger karl@denninger.net /The Market Ticker/ /[S/MIME encrypted email preferred]/ --------------ms000207060908040101050000 Content-Type: application/pkcs7-signature; name="smime.p7s" Content-Transfer-Encoding: base64 Content-Disposition: attachment; filename="smime.p7s" Content-Description: S/MIME Cryptographic Signature --------------ms000207060908040101050000-- From owner-freebsd-stable@freebsd.org Wed May 8 22:45:35 2019 Return-Path: Delivered-To: freebsd-stable@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2610:1c1:1:606c::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 5EF0015954A4 for ; Wed, 8 May 2019 22:45:35 +0000 (UTC) (envelope-from michelle@sorbs.net) Received: from hades.sorbs.net (hades.sorbs.net [72.12.213.40]) by mx1.freebsd.org (Postfix) with ESMTP id 2731F6AD28 for ; Wed, 8 May 2019 22:45:33 +0000 (UTC) (envelope-from michelle@sorbs.net) MIME-version: 1.0 Content-transfer-encoding: 7BIT Content-type: text/plain; CHARSET=US-ASCII Received: from [10.10.0.230] (gate.mhix.org [203.206.128.220]) by hades.sorbs.net (Oracle Communications Messaging Server 7.0.5.29.0 64bit (built Jul 9 2013)) with ESMTPSA id <0PR70016PJV5ZI70@hades.sorbs.net> for freebsd-stable@freebsd.org;
Wed, 08 May 2019 15:59:32 -0700 (PDT) Subject: Re: ZFS... From: Michelle Sullivan X-Mailer: iPad Mail (16A404) In-reply-to: <4a3b65e1-f3a6-58cf-3de3-bdd11cf30b02@denninger.net> Date: Thu, 09 May 2019 08:45:28 +1000 Cc: freebsd-stable@freebsd.org Message-id: References: <30506b3d-64fb-b327-94ae-d9da522f3a48@sorbs.net> <5ED8BADE-7B2C-4B73-93BC-70739911C5E3@sorbs.net> <2e4941bf-999a-7f16-f4fe-1a520f2187c0@sorbs.net> <20190430102024.E84286@mulder.mintsol.com> <41FA461B-40AE-4D34-B280-214B5C5868B5@punkt.de> <20190506080804.Y87441@mulder.mintsol.com> <08E46EBF-154F-4670-B411-482DCE6F395D@sorbs.net> <33D7EFC4-5C15-4FE0-970B-E6034EF80BEF@gromit.dlib.vt.edu> <26B407D8-3EED-47CA-81F6-A706CF424567@gromit.dlib.vt.edu> <42ba468a-2f87-453c-0c54-32edc98e83b8@sorbs.net> <4A485B46-1C3F-4EE0-8193-ADEB88F322E8@gromit.dlib.vt.edu> <14ed4197-7af7-f049-2834-1ae6aa3b2ae3@sorbs.net> <453BCBAC-A992-4E7D-B2F8-959B5C33510E@gromit.dlib.vt.edu> <92330c95-7348-c5a2-9c13-f4cbc99bc649@sorbs.net> <4a3b65e1-f3a6-58cf-3de3-bdd11cf30b02@denninger.net> To: Karl Denninger X-Rspamd-Queue-Id: 2731F6AD28 X-Spamd-Bar: - Authentication-Results: mx1.freebsd.org; spf=pass (mx1.freebsd.org: domain of michelle@sorbs.net designates 72.12.213.40 as permitted sender) smtp.mailfrom=michelle@sorbs.net X-Spamd-Result: default: False [-1.94 / 15.00]; ARC_NA(0.00)[]; RCVD_VIA_SMTP_AUTH(0.00)[]; NEURAL_HAM_MEDIUM(-0.97)[-0.973,0]; FROM_HAS_DN(0.00)[]; TO_DN_SOME(0.00)[]; R_SPF_ALLOW(-0.20)[+a:hades.sorbs.net]; NEURAL_HAM_LONG(-1.00)[-1.000,0]; MIME_GOOD(-0.10)[text/plain]; DMARC_NA(0.00)[sorbs.net]; TO_MATCH_ENVRCPT_SOME(0.00)[]; MX_GOOD(-0.01)[cached: battlestar.sorbs.net]; RCPT_COUNT_TWO(0.00)[2]; RCVD_IN_DNSWL_NONE(0.00)[40.213.12.72.list.dnswl.org : 127.0.10.0]; SUBJ_ALL_CAPS(0.45)[6]; IP_SCORE(-0.34)[ip: (-0.84), ipnet: 72.12.192.0/19(-0.44), asn: 11114(-0.34), country: US(-0.06)]; NEURAL_HAM_SHORT(-0.37)[-0.375,0]; RCVD_NO_TLS_LAST(0.10)[]; FROM_EQ_ENVFROM(0.00)[]; R_DKIM_NA(0.00)[]; MIME_TRACE(0.00)[0:+]; 
ASN(0.00)[asn:11114, ipnet:72.12.192.0/19, country:US]; MID_RHS_MATCH_FROM(0.00)[]; CTE_CASE(0.50)[]; RCVD_COUNT_TWO(0.00)[2] X-BeenThere: freebsd-stable@freebsd.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Production branch of FreeBSD source code List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 08 May 2019 22:45:35 -0000 Michelle Sullivan http://www.mhix.org/ Sent from my iPad > On 09 May 2019, at 03:04, Karl Denninger wrote: > >> On 5/8/2019 11:53, Freddie Cash wrote: >>> On Wed, May 8, 2019 at 9:31 AM Karl Denninger wrote: >>> >>> I have a system here with about the same amount of net storage on it as >>> you did. It runs scrubs regularly; none of them take more than 8 hours >>> on *any* of the pools. The SSD-based pool is of course *much* faster >>> but even the many-way RaidZ2 on spinning rust is an ~8 hour deal; it >>> kicks off automatically at 2:00 AM when the time comes but is complete >>> before noon. I run them on 14 day intervals. >>> >> .... (description elided) > > That is a /lot /bigger pool than either Michelle or I are describing. Not quite... My pool is 16*3T SATA (real spindles not SSD and no cache) =48T raw.. It is storage, remember write once or twice, read lots elsewhere in the thread? > > We're both in the ~20Tb of storage space area 20T +6T zvol was what was left on it whilst shuffling stuff around.. I had already moved off 8T of data.. the zvol is still fine and accessible. > . You're running 5-10x > that in usable space in some of these pools and yet seeing ~2 day scrub > times on a couple of them (that is, the organization looks pretty > reasonable given the size and so is the scrub time), one that's ~5 days > and likely has some issues with parallelism and fragmentation, and then, > well, two awfuls which are both dedup-enabled. 
> --
> Karl Denninger
> karl@denninger.net
> /The Market Ticker/
> /[S/MIME encrypted email preferred]/

From owner-freebsd-stable@freebsd.org Wed May 8 22:55:33 2019
Subject: Re: ZFS...
From: Michelle Sullivan
Date: Thu, 09 May 2019 08:55:28 +1000
To: Walter Parker
Cc: freebsd-stable@freebsd.org
Michelle Sullivan
http://www.mhix.org/

Sent from my iPad

On 09 May 2019, at 01:55, Walter Parker wrote:
>>
>> ZDB (unless I'm misreading it) is able to find all 34m+ files and
>> verify the checksums. The problem is in the ZFS data structures (one
>> definitely, two maybe, metaslabs fail checksums, preventing the mounting
>> (even read-only) of the volumes.)
>>
>>> Especially, how do you know
>>> before you recovered the data from the drive?
>>
>> See above.
>>
>>> As ZFS metadata is stored
>>> redundantly on the drive and never in an inconsistent form (that is what
>>> fsck does, it fixes the inconsistent data that most other filesystems
>>> store when they crash/have disk issues).
>>
>> The problem - unless I'm reading zdb incorrectly - is limited to the
>> structure rather than the data. This fits with the fact that the drive
>> was isolated from user changes while it was being resilvered, so the
>> data itself was not being altered... that said, I am no expert, so I
>> could easily be completely wrong.
>>
> What it sounds like you need is a metadata fixer, not a file recovery
> tool.

This is true, but I am of the thought, in alignment with the ZFS devs, that this might not be a good idea... if ZFS can't work it out already, the best thing to do will probably be to get everything off it and reformat...

> Assuming the meta data can be fixed that would be the easy route.

That's the thing... I don't know if it can be easily fixed... more that I think the metadata can probably be easily fixed, but I suspect the spacemap can't, and if it can't there is going to be one of two things: either a big hole (or multiple little ones), or the likelihood of new data partially or fully overwriting old data, and that would not be good.

> That should not be hard to write if everything else on the disk has no
> issues. Don't you say in another message that the system is now returning
> 100's of drive errors?

No, one disk in the 16-disk RAIDZ2 ... previously unseen, but it could be that the errors have occurred in the last 6 weeks... every time I reboot it starts resilvering, gets to 761M resilvered and then stops.

> How does that relate to the statement => "Everything on
> the disk is fine except for a little bit of corruption in the freespace
> map"?

Well, I think it goes through until it hits that little bit of corruption, and that stops it mounting... then it stops again.

I'm seeing 100s of hard errors at the beginning of one of the drives... they were reported in syslog, but only just, so it could be a new thing. Could be previously undetected... no way to know.

>>> I have a friend/business partner that doesn't want to move to ZFS because
>>> his recovery method is to wait for a single drive (no redundancy,
>>> sometimes no backup) to fail and then use ddrescue to image the broken
>>> drive to a new drive (ignoring any file corruption, because you can't
>>> really tell without ZFS). He's been using disk rescue programs for so
>>> long that he will not move to ZFS, because it doesn't have a disk rescue
>>> program.
>>
>> The first part is rather cavalier... the second part I kinda
>> understand... it's why I'm now looking at alternatives... particularly
>> having been bitten as badly as I have with an unmountable volume.
>>
> On the system I managed for him, we had a system with ZFS crap out. I
> restored it from a backup. I continue to believe that people running
> systems without backups are living on borrowed time. The idea of relying on
> a disk recovery tool is too risky for my taste.
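[The read-only inspection described above maps onto zdb's inspection flags. A minimal sketch, with `storage` as a stand-in pool name (the real pool name is not given here), shown as a dry run that only prints the commands:]

```shell
#!/bin/sh
# Hedged sketch: zdb commands that inspect metaslabs/space maps and verify
# block checksums without writing to the pool. "storage" is a placeholder
# pool name; -e operates on an exported (unimported) pool. Printed as a
# dry run so nothing touches a real pool; drop the echo to actually run.
POOL=storage
echo "would run: zdb -e -mmm $POOL   # walk metaslabs and dump their space maps"
echo "would run: zdb -e -c $POOL     # traverse the pool, verifying checksums"
```

[zdb only reads the on-disk state, so invocations like these are generally safe to attempt even on a pool that refuses to mount.]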
>=20 >=20 >>> He has systems >>> on Linux with ext3 and no mirroring or backups. I've asked about moving >>> them to a mirrored ZFS system and he has told me that the customer >> doesn't >>> want to pay for a second drive (but will pay for hours of his time to fi= x >>> the problem when it happens). You kind of sound like him. >> Yeah..no! I'd be having that on a second (mirrored) drive... like most >> of my production servers. >>=20 >>> ZFS is risky >>> because there isn't a good drive rescue program. >> ZFS is good for some applications. ZFS is good to prevent cosmic ray >> issues. ZFS is not good when things go wrong. ZFS doesn't usually go >> wrong. Think that about sums it up. >>=20 >> When it does go wrong I restore from backups. Therefore my systems don't > have problems. I sorry you had the perfect trifecta that caused you to los= e > multiple drives and all your backups at the same time. >=20 >=20 >>> Sun's design was that the >>> system should be redundant by default and checksum everything. If the >>> drives fail, replace them. If they fail too much or too fast, restore >> from >>> backup. Once the system had too much corruption, you can't recover/check= >>> for all the damage without a second off disk copy. If you have that off >>> disk, then you have backup. They didn't build for the standard use case >> as >>> found in PCs because the disk recover programs rarely get everything >> back, >>> therefore they can't be relied on to get you data back when your data is= >>> important. Many PC owners have brought PC mindset ideas to the "UNIX" >>> world. Sun's history predates Windows and Mac and comes from a >>> Mini/Mainframe mindset (were people tried not to guess about data >>> integrity). >> I came from the days of Sun. >>=20 >> Good then you should understand Sun's point of view. >=20 >=20 >>>=20 >>> Would a disk rescue program for ZFS be a good idea? Sure. Should the lac= k >>> of a disk recovery program stop you from using ZFS? No. 
If you think so,= >> I >>> suggest that you have your data integrity priorities in the wrong order >>> (focusing on small, rare events rather than the common base case). >> Common case in your assessment in the email would suggest backups are >> not needed unless you have a rare event of a multi-drive failure. Which >> I know you're not advocating, but it is this same circular argument... >> ZFS is so good it's never wrong we don't need no stinking recovery >> tools, oh but take backups if it does fail, but it won't because it's so >> good and you have to be running consumer hardware or doing something >> wrong or be very unlucky with failures... etc.. round and round we go, >> where ever she'll stop no-one knows. >>=20 >> I advocate 2-3 backups of any important system (at least one different > that the other, offsite if one can afford it). > I never said ZFS is so good we don't need backups (that would be a stupid > comment). As far as a recovery tool, those sound risky. I'd prefer > something without so much risk. >=20 > Make your own judgement, it is your time and data. I think ZFS is a great > filesystem that anyone using FreeBSD or Illumios should be using. >=20 >=20 > --=20 > The greatest dangers to liberty lurk in insidious encroachment by men of > zeal, well-meaning but without understanding. -- Justice Louis D. 
Brande= is > _______________________________________________ > freebsd-stable@freebsd.org mailing list > https://lists.freebsd.org/mailman/listinfo/freebsd-stable > To unsubscribe, send any mail to "freebsd-stable-unsubscribe@freebsd.org" From owner-freebsd-stable@freebsd.org Wed May 8 23:17:43 2019 Return-Path: Delivered-To: freebsd-stable@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2610:1c1:1:606c::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 9525415963CB for ; Wed, 8 May 2019 23:17:43 +0000 (UTC) (envelope-from SRS0=C6t7=TI=quip.cz=000.fbsd@elsa.codelab.cz) Received: from elsa.codelab.cz (elsa.codelab.cz [94.124.105.4]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 180B46C39E for ; Wed, 8 May 2019 23:17:43 +0000 (UTC) (envelope-from SRS0=C6t7=TI=quip.cz=000.fbsd@elsa.codelab.cz) Received: from elsa.codelab.cz (localhost [127.0.0.1]) by elsa.codelab.cz (Postfix) with ESMTP id AED4B28423; Thu, 9 May 2019 01:17:32 +0200 (CEST) Received: from illbsd.quip.test (ip-62-24-92-232.net.upcbroadband.cz [62.24.92.232]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by elsa.codelab.cz (Postfix) with ESMTPSA id 6791328416; Thu, 9 May 2019 01:17:31 +0200 (CEST) Subject: Re: ZFS... 
From: Miroslav Lachman <000.fbsd@quip.cz>
Date: Thu, 9 May 2019 01:17:33 +0200
To: Pete French, freebsd-stable@freebsd.org
Pete French wrote on 2019/05/03 14:28:
>
> On 03/05/2019 13:11, Miroslav Lachman wrote:
>
>> I had this problem in the past too. I am not sure if it was on a Dell or
>> an HP machine - the controller presents only the first disk at boot time,
>> so I created a small (10-15GB) partition on each disk and used them all
>> in a 4-way mirror. I cannot say if it was gmirror with UFS or ZFS
>> mirroring. The rest of each disk was used for ZFS RAIDZ.
>
> Snap :-) That's exactly what I have done - but the bits you can't mirror
> are the GPT partitions for bootcode. I was fiddling with those on da0
> without realising it was now using da1 to boot. If it ever chooses da2
> or da3 then I will need to mirror it there too, so I have it partitioned
> like that, but am currently using those as swap as it shows no signs of
> wanting to boot from them for now. Time for some scripting :)

This is what I have on the machine with the weird controller:

# cat bin/zfs_bootcode_update.sh
#!/bin/sh

devs="ada0 ada1 ada2 ada3"

for dev in $devs
do
        echo -n "Updating ZFS bootcode on ${dev} ..."
        if ! /sbin/gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ${dev} > /dev/null; then
                echo " error"
                exit 1
        fi
        echo " done"
done

So it should be able to boot from any installed drive, no matter the order.
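[A cautious way to run a script like the one above is to preview the gpart invocations before anything is written. A sketch under the same assumptions (same device list and bootcode paths as Miroslav's script; the DRYRUN switch is an addition, not part of his version):]

```shell
#!/bin/sh
# Dry-run wrapper around the bootcode update shown above. With DRYRUN=yes it
# only prints the gpart commands; set DRYRUN=no to actually install bootcode.
DRYRUN=yes
devs="ada0 ada1 ada2 ada3"

for dev in $devs; do
    cmd="/sbin/gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ${dev}"
    if [ "$DRYRUN" = "yes" ]; then
        echo "would run: $cmd"
    elif ! $cmd > /dev/null; then
        echo "error on ${dev}" >&2
        exit 1
    fi
done
```

[Checking the preview against `gpart show` output first avoids writing bootcode to the wrong index on a disk partitioned differently from the rest.]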
Miroslav Lachman

From owner-freebsd-stable@freebsd.org Thu May 9 07:46:54 2019
Subject: Re: ZFS...
From: "Patrick M. Hausen" <hausen@punkt.de>
Date: Thu, 9 May 2019 09:46:39 +0200
To: Michelle Sullivan
Cc: FreeBSD-STABLE Mailing List
Hi all,

> Am 09.05.2019 um 00:55 schrieb Michelle Sullivan:
> No, one disk in the 16-disk RAIDZ2 ... previously unseen, but it could be
> that the errors have occurred in the last 6 weeks... every time I reboot
> it starts resilvering, gets to 761M resilvered and then stops.

16 disks in *one* RAIDZ2 vdev? That might be the cause of your insanely
long scrubs. In general it is not recommended, though I cannot find the
source for that information quickly just now.

Kind regards,
Patrick

--
punkt.de GmbH
Internet - Dienstleistungen - Beratung
Kaiserallee 13a        Tel.: 0721 9109-0 Fax: -100
76133 Karlsruhe        info@punkt.de    http://punkt.de
AG Mannheim 108285     Gf: Juergen Egeling

From owner-freebsd-stable@freebsd.org Thu May 9 08:32:22 2019
Subject: Re: ZFS...
From: Miroslav Lachman <000.fbsd@quip.cz>
Date: Thu, 9 May 2019 10:32:19 +0200
To: "Patrick M. Hausen", Michelle Sullivan
Cc: FreeBSD-STABLE Mailing List
Patrick M. Hausen wrote on 2019/05/09 09:46:
> Hi all,
>
>> Am 09.05.2019 um 00:55 schrieb Michelle Sullivan:
>> No, one disk in the 16-disk RAIDZ2 ... previously unseen, but it could be
>> that the errors have occurred in the last 6 weeks... every time I reboot
>> it starts resilvering, gets to 761M resilvered and then stops.
>
> 16 disks in *one* RAIDZ2 vdev? That might be the cause of your insanely
> long scrubs. In general it is not recommended though I cannot find the
> source for that information quickly just now.

Extremely slow scrub is an issue even on a 4-disk RAIDZ. I already posted
about it in the past. This scrub has been running since Sunday 3AM.
The "time to go" is a big lie: it was "19hXXm" 12 hours ago.

  pool: tank0
 state: ONLINE
  scan: scrub in progress since Sun May  5 03:01:48 2019
        10.8T scanned out of 12.7T at 30.4M/s, 18h39m to go
        0 repaired, 84.72% done
config:

        NAME                STATE     READ WRITE CKSUM
        tank0               ONLINE       0     0     0
          raidz1-0          ONLINE       0     0     0
            gpt/disk0tank0  ONLINE       0     0     0
            gpt/disk1tank0  ONLINE       0     0     0
            gpt/disk2tank0  ONLINE       0     0     0
            gpt/disk3tank0  ONLINE       0     0     0

Disks are OK, monitored by smartmontools. There is nothing odd, just the
long, long scrubs. This machine was started with 4x 1TB (now 4x 4TB) and
scrub was slow with the 1TB disks too. This machine (an HP ML110 G8) was
my first machine with ZFS. If I remember it well it was FreeBSD 7.0, now
running 11.2. Scrub was / is always about one week.
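[The reported time-to-go can be sanity-checked from the same status line: remaining data divided by the scan rate (assuming the binary-prefixed units zpool prints). A quick sketch with the figures from the output above:]

```shell
#!/bin/sh
# Recompute the scrub ETA from the zpool status line above:
# (12.7T total - 10.8T scanned) / 30.4M/s, converted to hours.
eta_hours=$(awk 'BEGIN { printf "%.1f", (12.7 - 10.8) * 1024 * 1024 / 30.4 / 3600 }')
echo "estimated hours remaining: $eta_hours"   # ~18.2h, in line with the reported 18h39m
```

[The instantaneous figure matches; the "big lie" comes from the rate itself collapsing as the scrub reaches more fragmented regions, which pushes the ETA out again.]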
(I tried some sysctl tuning without much gain.)

Miroslav Lachman

From owner-freebsd-stable@freebsd.org Thu May 9 09:41:25 2019
Subject: Re: ZFS...
From: Borja Marcos
Date: Thu, 9 May 2019 11:41:12 +0200
To: Michelle Sullivan
Cc: Walter Parker, freebsd-stable@freebsd.org
> On 9 May 2019, at 00:55, Michelle Sullivan wrote:
>
> This is true, but I am of the thought, in alignment with the ZFS devs,
> that this might not be a good idea... if ZFS can't work it out already,
> the best thing to do will probably be to get everything off it and
> reformat...

That's true. I would rescue what I could and create the pool again, but
only after testing the setup thoroughly.

It would be worth having a look at the excellent guide offered by the
FreeNAS people. It's full of excellent advice and a priceless list of
"don'ts", such as SATA port multipliers, etc.

>> That should not be hard to write if everything else on the disk has no
>> issues. Don't you say in another message that the system is now returning
>> 100's of drive errors?
>
> No, one disk in the 16-disk RAIDZ2 ... previously unseen, but it could be
> that the errors have occurred in the last 6 weeks... every time I reboot
> it starts resilvering, gets to 761M resilvered and then stops.

That's a really bad sign. It shouldn't happen.

>> How does that relate to the statement => "Everything on
>> the disk is fine except for a little bit of corruption in the freespace
>> map"?
>
> Well I think it goes through until it hits that little bit of corruption,
> and that stops it mounting... then it stops again.
>
> I'm seeing 100s of hard errors at the beginning of one of the drives...
> they were reported in syslog, but only just, so it could be a new thing.
> Could be previously undetected... no way to know.

As for disk monitoring, smartmontools can be pretty good, although only as
an indicator. I also monitor my systems using Orca (I wrote a crude
"devilator" many years ago) and I gather disk I/O statistics using GEOM,
of which the read/write/delete/flush times are very valuable. An ailing
disk can be returning valid data yet become very slow due to retries.

Borja.

From owner-freebsd-stable@freebsd.org Thu May 9 11:02:47 2019
Subject: Re: ZFS...
From: Dimitry Andric
Date: Thu, 9 May 2019 13:02:35 +0200
To: Miroslav Lachman <000.fbsd@quip.cz>
Cc: "Patrick M. Hausen", Michelle Sullivan, FreeBSD-STABLE Mailing List

On 9 May 2019, at 10:32, Miroslav Lachman <000.fbsd@quip.cz> wrote:
>
> Patrick M. Hausen wrote on 2019/05/09 09:46:
>> Hi all,
>>> Am 09.05.2019 um 00:55 schrieb Michelle Sullivan:
>>> No, one disk in the 16-disk RAIDZ2 ... previously unseen, but it could
>>> be that the errors have occurred in the last 6 weeks... every time I
>>> reboot it starts resilvering, gets to 761M resilvered and then stops.
>> 16 disks in *one* RAIDZ2 vdev? That might be the cause of your insanely
>> long scrubs. In general it is not recommended, though I cannot find the
>> source for that information quickly just now.
>
> Extremely slow scrub is an issue even on a 4-disk RAIDZ. I already posted about it in the past. This scrub has been running since Sunday 3AM.
> The "time to go" is a big lie. It was "19hXXm" 12 hours ago.
>
>   pool: tank0
>  state: ONLINE
>   scan: scrub in progress since Sun May  5 03:01:48 2019
>         10.8T scanned out of 12.7T at 30.4M/s, 18h39m to go
>         0 repaired, 84.72% done
> config:
>
>         NAME                STATE     READ WRITE CKSUM
>         tank0               ONLINE       0     0     0
>           raidz1-0          ONLINE       0     0     0
>             gpt/disk0tank0  ONLINE       0     0     0
>             gpt/disk1tank0  ONLINE       0     0     0
>             gpt/disk2tank0  ONLINE       0     0     0
>             gpt/disk3tank0  ONLINE       0     0     0
>
> Disks are OK, monitored by smartmontools. There is nothing odd, just the long, long scrubs. This machine was started with 4x 1TB (now 4x 4TB) and scrub was slow with the 1TB disks too. This machine (an HP ML110 G8) was my first machine with ZFS. If I remember it well it started on FreeBSD 7.0, now running 11.2. Scrub was / is always about one week. (I tried some sysctl tuning without much gain.)

Unfortunately https://svnweb.freebsd.org/changeset/base/339034, which
greatly speeds up scrubs and resilvers, was not in 11.2 (since it was
cut at r334458).

If you could update to a more recent snapshot, or try the upcoming 11.3
prereleases, you will hopefully see much shorter scrub times.
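As an aside, the "time to go" in that `zpool status` output is simply the remaining bytes divided by the current average scan rate, which is why it drifts so badly when the rate changes. A quick sketch in plain Python, using the figures from the output above (zpool rounds the sizes to three digits, so the result only roughly matches the reported 18h39m):

```python
# Recompute zpool's scrub ETA from the status line:
#   "10.8T scanned out of 12.7T at 30.4M/s, 18h39m to go"
# The ETA is just (total - scanned) / rate; it assumes the rate
# holds, which on a long scrub it rarely does.

MIB = 1024 * 1024   # bytes per MiB
TIB = 1024 ** 4     # bytes per TiB

scanned = 10.8 * TIB
total = 12.7 * TIB
rate = 30.4 * MIB   # bytes per second

remaining_s = (total - scanned) / rate
hours = remaining_s / 3600
print(f"{hours:.1f} hours to go")   # -> "18.2 hours to go"
```

The small gap between 18.2h computed and 18h39m reported comes from the rounded sizes in the output; the real lie is the assumption that 30.4M/s stays constant.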
-Dimitry

From owner-freebsd-stable@freebsd.org Thu May 9 11:11:54 2019
From: Michelle Sullivan <michelle@sorbs.net>
Subject: Re: ZFS...
Date: Thu, 09 May 2019 21:11:36 +1000
Cc: Miroslav Lachman <000.fbsd@quip.cz>, "Patrick M. Hausen", FreeBSD-STABLE Mailing List
To: Dimitry Andric <dim@FreeBSD.org>
Michelle Sullivan
http://www.mhix.org/

Sent from my iPad

> On 09 May 2019, at 21:02, Dimitry Andric wrote:
> [...]
From owner-freebsd-stable@freebsd.org Thu May 9 11:15:17 2019
Subject: Re: ZFS...
From: Michelle Sullivan <michelle@sorbs.net>
Date: Thu, 09 May 2019 21:15:11 +1000
Cc: Walter Parker, freebsd-stable@freebsd.org
To: Borja Marcos
Michelle Sullivan
http://www.mhix.org/

Sent from my iPad

> On 09 May 2019, at 19:41, Borja Marcos wrote:
>
>> On 9 May 2019, at 00:55, Michelle Sullivan wrote:
>>
>> This is true, but I am of the thought, in alignment with the ZFS devs, that this might not be a good idea... if ZFS can't work it out already, the best thing to do will probably be to get everything off it and reformat...
>
> That's true, I would rescue what I could and create the pool again, but after testing the setup thoroughly.

+1

> It would be worth having a look at the excellent guide offered by the FreeNAS people. It's full of excellent advice and a priceless list of "don'ts" such as SATA port multipliers, etc.

Yeah, already worked out over time that port multipliers can't be good.

>>> That should not be hard to write if everything else on the disk has no
>>> issues. Don't you say in another message that the system is now returning
>>> 100's of drive errors?
>>
>> No, one disk in the 16 disk zRAID2 ... previously unseen, but it could be the errors have occurred in the last 6 weeks... every time I reboot it starts resilvering, gets to 761M resilvered and then stops.
>
> That's a really bad sign. It shouldn't happen.

That's since the metadata corruption. That is probably part of the problem.

>>> How does that relate to the statement => Everything on
>>> the disk is fine except for a little bit of corruption in the freespace map?
>>
>> Well I think it goes through until it hits that little bit of corruption that stops it mounting... then stops again..
>>
>> I'm seeing 100s of hard errors at the beginning of one of the drives.. they were reported in syslog but only just, so it could be a new thing. Could be previously undetected.. no way to know.
>
> As for disk monitoring, smartmontools can be pretty good, although only as an indicator. I also monitor my systems using Orca (I wrote a crude "devilator" many years ago) and I gather disk I/O statistics using GEOM, of which the read/write/delete/flush times are very valuable. An ailing disk can be returning valid data but become very slow due to retries.

Yes, though often these will show up in syslog (something I monitor religiously)... though I concede that when it hits syslog it's probably already an urgent issue.

Michelle

From owner-freebsd-stable@freebsd.org Thu May 9 11:17:31 2019
Subject: Re: ZFS...
From: Michelle Sullivan <michelle@sorbs.net>
Date: Thu, 09 May 2019 21:17:26 +1000
Cc: FreeBSD-STABLE Mailing List
To: "Patrick M. Hausen"
Michelle Sullivan
http://www.mhix.org/

Sent from my iPad

> On 09 May 2019, at 17:46, Patrick M. Hausen wrote:
>
> Hi all,
>
>> Am 09.05.2019 um 00:55 schrieb Michelle Sullivan:
>> No, one disk in the 16 disk zRAID2 ... previously unseen, but it could be the errors have occurred in the last 6 weeks... every time I reboot it starts resilvering, gets to 761M resilvered and then stops.
>
> 16 disks in *one* RAIDZ2 vdev? That might be the cause of your insanely
> long scrubs. In general it is not recommended, though I cannot find the
> source for that information quickly just now.

I have seen posts on various lists stating don't go over 8.. I know people in Oracle, the word is it shouldn't matter... who do you believe?

Michelle

> Kind regards,
> Patrick
> --
> punkt.de GmbH - Internet - Dienstleistungen - Beratung
> Kaiserallee 13a, 76133 Karlsruhe, Tel.: 0721 9109-0

From owner-freebsd-stable@freebsd.org Thu May 9 11:27:40 2019
From: Bob Bishop <rb@gid.co.uk>
Subject: Re: ZFS...
Date: Thu, 9 May 2019 12:27:22 +0100
Cc: "Patrick M. Hausen", FreeBSD-STABLE Mailing List
To: Michelle Sullivan
> On 9 May 2019, at 12:17, Michelle Sullivan wrote:
>
>> On 09 May 2019, at 17:46, Patrick M. Hausen wrote:
>>
>>> Am 09.05.2019 um 00:55 schrieb Michelle Sullivan:
>>> No, one disk in the 16 disk zRAID2 ... previously unseen, but it could be the errors have occurred in the last 6 weeks... every time I reboot it starts resilvering, gets to 761M resilvered and then stops.
>>
>> 16 disks in *one* RAIDZ2 vdev? That might be the cause of your insanely
>> long scrubs. In general it is not recommended, though I cannot find the
>> source for that information quickly just now.
>
> I have seen posts on various lists stating don't go over 8.. I know people in Oracle, the word is it shouldn't matter... who do you believe?

Inter alia it depends on the quality/bandwidth of disk controllers.
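The width trade-off is easy to put in rough numbers: splitting the same 16 disks into two 8-disk RAIDZ2 vdevs costs two extra parity disks, but gives two independent vdevs serving I/O and halves the data each vdev must walk on scrub/resilver. A sketch in plain Python, assuming hypothetical equal-sized 4 TB disks and ignoring padding and allocation overhead:

```python
# Compare one 16-wide RAIDZ2 vdev against two 8-wide RAIDZ2 vdevs.
# Rough model: usable space = (width - parity) data disks per vdev;
# real pools lose somewhat more to padding and slop space.

def raidz_usable(disks, width, parity=2, disk_tb=4.0):
    vdevs = disks // width
    return vdevs * (width - parity) * disk_tb

one_wide = raidz_usable(16, 16)   # 1 vdev, 14 data disks
two_vdevs = raidz_usable(16, 8)   # 2 vdevs, 6 data disks each

print(f"1 x 16-disk raidz2: {one_wide:.0f} TB usable")   # 56 TB
print(f"2 x  8-disk raidz2: {two_vdevs:.0f} TB usable")  # 48 TB
```

The 8 TB of capacity given up buys roughly double the random IOPS (one vdev performs like one disk for random I/O) and much less data per vdev to traverse when a disk is replaced.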
--
Bob Bishop
rb@gid.co.uk

From owner-freebsd-stable@freebsd.org Thu May 9 11:36:45 2019
Subject: Re: ZFS...
From: Miroslav Lachman <000.fbsd@quip.cz>
Date: Thu, 9 May 2019 13:36:42 +0200
To: Dimitry Andric
Cc: FreeBSD-STABLE Mailing List
Dimitry Andric wrote on 2019/05/09 13:02:
> On 9 May 2019, at 10:32, Miroslav Lachman <000.fbsd@quip.cz> wrote:
[...]
> Unfortunately https://svnweb.freebsd.org/changeset/base/339034, which
> greatly speeds up scrubs and resilvers, was not in 11.2 (since it was
> cut at r334458).
>
> If you could update to a more recent snapshot, or try the upcoming 11.3
> prereleases, you will hopefully see much shorter scrub times.

Thank you. I will try 11-STABLE / 11.3-PRERELEASE soon and let you know about the difference.
Kind regards
Miroslav Lachman

From owner-freebsd-stable@freebsd.org Thu May 9 11:41:51 2019
From: Peter Blok <pblok@bsd4all.org>
Subject: Re: route based ipsec
Date: Thu, 9 May 2019 13:41:36 +0200
Cc: KOT MATPOCKuH, "Andrey V. Elsukov", stable@freebsd.org
To: Eugene Grosbein
I have tried certificates in the past, but racoon never worked stably enough. It didn't crash on me, though. I have moved over to strongSwan and never regretted the move. Very stable.

Peter

> On 8 May 2019, at 03:29, Eugene Grosbein wrote:
>
> 08.05.2019 3:23, KOT MATPOCKuH wrote:
>
>> I don't understand what in my configuration can result in core dumps of a running
>> daemon... I attached a sample racoon.conf. Can you check it for possible problems?
>> Also on one host I got a crash in another function:
>> (gdb) bt
>> #0  0x000000000024717f in privsep_init ()
>> #1  0x00000000002375f4 in inscontacted ()
>> #2  0x00000000002337d0 in isakmp_plist_set_all ()
>> #3  0x000000000023210d in isakmp_ph2expire ()
>> #4  0x000000000023162a in isakmp_ph1delete ()
>> #5  0x000000000023110b in isakmp_ph2resend ()
>> #6  0x00000008002aa000 in ?? ()
>> #7  0x0000000000000000 in ?? ()
>
> I guess the configuration using certificates is not tested enough.
> It works stably for me, but I use PSK only.
>
> You need to fix the code yourself or stop using racoon with certificates.
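For anyone weighing the same move, a minimal strongSwan site-to-site tunnel with a PSK (matching Eugene's PSK-only setup) looks roughly like the sketch below, in classic ipsec.conf format. All names, addresses, and the secret are placeholders, not taken from anyone's actual configuration:

```
# /usr/local/etc/ipsec.conf -- hypothetical site-to-site tunnel
conn site-b
        keyexchange=ikev2
        left=198.51.100.1          # local gateway (placeholder)
        leftsubnet=10.0.1.0/24
        right=203.0.113.1          # remote gateway (placeholder)
        rightsubnet=10.0.2.0/24
        authby=secret              # pre-shared key authentication
        auto=start

# /usr/local/etc/ipsec.secrets
198.51.100.1 203.0.113.1 : PSK "change-me"
```

Certificate authentication is a matter of swapping `authby=secret` for `leftcert=`/`rightid=` settings; unlike racoon, strongSwan's certificate path is widely exercised.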
>
> _______________________________________________
> freebsd-stable@freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-stable
> To unsubscribe, send any mail to "freebsd-stable-unsubscribe@freebsd.org"

--Apple-Mail=_D033D56B-B392-432D-A5E6-74AF80C900FE--

From owner-freebsd-stable@freebsd.org Thu May 9 12:56:25 2019 Return-Path: Delivered-To: freebsd-stable@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2610:1c1:1:606c::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 43E6D15A8CE1 for ; Thu, 9 May 2019 12:56:25 +0000 (UTC)
(envelope-from asomers@gmail.com) Received: from mail-lj1-f173.google.com (mail-lj1-f173.google.com [209.85.208.173]) (using TLSv1.3 with cipher TLS_AES_128_GCM_SHA256 (128/128 bits) server-signature RSA-PSS (4096 bits) client-signature RSA-PSS (2048 bits) client-digest SHA256) (Client CN "smtp.gmail.com", Issuer "GTS CA 1O1" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 046198DF21; Thu, 9 May 2019 12:56:23 +0000 (UTC) (envelope-from asomers@gmail.com) Received: by mail-lj1-f173.google.com with SMTP id k8so1906514lja.8; Thu, 09 May 2019 05:56:23 -0700 (PDT) X-Received: by 2002:a2e:1654:: with SMTP id 20mr2220939ljw.53.1557406222162; Thu, 09 May 2019 05:50:22 -0700 (PDT) MIME-Version: 1.0 References: <30506b3d-64fb-b327-94ae-d9da522f3a48@sorbs.net> <5ED8BADE-7B2C-4B73-93BC-70739911C5E3@sorbs.net> <2e4941bf-999a-7f16-f4fe-1a520f2187c0@sorbs.net> <20190430102024.E84286@mulder.mintsol.com> <41FA461B-40AE-4D34-B280-214B5C5868B5@punkt.de> <20190506080804.Y87441@mulder.mintsol.com> <08E46EBF-154F-4670-B411-482DCE6F395D@sorbs.net> <33D7EFC4-5C15-4FE0-970B-E6034EF80BEF@gromit.dlib.vt.edu>
<7D18A234-E7BF-4855-BD51-4AE2253DB1E4@sorbs.net> <805ee7f1-83f6-c59e-8107-4851ca9fce6e@quip.cz> <5de7f3d3-b34c-0382-b7d4-b7e38339649b@quip.cz> In-Reply-To: <5de7f3d3-b34c-0382-b7d4-b7e38339649b@quip.cz> From: Alan Somers Date: Thu, 9 May 2019 06:50:10 -0600 Message-ID: Subject: Re: ZFS... To: Miroslav Lachman <000.fbsd@quip.cz> Cc: Dimitry Andric , FreeBSD-STABLE Mailing List Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: quoted-printable X-Rspamd-Queue-Id: 046198DF21 X-Spamd-Bar: --- Authentication-Results: mx1.freebsd.org; spf=pass (mx1.freebsd.org: domain of asomers@gmail.com designates 209.85.208.173 as permitted sender) smtp.mailfrom=asomers@gmail.com X-Spamd-Result: default: False [-3.63 / 15.00]; ARC_NA(0.00)[]; NEURAL_HAM_MEDIUM(-1.00)[-0.997,0]; FROM_HAS_DN(0.00)[]; RCPT_COUNT_THREE(0.00)[3]; R_SPF_ALLOW(-0.20)[+ip4:209.85.128.0/17]; NEURAL_HAM_LONG(-1.00)[-1.000,0]; MIME_GOOD(-0.10)[text/plain]; RCVD_TLS_LAST(0.00)[]; DMARC_NA(0.00)[freebsd.org]; TO_MATCH_ENVRCPT_SOME(0.00)[]; TO_DN_ALL(0.00)[]; MX_GOOD(-0.01)[cached: alt3.gmail-smtp-in.l.google.com]; NEURAL_HAM_SHORT(-0.76)[-0.758,0]; RCVD_IN_DNSWL_NONE(0.00)[173.208.85.209.list.dnswl.org : 127.0.5.0]; SUBJ_ALL_CAPS(0.45)[6]; IP_SCORE(-1.31)[ip: (-0.53), ipnet: 209.85.128.0/17(-3.71), asn: 15169(-2.26), country: US(-0.06)]; FORGED_SENDER(0.30)[asomers@freebsd.org,asomers@gmail.com]; MIME_TRACE(0.00)[0:+]; R_DKIM_NA(0.00)[]; FREEMAIL_ENVFROM(0.00)[gmail.com]; ASN(0.00)[asn:15169, ipnet:209.85.128.0/17, country:US]; FROM_NEQ_ENVFROM(0.00)[asomers@freebsd.org,asomers@gmail.com]; RCVD_COUNT_TWO(0.00)[2] X-BeenThere: freebsd-stable@freebsd.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Production branch of FreeBSD source code List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 09 May 2019 12:56:25 -0000 On Thu, May 9, 2019 at 5:37 AM Miroslav Lachman <000.fbsd@quip.cz> wrote: > > Dimitry Andric wrote on 2019/05/09 13:02: > > On 9 
May 2019, at 10:32, Miroslav Lachman <000.fbsd@quip.cz> wrote:
>
> [...]
>
> >> Disks are OK, monitored by smartmontools. There is nothing odd, just the long, long scrubs. This machine was started with 4x 1TB (now 4x 4TB) and scrub was slow with the 1TB disks too. This machine (HP ML110 G8) was my first machine with ZFS. If I remember it well it was FreeBSD 7.0, now running 11.2. Scrub was / is always about one week. (I tried some sysctl tuning without much gain)
> >
> > Unfortunately https://svnweb.freebsd.org/changeset/base/339034, which
> > greatly speeds up scrubs and resilvers, was not in 11.2 (since it was
> > cut at r334458).
> >
> > If you could update to a more recent snapshot, or try the upcoming 11.3
> > prereleases, you will hopefully see much shorter scrub times.
>
> Thank you. I will try 11-STABLE / 11.3-PRERELEASE soon and let you know
> about the difference.
>
> Kind regards
> Miroslav Lachman

On 11.3 and even much older releases, you can greatly speed up scrub
and resilver by tweaking some sysctls. If you have spinning rust,
raise vfs.zfs.top_maxinflight so they'll do fewer seeks. I used to
set it to 8192 on machines with 32GB of RAM. Raising
vfs.zfs.resilver_min_time_ms to 5000 helps a little, too.
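[Editor's note: as a persistent-configuration sketch, the two sysctls named above can go in /etc/sysctl.conf on FreeBSD 11.x. The 8192 and 5000 values are simply the ones quoted in the post for a 32GB machine with spinning disks, not universal recommendations.]

```
# /etc/sysctl.conf fragment (FreeBSD 11.x) -- values from the post above;
# tune to your own RAM and disks, and expect some impact on foreground reads.
vfs.zfs.top_maxinflight=8192        # more in-flight scrub I/Os per top-level vdev
vfs.zfs.resilver_min_time_ms=5000   # let the resilver work longer per txg
```

The same values can be applied to a running system with sysctl(8), e.g. `sysctl vfs.zfs.top_maxinflight=8192`.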
-Alan From owner-freebsd-stable@freebsd.org Thu May 9 13:38:14 2019 Return-Path: Delivered-To: freebsd-stable@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2610:1c1:1:606c::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 5887415A9C2F for ; Thu, 9 May 2019 13:38:14 +0000 (UTC) (envelope-from petefrench@ingresso.co.uk) Received: from constantine.ingresso.co.uk (constantine.ingresso.co.uk [31.24.6.74]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 40E3F8F97B for ; Thu, 9 May 2019 13:38:13 +0000 (UTC) (envelope-from petefrench@ingresso.co.uk) Received: from [2001:470:6cc4:1:225:ff:fe46:71cf] (helo=foula.local) by constantine.ingresso.co.uk with esmtpsa (TLSv1.3:TLS_AES_128_GCM_SHA256:128) (Exim 4.92 (FreeBSD)) (envelope-from ) id 1hOjFN-000Dsp-L1 for freebsd-stable@freebsd.org; Thu, 09 May 2019 13:38:05 +0000 Subject: Re: ZFS... 
To: freebsd-stable@freebsd.org References: <30506b3d-64fb-b327-94ae-d9da522f3a48@sorbs.net> <56833732-2945-4BD3-95A6-7AF55AB87674@sorbs.net> <3d0f6436-f3d7-6fee-ed81-a24d44223f2f@netfence.it> <17B373DA-4AFC-4D25-B776-0D0DED98B320@sorbs.net> <70fac2fe3f23f85dd442d93ffea368e1@ultra-secure.de> <70C87D93-D1F9-458E-9723-19F9777E6F12@sorbs.net> <58DA896C-5312-47BC-8887-7680941A9AF2@sarenet.es> <8a53df38-a094-b14f-9b7d-8def8ce42491@quip.cz> <30d1cf2a-b80c-822b-11f9-2139532f7858@ingresso.co.uk> <7b7b9165-8f83-5195-2e4a-4c0e7c85307d@quip.cz> From: Pete French Message-ID: <27cb15ac-eee8-1cbc-a15f-b2f0746b07c5@ingresso.co.uk> Date: Thu, 9 May 2019 14:38:17 +0100 User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.11; rv:67.0) Gecko/20100101 Thunderbird/67.0 MIME-Version: 1.0 In-Reply-To: <7b7b9165-8f83-5195-2e4a-4c0e7c85307d@quip.cz> Content-Type: text/plain; charset=utf-8; format=flowed Content-Language: en-US Content-Transfer-Encoding: 8bit X-Rspamd-Queue-Id: 40E3F8F97B X-Spamd-Bar: ------ Authentication-Results: mx1.freebsd.org; dmarc=pass (policy=none) header.from=ingresso.co.uk; spf=pass (mx1.freebsd.org: domain of petefrench@ingresso.co.uk designates 31.24.6.74 as permitted sender) smtp.mailfrom=petefrench@ingresso.co.uk X-Spamd-Result: default: False [-6.65 / 15.00]; RCVD_VIA_SMTP_AUTH(0.00)[]; R_SPF_ALLOW(-0.20)[+ip4:31.24.6.74]; TO_DN_NONE(0.00)[]; MX_GOOD(-0.01)[us-smtp-inbound-1.mimecast.com, us-smtp-inbound-2.mimecast.com]; DMARC_POLICY_ALLOW(-0.50)[ingresso.co.uk,none]; SUBJ_ALL_CAPS(0.45)[6]; NEURAL_HAM_SHORT(-0.95)[-0.949,0]; FROM_EQ_ENVFROM(0.00)[]; R_DKIM_NA(0.00)[]; MIME_TRACE(0.00)[0:+]; ASN(0.00)[asn:16082, ipnet:31.24.0.0/21, country:GB]; MID_RHS_MATCH_FROM(0.00)[]; ARC_NA(0.00)[]; NEURAL_HAM_MEDIUM(-1.00)[-1.000,0]; FROM_HAS_DN(0.00)[]; TO_MATCH_ENVRCPT_ALL(0.00)[]; NEURAL_HAM_LONG(-1.00)[-1.000,0]; MIME_GOOD(-0.10)[text/plain]; IP_SCORE(-3.34)[ip: (-9.66), ipnet: 31.24.0.0/21(-4.83), asn: 16082(-2.11), country: GB(-0.09)]; RCPT_COUNT_ONE(0.00)[1]; RCVD_COUNT_TWO(0.00)[2]; RCVD_TLS_ALL(0.00)[] X-BeenThere: freebsd-stable@freebsd.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Production branch of FreeBSD source code List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 09 May 2019 13:38:14 -0000

On 09/05/2019 00:17, Miroslav Lachman wrote:
> Time for some scripting :) This is what I have on the machine with the weird
> controller
>
> # cat bin/zfs_bootcode_update.sh
> #!/bin/sh
>
> devs="ada0 ada1 ada2 ada3"
>
> for dev in $devs
> do
>         echo -n "Updating ZFS bootcode on ${dev} ..."
>         if ! /sbin/gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ${dev} > /dev/null; then
>                 echo " error"
>                 exit 1
>         fi
>         echo " done"
> done

heh....

[pete@skerry ~]$ cat /root/update_boot_blocks
#!/bin/sh
for DRIVE in ada0 ada1
do
/sbin/gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ${DRIVE}
done

You bother to check for errors, I don't :-)

From owner-freebsd-stable@freebsd.org Thu May 9 13:43:15 2019 Return-Path: Delivered-To: freebsd-stable@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2610:1c1:1:606c::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 0BAFF15A9EF9 for ; Thu, 9 May 2019 13:43:15 +0000 (UTC) (envelope-from karl@denninger.net) Received: from colo1.denninger.net (colo1.denninger.net [104.236.120.189]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 16B858FDC8 for ; Thu, 9 May 2019 13:43:13 +0000 (UTC) (envelope-from karl@denninger.net) Received: from denninger.net (ip68-1-57-197.pn.at.cox.net [68.1.57.197]) by colo1.denninger.net (Postfix) with ESMTP id 7368621109D for ; Thu, 9 May 2019 09:43:12 -0400 (EDT) Received: from [192.168.10.24] (D14.Denninger.Net [192.168.10.24]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by denninger.net (Postfix) with ESMTPSA id E28FBF37D8 for ; Thu, 9 May 2019 08:43:11 -0500 (CDT) Subject: Re: ZFS...
References: <08E46EBF-154F-4670-B411-482DCE6F395D@sorbs.net> <33D7EFC4-5C15-4FE0-970B-E6034EF80BEF@gromit.dlib.vt.edu> <26B407D8-3EED-47CA-81F6-A706CF424567@gromit.dlib.vt.edu> <42ba468a-2f87-453c-0c54-32edc98e83b8@sorbs.net> <4A485B46-1C3F-4EE0-8193-ADEB88F322E8@gromit.dlib.vt.edu> <14ed4197-7af7-f049-2834-1ae6aa3b2ae3@sorbs.net> <453BCBAC-A992-4E7D-B2F8-959B5C33510E@gromit.dlib.vt.edu> <92330c95-7348-c5a2-9c13-f4cbc99bc649@sorbs.net> <20190509002809.GA81574@neutralgood.org> From: Karl Denninger Openpgp: preference=signencrypt To: freebsd-stable@freebsd.org Message-ID: <8fc532d3-19cb-2612-d83d-602eabe18bff@denninger.net> Date: Thu, 9 May 2019 08:43:11 -0500 User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:60.0) Gecko/20100101 Thunderbird/60.6.1 MIME-Version: 1.0 In-Reply-To: <20190509002809.GA81574@neutralgood.org> Content-Type: multipart/signed; protocol="application/pkcs7-signature"; micalg=sha-512; boundary="------------ms080003080001050301060006" X-Rspamd-Queue-Id: 16B858FDC8 X-Spamd-Bar: ------ Authentication-Results: mx1.freebsd.org X-Spamd-Result: default: False [-6.17 / 15.00]; RCVD_VIA_SMTP_AUTH(0.00)[]; HAS_ATTACHMENT(0.00)[]; TO_DN_NONE(0.00)[]; RCVD_COUNT_THREE(0.00)[3]; MX_GOOD(-0.01)[cached: px.denninger.net]; MIME_BASE64_TEXT(0.10)[]; SUBJ_ALL_CAPS(0.45)[6]; NEURAL_HAM_SHORT(-0.96)[-0.964,0]; FROM_EQ_ENVFROM(0.00)[]; RCVD_TLS_LAST(0.00)[]; R_DKIM_NA(0.00)[]; ASN(0.00)[asn:14061, ipnet:104.236.64.0/18, country:US]; MIME_TRACE(0.00)[0:+,1:+,2:+]; MID_RHS_MATCH_FROM(0.00)[]; RECEIVED_SPAMHAUS_PBL(0.00)[197.57.1.68.zen.spamhaus.org : 127.0.0.11]; ARC_NA(0.00)[]; NEURAL_HAM_MEDIUM(-1.00)[-1.000,0]; FROM_HAS_DN(0.00)[]; SIGNED_SMIME(-2.00)[]; TO_MATCH_ENVRCPT_ALL(0.00)[]; NEURAL_HAM_LONG(-1.00)[-1.000,0]; MIME_GOOD(-0.20)[multipart/signed,multipart/alternative,text/plain]; PREVIOUSLY_DELIVERED(0.00)[freebsd-stable@freebsd.org]; AUTH_NA(1.00)[]; RCPT_COUNT_ONE(0.00)[1]; IP_SCORE(-2.55)[ip: (-9.88), ipnet: 104.236.64.0/18(-4.21), asn: 14061(1.41), country: US(-0.06)]; DMARC_NA(0.00)[denninger.net]; R_SPF_NA(0.00)[] X-Content-Filtered-By: Mailman/MimeDel 2.1.29 X-BeenThere: freebsd-stable@freebsd.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Production branch of FreeBSD source code List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 09 May 2019 13:43:15 -0000 This is a
cryptographically signed message in MIME format. --------------ms080003080001050301060006 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: base64 T24gNS84LzIwMTkgMTk6MjgsIEtldmluIFAuIE5lYWwgd3JvdGU6DQo+IE9uIFdlZCwgTWF5 IDA4LCAyMDE5IGF0IDExOjI4OjU3QU0gLTA1MDAsIEthcmwgRGVubmluZ2VyIHdyb3RlOg0K Pj4gSWYgeW91IGhhdmUgcG9vbChzKSB0aGF0IGFyZSB0YWtpbmcgKnR3byB3ZWVrcyogdG8g cnVuIGEgc2NydWIgSU1ITw0KPj4gZWl0aGVyIHNvbWV0aGluZyBpcyBiYWRseSB3cm9uZyBv ciB5b3UgbmVlZCB0byByZXRoaW5rIG9yZ2FuaXphdGlvbiBvZg0KPj4gdGhlIHBvb2wgc3Ry dWN0dXJlIC0tIHRoYXQgaXMsIElNSE8geW91IGxpa2VseSBlaXRoZXIgaGF2ZSBhIHNldmVy ZQ0KPj4gcGVyZm9ybWFuY2UgcHJvYmxlbSB3aXRoIG9uZSBvciBtb3JlIG1lbWJlcnMgb3Ig YW4gYXJjaGl0ZWN0dXJhbCBwcm9ibGVtDQo+PiB5b3UgKnJlYWxseSogbmVlZCB0byBkZXRl cm1pbmUgYW5kIGZpeC7CoCBJZiBhIHNjcnViIHRha2VzIHR3byB3ZWVrcw0KPj4gKnRoZW4g YSByZXNpbHZlciBjb3VsZCBjb25jZWl2YWJseSB0YWtlIHRoYXQgbG9uZyBhcyB3ZWxsKiBh bmQgdGhhdCdzDQo+PiAqZXh0cmVtZWx5KiBiYWQgYXMgdGhlIHdpbmRvdyBmb3IgZ2V0dGlu ZyBzY3Jld2VkIGlzIGF0IGl0cyB3b3JzdCB3aGVuIGENCj4+IHJlc2lsdmVyIGlzIGJlaW5n IHJ1bi4NCj4gV291bGRuJ3QgaGF2aW5nIG11bHRpcGxlIHZkZXZzIG1pdGlnYXRlIHRoZSBp c3N1ZSBmb3IgcmVzaWx2ZXJzIChidXQgbm90DQo+IHNjcnVicyk/IE15IHVuZGVyc3RhbmRp bmcsIHBsZWFzZSBjb3JyZWN0IG1lIGlmIEknbSB3cm9uZywgaXMgdGhhdCBhDQo+IHJlc2ls dmVyIG9ubHkgcmVhZHMgdGhlIHN1cnZpdmluZyBkcml2ZXMgaW4gdGhhdCBzcGVjaWZpYyB2 ZGV2Lg0KDQpZZXMuDQoNCkluIGFkZGl0aW9uIHdoaWxlICJtb3N0LW1vZGVybiIgcmV2aXNp b25zIGhhdmUgbWF0ZXJpYWwgaW1wcm92ZW1lbnRzDQoodmVyeSBtdWNoIHNvKSBpbiBzY3J1 YiB0aW1lcyAib3V0IG9mIHRoZSBib3giIGEgYml0IG9mIHR1bmluZyBtYWtlcyBmb3INCnZl cnkgbWF0ZXJpYWwgZGlmZmVyZW5jZXMgaW4gb2xkZXIgcmV2aXNpb25zLsKgIFNwZWNpZmlj YWxseSBtYXhpbmZsaWdodA0KY2FuIGJlIGEgYmlnIGRlYWwgZ2l2ZW4gYSByZWFzb25hYmxl IGFtb3VudCBvZiBSQU0gKGUuZy4gMTYgb3IgMzJHYikgYXMNCmFyZSBhc3luY193cml0ZV9t aW5fYWN0aXZlIChyYWlzZSBpdCB0byAiMiI7IHlvdSBtYXkgZ2V0IGEgYml0IG1vcmUgd2l0 aA0KIjMiLCBidXQgbm90IGEgbG90KQ0KDQpJIGhhdmUgYSBzY3J1YiBydW5uaW5nIHJpZ2h0 IG5vdyBhbmQgdGhpcyBpcyB3aGF0IGl0IGxvb2tzIGxpa2U6DQoNCkRpc2tzwqDCoCBkYTLC 
oMKgIGRhM8KgwqAgZGE0wqDCoCBkYTXCoMKgIGRhOMKgwqAgZGE5wqAgZGExMMKgwqANCktC L3TCoCAxMC40MCAxMS4wM8KgwqAgMTAzwqDCoCAxMDjCoMKgIDEyMiA5OC4xMSA5OC40OA0K dHBzwqDCoMKgwqDCoCA0NsKgwqDCoCA0NcKgIDEyNTTCoCAxMjA1wqAgMTA2MsKgIDEzMjTC oCAxMzE5DQpNQi9zwqDCoCAwLjQ2wqAgMC40OMKgwqAgMTI3wqDCoCAxMjfCoMKgIDEyN8Kg wqAgMTI3wqDCoCAxMjcNCiVidXN5wqDCoMKgwqAgMMKgwqDCoMKgIDDCoMKgwqAgNDjCoMKg wqAgNjLCoMKgwqAgOTfCoMKgwqAgMjjCoMKgwqAgMzENCg0KSGVyZSdzIHRoZSBjdXJyZW50 IHN0YXQgb24gdGhhdCBwb29sOg0KDQrCoCBwb29sOiB6cw0KwqBzdGF0ZTogT05MSU5FDQrC oCBzY2FuOiBzY3J1YiBpbiBwcm9ncmVzcyBzaW5jZSBUaHUgTWF5wqAgOSAwMzoxMDowMCAy MDE5DQrCoMKgwqDCoMKgwqDCoCAxMS45VCBzY2FubmVkIGF0IDY0M00vcywgMTEuMFQgaXNz dWVkIGF0IDU5M00vcywgMTIuOFQgdG90YWwNCsKgwqDCoMKgwqDCoMKgIDAgcmVwYWlyZWQs IDg1LjU4JSBkb25lLCAwIGRheXMgMDA6NTQ6MjkgdG8gZ28NCmNvbmZpZzoNCg0KwqDCoMKg wqDCoMKgwqAgTkFNRcKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAgU1RBVEXCoMKgwqDC oCBSRUFEIFdSSVRFIENLU1VNDQrCoMKgwqDCoMKgwqDCoCB6c8KgwqDCoMKgwqDCoMKgwqDC oMKgwqDCoMKgwqDCoMKgIE9OTElORcKgwqDCoMKgwqDCoCAwwqDCoMKgwqAgMMKgwqDCoMKg IDANCsKgwqDCoMKgwqDCoMKgwqDCoCByYWlkejItMMKgwqDCoMKgwqDCoMKgwqAgT05MSU5F wqDCoMKgwqDCoMKgIDDCoMKgwqDCoCAwwqDCoMKgwqAgMA0KwqDCoMKgwqDCoMKgwqDCoMKg wqDCoCBncHQvcnVzdDEuZWxpwqAgT05MSU5FwqDCoMKgwqDCoMKgIDDCoMKgwqDCoCAwwqDC oMKgwqAgMA0KwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCBncHQvcnVzdDIuZWxpwqAgT05MSU5F wqDCoMKgwqDCoMKgIDDCoMKgwqDCoCAwwqDCoMKgwqAgMA0KwqDCoMKgwqDCoMKgwqDCoMKg wqDCoCBncHQvcnVzdDMuZWxpwqAgT05MSU5FwqDCoMKgwqDCoMKgIDDCoMKgwqDCoCAwwqDC oMKgwqAgMA0KwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCBncHQvcnVzdDQuZWxpwqAgT05MSU5F wqDCoMKgwqDCoMKgIDDCoMKgwqDCoCAwwqDCoMKgwqAgMA0KwqDCoMKgwqDCoMKgwqDCoMKg wqDCoCBncHQvcnVzdDUuZWxpwqAgT05MSU5FwqDCoMKgwqDCoMKgIDDCoMKgwqDCoCAwwqDC oMKgwqAgMA0KDQplcnJvcnM6IE5vIGtub3duIGRhdGEgZXJyb3JzDQoNCkluZGVlZCBpdCB3 aWxsIGJlIGRvbmUgaW4gYWJvdXQgYW4gaG91cjsgdGhpcyBpcyBhbiAiYXV0b21hdGljIiBr aWNrZWQNCm9mZiBvdXQgb2YgcGVyaW9kaWMuwqAgSXQncyBjb21wcmlzZWQgb2YgNFRiIGRp c2tzIGFuZCBpcyBhYm91dCA3MCUNCm9jY3VwaWVkLsKgIFdoZW4gSSBnZXQgc29tZXdoZXJl 
IGFyb3VuZCBhbm90aGVyIDUtMTAlIEknbGwgc3dhcCBpbiA2VGINCmRyaXZlcyBmb3IgdGhl IDRUYiBvbmVzIGFuZCBzd2FwIGluIDhUYiAicHJpbWFyeSIgYmFja3VwIGRpc2tzIGZvciB0 aGUNCmV4aXN0aW5nIDZUYiBvbmVzLg0KDQpUaGlzIHBhcnRpY3VsYXIgbWFjaGluZSBoYXMg YSBzcGlubmluZyBydXN0IHBvb2wgKHdoaWNoIGlzIHRoaXMgb25lKSBhbmQNCmFub3RoZXIg dGhhdCdzIGNvbXByaXNlZCBvZiAyNDBHYiBJbnRlbCA3MzAgU1NEcyAoZmFpcmx5IG9sZCBh cyBTU0RzIGdvDQpidXQgbXVjaCBmYXN0ZXIgdGhhbiBzcGlubmluZyBydXN0IGFuZCB0aGV5 IGhhdmUgcG93ZXIgcHJvdGVjdGlvbiB3aGljaA0KSU1ITyBpcyB1dHRlcmx5IG1hbmRhdG9y eSBmb3IgU1NEcyBpbiBhbnkgZW52aXJvbm1lbnQgd2hlcmUgeW91IGFjdHVhbGx5DQpjYXJl IGFib3V0IHRoZSBkYXRhIGJlaW5nIHRoZXJlIGFmdGVyIGEgZm9yY2VkLCB1bmV4cGVjdGVk IHBsdWctcHVsbC4pwqANClRoaXMgbWFjaGluZSBpcyBVUFMtYmFja2VkIHdpdGggYXBjdXBz ZCBtb25pdG9yaW5nIGl0IHNvICppbiB0aGVvcnkqIGl0DQpzaG91bGQgbmV2ZXIgaGF2ZSBh biB1bnNvbGljaXRlZCBwb3dlciBmYWlsdXJlIHdpdGhvdXQgbm90aWNlIGJ1dCAiY3JhcA0K aGFwcGVucyI7IGEgZmV3IHllYXJzIGFnbyB0aGVyZSB3YXMgYW4gdW5kZXRlY3RlZCBmYXVs dCBpbiBvbmUgb2YgdGhlDQpiYXR0ZXJpZXMgKHRoZSBVUFMgZGlkbid0IGtub3cgYWJvdXQg aXQgZGVzcGl0ZSBpdCBiZWluZyBwcm9ncmFtbWVkIHRvDQpkbyBhdXRvbWF0ZWQgc2VsZi10 ZXN0cyBhbmQgaGFkbid0IHJlcG9ydGVkIHRoZSBmYXVsdCksIHBvd2VyIGdsaXRjaGVkDQph bmQgYmxhbW1vIC0tIGRvd24gaXQgd2VudCwgbm8gd2FybmluZy4NCg0KTXkgY3VycmVudCAi Y29uc2lkZXIgdGhvc2UiIFNTRHMgZm9yIHNpbWlsYXIgcmVwbGFjZW1lbnQgb3Igc2l6ZQ0K dXBncmFkZXMgd291bGQgbGlrZWx5IGJlIHRoZSBNaWNyb24gdW5pdHMgLS0gbm90IHRoZSBm YXN0ZXN0IG91dCB0aGVyZQ0KYnV0IHBsZW50eSBmYXN0LCByZWFzb25hYmx5IHByaWNlZCwg YXZhaWxhYmxlIGluIHNldmVyYWwgZGlmZmVyZW50DQp2ZXJzaW9ucyBkZXBlbmRpbmcgb24g d3JpdGUgZW5kdXJhbmNlIGFuZCBwb3dlci1wcm90ZWN0ZWQuDQoNCi0tIA0KS2FybCBEZW5u aW5nZXINCmthcmxAZGVubmluZ2VyLm5ldCA8bWFpbHRvOmthcmxAZGVubmluZ2VyLm5ldD4N Ci9UaGUgTWFya2V0IFRpY2tlci8NCi9bUy9NSU1FIGVuY3J5cHRlZCBlbWFpbCBwcmVmZXJy ZWRdLw0K --------------ms080003080001050301060006 Content-Type: application/pkcs7-signature; name="smime.p7s" Content-Transfer-Encoding: base64 Content-Disposition: attachment; filename="smime.p7s" Content-Description: S/MIME Cryptographic 
Signature

--------------ms080003080001050301060006--

From owner-freebsd-stable@freebsd.org Fri May 10 00:57:22 2019 Return-Path: Delivered-To: freebsd-stable@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2610:1c1:1:606c::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id E05711592ADC for ; Fri, 10 May 2019 00:57:21 +0000 (UTC) (envelope-from michelle@sorbs.net) Received: from hades.sorbs.net (hades.sorbs.net [72.12.213.40]) by mx1.freebsd.org (Postfix) with ESMTP id 5BD068959B; Fri, 10 May 2019 00:57:19 +0000 (UTC) (envelope-from michelle@sorbs.net) MIME-version: 1.0 Content-type: text/plain; charset=utf-8 Received: from [10.10.0.230] (gate.mhix.org [203.206.128.220]) by hades.sorbs.net (Oracle Communications Messaging Server 7.0.5.29.0 64bit (built Jul 9 2013)) with ESMTPSA id <0PR900J7IKMR1100@hades.sorbs.net>; Thu, 09 May 2019 18:11:18 -0700 (PDT) Sun-Java-System-SMTP-Warning: Lines longer than SMTP allows found and truncated. Subject: Re: ZFS...
From: Michelle Sullivan X-Mailer: iPad Mail (16A404) In-reply-to: Date: Fri, 10 May 2019 10:57:13 +1000 Cc: Miroslav Lachman <000.fbsd@quip.cz>, FreeBSD-STABLE Mailing List , Dimitry Andric Content-transfer-encoding: quoted-printable Message-id: <40021BE2-965B-4B37-969F-5391D5B931F7@sorbs.net> References: <30506b3d-64fb-b327-94ae-d9da522f3a48@sorbs.net> <5ED8BADE-7B2C-4B73-93BC-70739911C5E3@sorbs.net> <2e4941bf-999a-7f16-f4fe-1a520f2187c0@sorbs.net> <20190430102024.E84286@mulder.mintsol.com> <41FA461B-40AE-4D34-B280-214B5C5868B5@punkt.de> <20190506080804.Y87441@mulder.mintsol.com> <08E46EBF-154F-4670-B411-482DCE6F395D@sorbs.net> <33D7EFC4-5C15-4FE0-970B-E6034EF80BEF@gromit.dlib.vt.edu> <7D18A234-E7BF-4855-BD51-4AE2253DB1E4@sorbs.net> <805ee7f1-83f6-c59e-8107-4851ca9fce6e@quip.cz> <5de7f3d3-b34c-0382-b7d4-b7e38339649b@quip> To: Alan Somers X-Rspamd-Queue-Id: 5BD068959B X-Spamd-Bar: -- Authentication-Results: mx1.freebsd.org; spf=pass (mx1.freebsd.org: domain of michelle@sorbs.net designates 72.12.213.40 as permitted sender) smtp.mailfrom=michelle@sorbs.net X-Spamd-Result: default: False [-2.03 / 15.00]; ARC_NA(0.00)[]; RCVD_VIA_SMTP_AUTH(0.00)[]; NEURAL_HAM_MEDIUM(-0.68)[-0.677,0]; FROM_HAS_DN(0.00)[]; RCPT_COUNT_THREE(0.00)[4]; R_SPF_ALLOW(-0.20)[+a:hades.sorbs.net]; NEURAL_HAM_LONG(-1.00)[-0.996,0]; MIME_GOOD(-0.10)[text/plain]; DMARC_NA(0.00)[sorbs.net]; TO_MATCH_ENVRCPT_SOME(0.00)[]; TO_DN_ALL(0.00)[]; MX_GOOD(-0.01)[cached: battlestar.sorbs.net]; NEURAL_HAM_SHORT(-0.29)[-0.288,0]; RCVD_IN_DNSWL_NONE(0.00)[40.213.12.72.list.dnswl.org : 127.0.10.0]; SUBJ_ALL_CAPS(0.45)[6]; IP_SCORE(-0.31)[ip: (-0.76), ipnet: 72.12.192.0/19(-0.40), asn: 11114(-0.31), country: US(-0.06)]; RCVD_NO_TLS_LAST(0.10)[]; FROM_EQ_ENVFROM(0.00)[]; R_DKIM_NA(0.00)[]; MIME_TRACE(0.00)[0:+]; ASN(0.00)[asn:11114, ipnet:72.12.192.0/19, country:US]; MID_RHS_MATCH_FROM(0.00)[]; RCVD_COUNT_TWO(0.00)[2] X-BeenThere: freebsd-stable@freebsd.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: 
Michelle Sullivan
http://www.mhix.org/
Sent from my iPad

> On 09 May 2019, at 22:50, Alan Somers wrote:
>
>> On Thu, May 9, 2019 at 5:37 AM Miroslav Lachman <000.fbsd@quip.cz> wrote:
>>
>> Dimitry Andric wrote on 2019/05/09 13:02:
>>> On 9 May 2019, at 10:32, Miroslav Lachman <000.fbsd@quip.cz> wrote:
>>
>> [...]
>>
>>>> Disks are OK, monitored by smartmontools. There is nothing odd, just the
>>>> long, long scrubs. This machine was started with 4x 1TB (now 4x 4TB) and
>>>> scrub was slow with 1TB disks too. This machine (HP ML110 G8) was my
>>>> first machine with ZFS. If I remember it well it was FreeBSD 7.0, now
>>>> running 11.2. Scrub was / is always about one week. (I tried some sysctl
>>>> tuning without much gain)
>>>
>>> Unfortunately https://svnweb.freebsd.org/changeset/base/339034, which
>>> greatly speeds up scrubs and resilvers, was not in 11.2 (since it was
>>> cut at r334458).
>>>
>>> If you could update to a more recent snapshot, or try the upcoming 11.3
>>> prereleases, you will hopefully see much shorter scrub times.
>>
>> Thank you. I will try 11-STABLE / 11.3-PRERELEASE soon and let you know
>> about the difference.
>>
>> Kind regards
>> Miroslav Lachman
>
> On 11.3 and even much older releases, you can greatly speed up scrub
> and resilver by tweaking some sysctls. If you have spinning rust,
> raise vfs.zfs.top_maxinflight so they'll do fewer seeks. I used to
> set it to 8192 on machines with 32GB of RAM. Raising
> vfs.zfs.resilver_min_time_ms to 5000 helps a little, too.

I tried this, but I found that whilst it could speed up the resilver (and scrubs) by as much as 25%, it also had a performance hit where reads (particularly streaming video, which is what my server mostly did) would "pause" and "stutter"; the balance came when I brought it back to around 200 * 15).

It would still stutter when running multiple streams (to the point of heavy load), but that was kinda expected... I tried to keep the load distributed off it and let the other front-end servers stream, and it seemed to result in a healthy balance.

> -Alan
> _______________________________________________
> freebsd-stable@freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-stable
> To unsubscribe, send any mail to "freebsd-stable-unsubscribe@freebsd.org"

From owner-freebsd-stable@freebsd.org Fri May 10 03:55:15 2019
Subject: Re: ZFS...
From: Michelle Sullivan
Date: Fri, 10 May 2019 13:55:09 +1000
To: Bob Bishop
Cc: FreeBSD-STABLE Mailing List
Message-id: <6B39E185-CAF7-4B7C-990E-BC56F37F3940@sorbs.net>
Michelle Sullivan
http://www.mhix.org/
Sent from my iPad

> On 09 May 2019, at 21:27, Bob Bishop wrote:
>
>> On 9 May 2019, at 12:17, Michelle Sullivan wrote:
>>
>> Michelle Sullivan
>> http://www.mhix.org/
>> Sent from my iPad
>>
>>> On 09 May 2019, at 17:46, Patrick M. Hausen wrote:
>>>
>>> Hi all,
>>>
>>>> Am 09.05.2019 um 00:55 schrieb Michelle Sullivan:
>>>> No, one disk in the 16-disk zRAID2... previously unseen, but it could be
>>>> the errors have occurred in the last 6 weeks... every time I reboot it
>>>> started resilvering, gets to 761M resilvered and then stops.
>>>
>>> 16 disks in *one* RAIDZ2 vdev? That might be the cause of your insanely
>>> long scrubs. In general it is not recommended, though I cannot find the
>>> source for that information quickly just now.
>>
>> I have seen posts on various lists stating don't go over 8... I know
>> people in Oracle; the word is it shouldn't matter... who do you believe?
>
> Inter alia it depends on the quality/bandwidth of disk controllers.

Interestingly, I just got Windows 7 installed on a USB stick with the Windows-based ZFS recovery tool... Now, scrubs and resilvers report around 70MB/s on all versions of FreeBSD I have tried (9.3 through 13-CURRENT), indeed even on my own version with the Broadcom native SAS driver replacing the FreeBSD one... The results are immediately different: it *says* it's using 1.6/1.7 cores and ~2G RAM, and getting a solid 384MBps (yes, B not b) with 100% disk I/O... that's a massive difference.

This is using the Windows 7 (SP1) built-in driver... I can only guess that has to be PCI bus handling differences, or the throughput report is wrong.
(Note "solid": it is fluctuating between 381 and 386, but 97% (ish - i.e. a guess) of the time at 384.)

>
>> Michelle
>>
>>> Kind regards,
>>> Patrick
>>> --
>>> punkt.de GmbH Internet - Dienstleistungen - Beratung
>>> Kaiserallee 13a Tel.: 0721 9109-0 Fax: -100
>>> 76133 Karlsruhe info@punkt.de http://punkt.de
>>> AG Mannheim 108285 Gf: Juergen Egeling
>
> --
> Bob Bishop t: +44 (0)118 940 1243
> rb@gid.co.uk m: +44 (0)783 626 4518

From owner-freebsd-stable@freebsd.org Fri May 10 10:01:33 2019
Subject: Re: ZFS...
From: Miroslav Lachman <000.fbsd@quip.cz>
Date: Fri, 10 May 2019 12:01:22 +0200
To: Alan Somers
Cc: FreeBSD-STABLE Mailing List, Dimitry Andric
Message-ID: <8e443083-1254-520b-014d-2f9a94008533@quip.cz>
Alan Somers wrote on 2019/05/09 14:50:

[...]

> On 11.3 and even much older releases, you can greatly speed up scrub
> and resilver by tweaking some sysctls. If you have spinning rust,
> raise vfs.zfs.top_maxinflight so they'll do fewer seeks. I used to
> set it to 8192 on machines with 32GB of RAM. Raising
> vfs.zfs.resilver_min_time_ms to 5000 helps a little, too.

I have this in sysctl.conf:
vfs.zfs.scrub_delay=0
vfs.zfs.top_maxinflight=128
vfs.zfs.resilver_min_time_ms=5000
vfs.zfs.resilver_delay=0

I found it somewhere in the mailing list discussing this issue in the past.

Isn't yours 8192 too much? The machine in question has 4x SATA drives on a very dumb and slow controller and only 5GB of RAM.

Even if I read this
vfs.zfs.top_maxinflight: Maximum I/Os per top-level vdev
I am still not sure what it really means and how I can "calculate" the optimal value.
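[Alan's answer later in the thread is that ZFS issues scrub I/Os in object order, while the vdev queue reorders whatever is currently outstanding into LBA order, so a deeper in-flight window means less head travel. A toy simulation of that effect — purely illustrative, not ZFS code; the function name, window sizes, and LBA counts are all made up:]

```python
import random

def total_seek_distance(lbas, window):
    """Service requests arriving in 'object order' (the given list order),
    but allow up to `window` outstanding requests to be reordered by
    picking whichever pending LBA is closest to the current head position
    (a crude stand-in for the vdev queue's elevator)."""
    pending, head, travelled = [], 0, 0
    stream = iter(lbas)
    exhausted = False
    while pending or not exhausted:
        # Keep the in-flight window full from the arrival stream.
        while len(pending) < window and not exhausted:
            try:
                pending.append(next(stream))
            except StopIteration:
                exhausted = True
        if not pending:
            break
        # Service the pending request closest to the current head position.
        nxt = min(pending, key=lambda lba: abs(lba - head))
        pending.remove(nxt)
        travelled += abs(nxt - head)
        head = nxt
    return travelled

random.seed(1)
arrivals = random.sample(range(100000), 5000)  # object order, random LBAs

shallow = total_seek_distance(arrivals, 32)    # small in-flight window
deep = total_seek_distance(arrivals, 1024)     # large in-flight window
assert deep < shallow  # more outstanding I/Os -> less head travel
```

[The same arrival stream costs far less total seek distance with the deeper window, which is why raising vfs.zfs.top_maxinflight speeds up scrubs on spinning disks at the cost of RAM and latency.]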
As Michelle pointed out, there is a drawback when sysctls are optimized for quick scrub, but this machine is only running a nightly backup script fetching data from other 20 machines, so this script sets the sysctls back to sane defaults during backup

sysctl vfs.zfs.scrub_delay=4 > /dev/null
sysctl vfs.zfs.top_maxinflight=32 > /dev/null
sysctl vfs.zfs.resilver_min_time_ms=3000 > /dev/null
sysctl vfs.zfs.resilver_delay=2 > /dev/null

At the end it reloads the optimized settings back from sysctl.conf

Miroslav Lachman

From owner-freebsd-stable@freebsd.org Fri May 10 14:53:50 2019
Subject: Re: ZFS...
From: Michelle Sullivan
Date: Sat, 11 May 2019 00:53:38 +1000
To: Miroslav Lachman <000.fbsd@quip.cz>, Alan Somers
Cc: Dimitry Andric, FreeBSD-STABLE Mailing List
Message-id: <85f44376-32eb-885e-1eb8-d3a2e204bc84@sorbs.net>
In-reply-to: <8e443083-1254-520b-014d-2f9a94008533@quip.cz>
Miroslav Lachman wrote:
> Alan Somers wrote on 2019/05/09 14:50:
>
> [...]
>
>> On 11.3 and even much older releases, you can greatly speed up scrub
>> and resilver by tweaking some sysctls. If you have spinning rust,
>> raise vfs.zfs.top_maxinflight so they'll do fewer seeks. I used to
>> set it to 8192 on machines with 32GB of RAM. Raising
>> vfs.zfs.resilver_min_time_ms to 5000 helps a little, too.
>
> I have this in sysctl.conf
> vfs.zfs.scrub_delay=0
> vfs.zfs.top_maxinflight=128
> vfs.zfs.resilver_min_time_ms=5000
> vfs.zfs.resilver_delay=0
>
> I found it somewhere in the mailing list discussing this issue in the past.
>
> Isn't yours 8192 too much? The machine in question has 4x SATA drives
> on a very dumb and slow controller and only 5GB of RAM.
>
> Even if I read this
> vfs.zfs.top_maxinflight: Maximum I/Os per top-level vdev
> I am still not sure what it really means and how I can "calculate" the
> optimal value.

I calculated it by looking at the iops using gstat and then multiplied that by the spindles in use. That seemed to give the optimum. Much lower and the drives had idle time... too much and it causes 'pauses' whilst the writes happen.

"Tuning" for me had no fine tuning; it seems very sledgehammerish... big changes are noticeable for better or worse; small changes you cannot tell by eye, and I guess measuring known operations (such as a controlled-environment scrub) might show differing results, but I suspect with other things going on these negligible changes are likely to be useless.
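[The rule of thumb described above — per-disk ops/s as observed in gstat, multiplied by the number of spindles — can be sketched in a few lines. The numbers below are hypothetical stand-ins, not measurements from the thread; only the 16-spindle count echoes the 16-disk RAIDZ2 discussed earlier:]

```python
# Michelle's rule of thumb: per-disk iops (read off gstat's ops/s column
# for a busy member disk) times the number of spindles in the pool.
per_disk_iops = 120   # hypothetical ops/s for one member disk under load
spindles = 16         # e.g. the 16-disk RAIDZ2 discussed in this thread

suggested = per_disk_iops * spindles
print("vfs.zfs.top_maxinflight ~", suggested)  # prints: vfs.zfs.top_maxinflight ~ 1920
```

[As she notes, the knob is coarse: treat the product as a starting point and watch gstat for idle drives (go higher) or read stalls (go lower).]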
> As Michelle pointed out, there is a drawback when sysctls are optimized for
> quick scrub, but this machine is only running a nightly backup script
> fetching data from other 20 machines, so this script sets the sysctls back
> to sane defaults during backup

Not really... optimizing for scrub should only affect the system whilst the scrubs and resilvers are actually happening; the rest of the time my systems were not affected (noticeably). My problem was a scrub would kick off and last a couple of weeks (1 week heavily preferring the scrub), causing the video streaming to become stuttery... which, watching a movie whilst this was happening, was really not good... especially as it wasn't for just a few seconds/minutes/hours.

> sysctl vfs.zfs.scrub_delay=4 > /dev/null
> sysctl vfs.zfs.top_maxinflight=32 > /dev/null
> sysctl vfs.zfs.resilver_min_time_ms=3000 > /dev/null
> sysctl vfs.zfs.resilver_delay=2 > /dev/null
>
> At the end it reloads the optimized settings back from sysctl.conf
>
> Miroslav Lachman

--
Michelle Sullivan
http://www.mhix.org/

From owner-freebsd-stable@freebsd.org Fri May 10 15:17:36 2019
Subject: Re: ZFS...
From: Alan Somers
Date: Fri, 10 May 2019 09:17:22 -0600
To: Miroslav Lachman <000.fbsd@quip.cz>
Cc: FreeBSD-STABLE Mailing List, Dimitry Andric

On Fri, May 10, 2019 at 4:01 AM Miroslav Lachman <000.fbsd@quip.cz> wrote:
>
> Alan Somers wrote on 2019/05/09 14:50:
>
> [...]
>
> > On 11.3 and even much older releases, you can greatly speed up scrub
> > and resilver by tweaking some sysctls. If you have spinning rust,
> > raise vfs.zfs.top_maxinflight so they'll do fewer seeks. I used to
> > set it to 8192 on machines with 32GB of RAM. Raising
> > vfs.zfs.resilver_min_time_ms to 5000 helps a little, too.
>
> I have this in sysctl.conf
> vfs.zfs.scrub_delay=0
> vfs.zfs.top_maxinflight=128
> vfs.zfs.resilver_min_time_ms=5000
> vfs.zfs.resilver_delay=0
>
> I found it somewhere in the mailing list discussing this issue in the past.
>
> Isn't yours 8192 too much? The machine in question has 4x SATA drives on a
> very dumb and slow controller and only 5GB of RAM.

I chose 8192 for a machine with 32GB of RAM, dozens of disks, and where resilver speed was more important than responsiveness. RAM usage does increase dramatically as you raise that sysctl, but so does resilver speed. Even top_maxinflight=8192 didn't max out the resilver speed; higher values produced still higher resilver speeds, but used too much RAM.

The reason that resilver speed depends on top_maxinflight is that ZFS issues the I/Os in object order, not LBA order, but vdev_queue will reorder them into LBA order. Allowing more I/Os to be in flight gives vdev_queue more power to reorder things.

-Alan

> Even if I read this
> vfs.zfs.top_maxinflight: Maximum I/Os per top-level vdev
> I am still not sure what it really means and how I can "calculate" the
> optimal value.
>
> As Michelle pointed out, there is a drawback when sysctls are optimized for
> quick scrub, but this machine is only running a nightly backup script
> fetching data from other 20 machines, so this script sets the sysctls back
> to sane defaults during backup
> sysctl vfs.zfs.scrub_delay=4 > /dev/null
> sysctl vfs.zfs.top_maxinflight=32 > /dev/null
> sysctl vfs.zfs.resilver_min_time_ms=3000 > /dev/null
> sysctl vfs.zfs.resilver_delay=2 > /dev/null
>
> At the end it reloads the optimized settings back from sysctl.conf
>
> Miroslav Lachman

From owner-freebsd-stable@freebsd.org Fri May 10 20:14:02 2019
Subject: FreeBSD CI Weekly Report 2019-05-05
From: Li-Wen Hsu
Date: Fri, 10 May 2019 15:41:04 -0400
To: freebsd-testing@freebsd.org
(bcc -current and -stable for more audience)

FreeBSD CI Weekly Report 2019-05-05
===================================

Here is a summary of the FreeBSD Continuous Integration results for the period from 2019-04-29 to 2019-05-05.

During this period, we have:

* 2372 builds (99.9% passed, 0.1% failed) were executed on aarch64, amd64, armv6, armv7, i386, mips, mips64, powerpc, powerpc64, powerpcspe, riscv64, sparc64 architectures for head, stable/12, stable/11 branches.
* 384 test runs (53.9% passed, 44.5% unstable, 1.6% exception) were executed on amd64, i386, riscv64 architectures for head, stable/12, stable/11 branches.
* 20 doc builds (100% passed)

(The statistics from experimental jobs are omitted)

If any of the issues found by CI are in your area of interest or expertise please investigate the PRs listed below.

The latest web version of this report is available at https://hackmd.io/s/B13k-VEoN and the archive is available at http://hackfoldr.org/freebsd-ci-report/; any help is welcome.
## Fixed Tests

* https://ci.freebsd.org/job/FreeBSD-stable-12-i386-test/
  * sys.kern.coredump_phnum_test.coredump_phnum
    https://svnweb.freebsd.org/changeset/base/346909
  * lib.libc.sys.sendfile_test.fd_positive_shm_v4
  * lib.libc.sys.sendfile_test.hdtr_negative_bad_pointers_v4
    https://svnweb.freebsd.org/changeset/base/346912
* https://ci.freebsd.org/job/FreeBSD-stable-11-i386-test/
  * lib.libc.sys.sendfile_test.fd_positive_shm_v4
  * lib.libc.sys.sendfile_test.hdtr_negative_bad_pointers_v4
    https://svnweb.freebsd.org/changeset/base/346911

## Failing Tests

* https://ci.freebsd.org/job/FreeBSD-head-i386-test/
  * sys.opencrypto.runtests.main
  * sys.netpfil.pf.forward.v6
  * sys.netpfil.pf.forward.v4
  * sys.netpfil.pf.set_tos.v4
* https://ci.freebsd.org/job/FreeBSD-stable-12-i386-test/
  * sys.netpfil.pf.forward.v6
  * sys.netpfil.pf.forward.v4
  * sys.netpfil.pf.set_tos.v4
  * lib.libc.regex.exhaust_test.regcomp_too_big
  * lib.libregex.exhaust_test.regcomp_too_big
* https://ci.freebsd.org/job/FreeBSD-stable-11-i386-test/
  * usr.bin.procstat.procstat_test.kernel_stacks
  * local.kyua.* (31 cases)
  * local.lutok.* (3 cases)

## Failing Tests (from experimental jobs)

* https://ci.freebsd.org/job/FreeBSD-head-amd64-test_zfs/
  There are ~60 failing cases, including flakey ones; see
  https://ci.freebsd.org/job/FreeBSD-head-amd64-test_zfs/lastCompletedBuild/testReport/
  for more details

## Disabled Tests

* lib.libc.sys.mmap_test.mmap_truncate_signal https://bugs.freebsd.org/211924
* sys.fs.tmpfs.mount_test.large https://bugs.freebsd.org/212862
* sys.fs.tmpfs.link_test.kqueue https://bugs.freebsd.org/213662
* sys.kqueue.libkqueue.kqueue_test.main https://bugs.freebsd.org/233586
* usr.bin.procstat.procstat_test.command_line_arguments https://bugs.freebsd.org/233587
* usr.bin.procstat.procstat_test.environment https://bugs.freebsd.org/233588

## New Issues

* https://bugs.freebsd.org/237641 Flakey test case: common.misc.t_dtrace_contrib.tst_dynopt_d
* https://bugs.freebsd.org/237652 tests.hotspare.hotspare_test.hotspare_snapshot_001_pos timeout since somewhere in (r346814, r346845]
* https://bugs.freebsd.org/237655 Non-deterministic panic when running pf tests in interface ioctl code (NULL passed to strncmp)
* https://bugs.freebsd.org/237656 "Freed UMA keg (rtentry) was not empty (18 items). Lost 1 pages of memory." seen when running sys/netipsec tests
* https://bugs.freebsd.org/237657 sys.kern.pdeathsig.signal_delivered_ptrace timing out periodically on i386

## Open Issues

* https://bugs.freebsd.org/237077 possible race in build: /usr/src/sys/amd64/linux/linux_support.s:38:2: error: expected relocatable expression
* https://bugs.freebsd.org/237403 Tests in sys/opencrypto should be converted to Python3

### Cause build fails

* [233735: Possible build race: genoffset.o /usr/src/sys/sys/types.h: error: machine/endian.h: No such file or directory](https://bugs.freebsd.org/233735)
* [233769: Possible build race: ld: error: unable to find library -lgcc_s](https://bugs.freebsd.org/233769)

### Others

[Tickets related to testing@](https://preview.tinyurl.com/y9maauwg)