Date: Mon, 30 Jan 2017 17:52:01 -0700 (MST)
From: Warren Block <wblock@wonkity.com>
To: David Christensen
cc: freebsd-questions@freebsd.org
Subject: Re: FreeBSD 11.0-RELEASE-p7 i386 system drive imaging and migration

On Mon, 30 Jan 2017, David Christensen wrote:

> On 01/30/17 07:28, Warren Block wrote:
>> On Sun, 29 Jan 2017, David Christensen wrote:
>>
>>>> Writing SSDs with dd is not good, limiting their wear leveling.
>>>
>>> That's why I used zcat rather than dd for writing to the cloned SSD.
>>> If/when I know enough to use zfs send/receive, that will be best.
>>
>> zcat is no different than dd in this case.  When you write a binary
>> image, the SSD can't tell which blocks are truly in use, because they
>> have all been written.
>
> Taking the image with 'dd' will grab all blocks -- in-use, used, never
> used (zero-freed and available for writing).  On restoration, it all
> gets written.  Yes, it's wasteful.  But it's 2+ steps I can do by hand
> off the top of my head, rather than 18+ steps, most of which I've
> never done.
>
> I used 'zcat' in the hope that many 512 byte blocks would be sent to
> the SSD per system call, rather than 'dd' making one system call for
> each and every 512 byte block.  (I also experimented with 'bs=1M', but
> adding 'conv=sync' resulted in a bad destination image.)  Given the
> microcontroller and RAM buffer in the SSD, it might not matter.

The number of blocks written is the important part, and that is the
same either way.  Because every block of the disk or filesystem has
been written, the SSD sees them all as in use.  Overprovisioning, by
leaving part of the disk unpartitioned or with an unused partition,
would help.
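
For what it's worth, a rough sketch of that overprovisioning idea when
setting up the clone target.  The device name (ada1) and the 100g
partition size are stand-in values only, not anything from your setup:

    # GPT scheme on the target disk
    gpart create -s gpt ada1

    # one UFS partition smaller than the disk; the remainder stays
    # unpartitioned as spare area for the controller
    gpart add -t freebsd-ufs -a 1m -s 100g ada1

    # newfs -t sets the TRIM-enable flag, -U enables soft updates
    newfs -t -U /dev/ada1p1

However the partition itself is filled, the unpartitioned tail of the
disk never gets written, so the controller always has known-free flash
to work with.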
The only time not to give dd a larger buffer size is when you want it
to write odd multiples of 512 bytes.  Otherwise it always does better
with at least bs=32k, and for flash it might as well be bs=1m.  If it
errors out, there is a problem with the source file or, more likely,
the destination device.  I don't generally use conv=sync: it pads
short reads out to the full block size with zeros, which is probably
why the bs=1M copy came out bad.  If there is an error, I don't want
dd to pad or skip past it, I want it to just fail.
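
Concretely, something along these lines, with ada0/ada1 and the image
name as placeholders only:

    # take the image with a 1 MB buffer, compressing on the fly
    dd if=/dev/ada0 bs=1m | gzip > ada0.img.gz

    # write it back the same way; no conv=sync, so any error stops
    # the copy instead of being papered over with zero padding
    zcat ada0.img.gz | dd of=/dev/ada1 bs=1m

dd prints its record counts and throughput when it finishes, which is
a quick sanity check that nothing was truncated along the way.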