From: David Christensen <dpchrist@holgerdanske.com>
To: freebsd-questions@freebsd.org
Subject: Re: Adding to a zpool -- different redundancies and risks
Date: Wed, 11 Dec 2019 21:11:18 -0800

On 2019-12-11 09:39, Norman Gray wrote:
>
> Greetings.
>
> I'd like to add a new VDEV to a pool, and I'm being warned (slightly
> to my surprise) that there's a 'mismatched replication level'.  I'm
> trying to get a sense of how much of a risk I'd be running by forcing
> this with -f.
>
> Context:
>
>   * I currently have two raidz2 VDEVs composed of nine 5.5TB disks
>     each (thus 2 x ~40TB available)
>   * I'd like to add another raidz2 VDEV composed of six 12TB disks
>     (thus adding a VDEV of ~48TB, roughly the same size as the other
>     two) -- this is what prompts the warning about replication level
>   * The storage is a local mirror which it would be very annoying to
>     lose, but it's not holding unique copies of anything
>   * I don't like using -f options unless I'm pretty damn confident I
>     know what's happening
>
> (I wouldn't set this up in quite this way from scratch, but this is
> an old-ish server, and a small budget windfall has allowed me to max
> out the remaining available slots with new disks.)
>
> I can appreciate that the ideal planned setup would, in principle, be
> to have all the VDEVs be symmetrical in size and number of disks.
> Is a VDEV mix merely 'not ideal', or 'not great but you'll be fine',
> or Bad?
>
> My mental model of what's going on suggests that, since the pool
> simply stripes across the VDEVs, it doesn't have to care how the
> VDEVs themselves are structured, so a 9x5.5 raidz2 and a 6x12 raidz2
> would be roughly equally used, and I can't see why there would be a
> performance or a utilisation difference between the two (but I still
> count myself as a ZFS tyro).
>
> I can see that there would be a reliability issue if the various
> VDEVs were mirrors of different sizes -- this would create different
> amounts of resilience, so the warning makes sense in an 'are you
> sure?' way.  If the VDEVs were different sizes and the pool were
> mirroring over them, then there would obviously be a utilisation
> issue.
>
> Though [1] and [2] both illustrate only mixing VDEVs of the same
> type, [2] says merely that 'When using RAIDZ vdevs, it is also a
> good idea to keep them at the same width and of the same type.' and
> illustrates a 4-wide plus 8-wide raidz2 as 'not horrible'.  The
> forum post at [3] asks essentially the same question as this email,
> but receives a rather oblique answer.  The question at [4] gets a
> confident answer which I don't _think_ makes complete sense.
>
> Since [1] and [2] are both more authoritative and match my own
> understanding, I'm inclined to believe that adding this new VDEV
> would be less than perfect, but reasonable.  Am I deceiving myself?
>
> Thanks for any advice you can offer,
>
> Norman
>
> [1] https://forums.freenas.org/index.php?threads/slideshow-explaining-vdev-zpool-zil-and-l2arc-for-noobs.7775/
> [2] https://www.ixsystems.com/community/resources/introduction-to-zfs.111/
> [3] https://forums.freebsd.org/threads/zfs-mismatched-repli-levels.28226/
> [4] https://serverfault.com/questions/522782/zfs-with-unsymmetric-vdevs

Please post:

1. The 'zpool create ...' command you used to create the existing pool.
2. The output of 'zpool status' for the existing pool.
3. The output of 'zpool list' for the existing pool.
4. The 'zpool add ...' command you are contemplating.

So, you have 24 drives in a 24-drive cage?

What are your space and performance goals?

What are your sustainability goals as drives and/or VDEVs fail?

David
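
P.S.  For concreteness, here is a sketch of the sort of thing I am
asking about.  The pool name 'tank' and the device names below are my
guesses, not anything you have posted:

    zpool status tank
    zpool list tank

and the addition you are contemplating would presumably look
something like:

    zpool add tank raidz2 da18 da19 da20 da21 da22 da23

zpool(8) will refuse to add a VDEV whose replication level does not
match the existing VDEVs unless you override the check with
'zpool add -f'.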