From owner-freebsd-fs@freebsd.org Wed May 18 10:38:22 2016
From: Palle Girgensohn
To: Sean Chittenden
Cc: freebsd-fs@freebsd.org
Subject: Re: Best practice for high availability ZFS pool
Date: Wed, 18 May 2016 12:38:20 +0200

> On 18 May 2016, at 09:58, Sean Chittenden wrote:
>
> https://www.freebsdfoundation.org/wp-content/uploads/2015/12/vol2_no4_groupon.pdf
>
> mps(4) was good to us. What's your workload? -sc

Have to check details for peaks, but the average is around 0.8 MByte/s. Not much. It will grow.

>
> --
> Sean Chittenden
> sean@chittenden.org
>
>
>> On May 18, 2016, at 03:53, Palle Girgensohn wrote:
>>
>>> On 17 May 2016, at 18:13, Joe Love wrote:
>>>
>>>> On May 16, 2016, at 5:08 AM, Palle Girgensohn wrote:
>>>>
>>>> Hi,
>>>>
>>>> We need to set up a ZFS pool with redundancy. The main goal is high availability - uptime.
>>>>
>>>> I can see a few paths to follow:
>>>>
>>>> 1. HAST + ZFS
>>>>
>>>> 2. Some sort of shared storage, two machines sharing a JBOD box.
>>>>
>>>> 3. ZFS replication (zfs snapshot + zfs send | ssh | zfs receive)
>>>>
>>>> 4. Using something other than ZFS, even a different OS if required.
>>>>
>>>> My main concern with HAST+ZFS is performance. Google offers some insights here, but I find mainly unsolved problems. Please share any success stories or other experiences.
>>>>
>>>> Shared storage still has a single point of failure, the JBOD box. Apart from that, is there even any support for the kind of storage PCI cards that support dual head for a storage box? I cannot find any.
>>>>
>>>> We are running with ZFS replication today, but it is just too slow for the amount of data.
>>>>
>>>> We would prefer to keep ZFS, as we already have a rather big (~30 TB) pool and our tools, scripts, and backups all use ZFS, but if there is no solution using ZFS, we are open to alternatives. Nexenta springs to mind, but I believe it uses shared storage for redundancy, so it does have single points of failure?
>>>>
>>>> Any other suggestions? Please share your experience. :)
>>>>
>>>> Palle
>>>
>>> I don't know if this falls into the realm of what you want, but BSDMag just released an issue with an article entitled "Adding ZFS to the FreeBSD dual-controller storage concept."
>>> https://bsdmag.org/download/reusing_openbsd/
>>>
>>> My understanding is that in this setup the only single point of failure is the backplane that the drives connect to. Depending on your controller cards, this could be alleviated by simply using multiple drive shelves and putting only one drive per shelf in each vdev (then stripe or whatnot over your vdevs).
>>>
>>> It might not be what you're after, as it's basically two systems with their own controllers and a shared set of drives. Expanding it from the virtual world to real physical systems will probably require additional variations.
>>> I think the TrueNAS system (with HA) is set up similarly to this, only without the drives being split between separate controllers, but someone with more in-depth knowledge would need to confirm or deny this.
>>>
>>> -Jo
>>
>> Hi,
>>
>> Do you know any specific controllers that work with dual head?
>>
>> Thanks,
>> Palle
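
For context, option 3 in the quoted message (snapshot-based replication) is usually scripted along these lines. This is a minimal sketch only; the dataset name tank/data, the receiving host name "standby", and the daily snapshot labels are illustrative assumptions, not taken from the thread:

  #!/bin/sh
  # Minimal incremental zfs send/receive sketch (illustrative names only).
  SRC=tank/data                 # local dataset to replicate (assumed)
  DST=tank/data                 # dataset on the standby host (assumed)
  REMOTE=standby                # receiving host (assumed)
  PREV=$(date -v-1d +%Y%m%d)    # yesterday's snapshot, assumed to exist on both sides
  CUR=$(date +%Y%m%d)

  # Take today's snapshot on the sender.
  zfs snapshot "${SRC}@${CUR}"

  # Send only the blocks changed since the previous snapshot and apply them remotely.
  zfs send -i "${SRC}@${PREV}" "${SRC}@${CUR}" | ssh "${REMOTE}" zfs receive -F "${DST}"

The incremental form is the relevant point for a ~30 TB pool: a full send would move the whole dataset, while an incremental send transfers only the blocks changed between the two snapshots, and the snapshot interval bounds how much data can be lost on failover.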