Date: Fri, 3 Jun 2016 09:34:24 -0500 (CDT)
From: "Valeri Galtsev" <galtsev@kicp.uchicago.edu>
To: "Julien Cigar"
Cc: "Steve O'Hara-Smith", freebsd-questions@freebsd.org
Subject: Re: redundant storage
Message-ID: <61821.128.135.52.6.1464964464.squirrel@cosmo.uchicago.edu>
In-Reply-To: <20160603115020.GN95511@mordor.lan>
References: <20160603083843.GK95511@mordor.lan>
    <20160603104138.fdf3c0ac4be93769be6da401@sohara.org>
    <20160603101446.GM95511@mordor.lan>
    <20160603114746.6b75e6e79ecd51fe14311e40@sohara.org>
    <20160603115020.GN95511@mordor.lan>

On Fri, June 3, 2016 6:50 am, Julien Cigar wrote:
> On Fri, Jun 03, 2016 at 11:47:46AM +0100, Steve O'Hara-Smith wrote:
>> On Fri, 3 Jun 2016 12:14:46 +0200 Julien Cigar wrote:
>>
>> > On Fri, Jun 03, 2016 at 10:41:38AM +0100, Steve O'Hara-Smith wrote:
>> > > Hi,
>> > >
>> > > Just one change - don't use RAID1, use ZFS mirrors. ZFS does
>> > > better RAID than any hardware controller.
>> >
>> > right.. I must admit that I haven't looked at ZFS yet (I'm still
>> > using UFS + gmirror), but this will be an opportunity to do so..!
>> >
>> > Does ZFS play well with HAST?
>>
>> Never tried it, but it should work well enough; ZFS sits on top of
>> GEOM providers, so it should be possible to use the pool on the
>> primary.
>>
>> One concern would be that since all reads come from local storage,
>> the secondary machine never gets scrubbed and silent corruption never
>> gets detected on the secondary. A periodic (say weekly) switchover
>> and scrub takes care of this concern. Silent corruption is rare, but
>> the bigger the pool and the longer it's used, the more likely it is
>> to happen eventually; detection and repair of this is one of ZFS's
>> advantages over hardware RAID, so it's good not to defeat it.
>
> Thanks, I'll read a bit on ZFS this weekend..!
>
> My ultimate goal would be that the HAST storage survives a hard
> reboot / an unplugged network cable / ... during a heavy I/O write,
> and that the switch between the two nodes is transparent to the
> clients, without any data loss of course... feasible or utopian?
> Needless to say, what I want to avoid at all costs is the storage
> becoming corrupted and unrecoverable..!
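
For illustration, the ZFS-on-HAST arrangement discussed above might look
roughly like the sketch below. The HAST resource names (disk0, disk1) and
the pool name (tank) are invented for the example, the resources would
have to be defined in /etc/hast.conf on both nodes first, and this is a
rough outline rather than a tested recipe:

  # on the node that should serve the data, take the primary role so
  # the providers show up under /dev/hast/
  hastctl role primary disk0
  hastctl role primary disk1

  # build the ZFS mirror on top of the HAST providers: ZFS handles
  # redundancy and scrubbing, HAST handles the node-to-node copy
  zpool create tank mirror /dev/hast/disk0 /dev/hast/disk1

  # periodic scrub; per the advice above, swap roles occasionally
  # (export the pool, switch hastctl roles, import on the other node)
  # so the secondary's disks get scrubbed as well
  zpool scrub tank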
Sounds pretty much like a distributed file system solution. I tried one
(MooseFS) which I gave up on, and after I asked (on this list) for advice
about other options the next candidate emerged: GlusterFS, which I
haven't had a chance to set up yet. You may want to search this list's
archives; the experts here gave me really good advice.

Valeri

>
>>
>> Drive failures on the primary will wind up causing both the primary
>> and the secondary to be rewritten when the drive is replaced - this
>> could probably be avoided by switching primaries and letting HAST
>> deal with the replacement.
>>
>> Another very minor issue would be that any corrective rewrites (for
>> detected corruption) will happen on both copies, but that's harmless
>> and there really should be *very* few of these.
>>
>> One final concern, but it's purely HAST and not really ZFS: writing
>> a large file flat out will likely saturate your LAN, with half the
>> capacity going to copying the data for HAST. A private backend link
>> between the two boxes would be a good idea (or 10 gigabit ethernet).
>
> yep, that's what I had in mind..! one NIC for the replication between
> the two HAST nodes, and one (CARP) NIC by which clients access the
> storage..
>
>>
>> > > On Fri, 3 Jun 2016 10:38:43 +0200 Julien Cigar wrote:
>> > >
>> > > > Hello,
>> > > >
>> > > > I'm looking for a low-cost redundant HA storage solution for
>> > > > our (small) team here (~30 people). It will be used to store
>> > > > files generated by some webapps, to provide a redundant
>> > > > dovecot (imap) server, etc.
>> > > >
>> > > > For the hardware I have to go with HP (no choice), so I planned
>> > > > to buy 2 x HP ProLiant DL320e Gen8 v2 E3-1241v3 (768645-421)
>> > > > with 4 x WD Re 4TB SATA 3.5in 6Gb/s 7200rpm 64MB buffer
>> > > > (WD4000FYYZ) in a RAID1 config (the machine has a Smart Array
>> > > > P222 controller, which is apparently supported by the ciss
>> > > > driver).
>> > > >
>> > > > On the FreeBSD side I plan to use HAST with CARP, and the
>> > > > volumes will be exported through NFSv4.
>> > > >
>> > > > Any comments on this setup (or other recommendations)? :)
>> > > >
>> > > > Thanks!
>> > > > Julien
>> > >
>> > > --
>> > > Steve O'Hara-Smith
>>
>> --
>> Steve O'Hara-Smith            | Directable Mirror Arrays
>> C:>WIN                        | A better way to focus the sun
>> The computer obeys and wins.  | licences available see
>> You lose and Bill collects.   | http://www.sohara.org/
>
> --
> Julien Cigar
> Belgian Biodiversity Platform (http://www.biodiversity.be)
> PGP fingerprint: EEF9 F697 4B68 D275 7B11 6A25 B2BB 3710 A204 23C0
> No trees were killed in the creation of this message.
> However, many electrons were terribly inconvenienced.

++++++++++++++++++++++++++++++++++++++++
Valeri Galtsev
Sr System Administrator
Department of Astronomy and Astrophysics
Kavli Institute for Cosmological Physics
University of Chicago
Phone: 773-702-4247
++++++++++++++++++++++++++++++++++++++++
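
For reference, the "one NIC for replication between the two HAST nodes,
one (CARP) NIC for the clients" arrangement quoted above might be
sketched roughly as follows. Host names, device names, interface names,
addresses and the pass phrase are all placeholders, not a tested
configuration:

  # /etc/hast.conf (same file on both nodes); replication traffic
  # stays on the private back-to-back link (em1 below)
  resource disk0 {
          on filer-a {
                  local /dev/da2
                  remote 192.168.100.2
          }
          on filer-b {
                  local /dev/da2
                  remote 192.168.100.1
          }
  }

  # /etc/rc.conf fragment on filer-a (filer-b is the same apart from
  # its own addresses and a higher advskew); carp(4) must be loaded,
  # and clients always talk to the shared CARP address 10.0.0.10
  hastd_enable="YES"
  ifconfig_em0="inet 10.0.0.11 netmask 255.255.255.0"
  ifconfig_em0_alias0="inet vhid 1 advskew 0 pass hastpass alias 10.0.0.10/32"
  ifconfig_em1="inet 192.168.100.1 netmask 255.255.255.0"

The NFSv4 exports would then follow the CARP address, i.e. be served
from whichever node currently holds the primary HAST role.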