Date:      Thu, 7 Aug 2014 10:33:09 +0200 (CEST)
From:      Trond Endrestøl <Trond.Endrestol@fagskolen.gjovik.no>
To:        FreeBSD questions <freebsd-questions@freebsd.org>
Subject:   Re: some ZFS questions
Message-ID:  <alpine.BSF.2.11.1408071024190.64214@mail.fig.ol.no>
In-Reply-To: <201408070816.s778G9ug015988@sdf.org>
References:  <201408070816.s778G9ug015988@sdf.org>

On Thu, 7 Aug 2014 03:16 -0500, Scott Bennett wrote:

>      On Wed, 6 Aug 2014 03:49:37 -0500 Andrew Berg
> <aberg010@my.hennepintech.edu> wrote:
> >On 2014.08.06 02:32, Scott Bennett wrote:
> >>      I have a number of questions that I need answered before I go about
> >> setting up any raidz pools.  They are in no particular order.
> >> 
> >> 	1) What is the recommended method of using geli encryption with
> >> 	ZFS?
> >
> >> Does one first create .eli devices and then specify those
> >> 	.eli devices in the zpool(8) command as the devices to include
> >> 	in the pool? 
> >This.
> 
>      Oh.  Well, that's doable, if not terribly convenient, but it brings up
> another question.  After a reboot, for example, what does ZFS do while the
> array of .eli devices is being attached one by one?  Does it see the first
> one attached without the others in sight and decide it has a failed pool?
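
Yes, that is the usual order of things. A minimal sketch for a 
three-disk raidz, where the pool name "tank" and the device names 
are only placeholders:

  geli init -s 4096 /dev/ada0p3    # one-time setup, asks for a passphrase
  geli init -s 4096 /dev/ada1p3
  geli init -s 4096 /dev/ada2p3
  geli attach /dev/ada0p3          # creates /dev/ada0p3.eli
  geli attach /dev/ada1p3
  geli attach /dev/ada2p3
  zpool create tank raidz ada0p3.eli ada1p3.eli ada2p3.eli

As for what happens across a reboot: list the providers in 
geli_devices in /etc/rc.conf and rc.d/geli will attach them before 
rc.d/zfs does its mounting. I haven't observed what ZFS thinks in 
the window while providers appear one by one, so test the reboot 
path with a scratch pool before trusting it with real data.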
> >
> >> 	2) How does one start or stop a pool?
> >That depends on what you mean by 'start' and 'stop'. I am guessing by what you
> >described that you mean 'import' and 'export'. I'm not sure how to prevent
> 
>      Oh.  Okay.  At least there is some way to accomplish the same thing.
> 
> >automatic import for pools that were imported on the same system and not
> >exported prior to shutdown, but I am sure it can be done. Resilvering does not
> 
>      Maybe someone else will say how it can be done.
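
For what it's worth: at boot, FreeBSD imports the pools recorded in 
/boot/zfs/zpool.cache. Exporting a pool removes it from that cache, 
so an exported pool stays untouched until you bring it back by hand:

  zpool export mypool     # flushes it and drops it from zpool.cache
  zpool import mypool     # later, reattach it

("mypool" is a placeholder, of course.)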
> 
> >mercilessly thrash disks; standard reads and writes are given higher priority
> >in the scheduler than resilver and scrub operations.
> 
>      If two pools use different partitions on a drive and both pools are
> rebuilding those partitions at the same time, then how could ZFS *not*
> be hammering the drive?

Why would you place multiple pools on the very same drives?

The only real-world example I can think of is having separately 
mirrored boot and root pools on the same drives.

E.g., bootpool using ada0p2 and ada1p2 as its mirrors, and rootpool 
using ada0p3 and ada1p3 as its mirrors, with ada0p1 and ada1p1 being 
the GPT boot partitions, and possibly with swap partitions on ada0p4 
and ada1p4.
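
In gpart(8) terms, something like this per drive, the sizes and 
labels being only an illustration:

  gpart create -s gpt ada0
  gpart add -t freebsd-boot -s 512k -l boot0     ada0
  gpart add -t freebsd-zfs  -s 2g   -l bootpool0 ada0
  gpart add -t freebsd-zfs  -s 100g -l rootpool0 ada0
  gpart add -t freebsd-swap         -l swap0     ada0
  gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0

and the same again for ada1 with the labels bumped accordingly.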

> The access arm would be doing almost nothing but endless series of 
> long seeks back and forth between the two partitions involved.  
> When you're talking about hundreds of gigabytes to be written to 
> each partition, it could take months or even years to complete, 
> during which time something else is almost certain to fail and halt 
> the rebuilds.
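
(The throttling Andrew mentioned is tunable, by the way. On 9.x/10.x 
the knobs are sysctls along the lines of vfs.zfs.scrub_delay and 
vfs.zfs.resilver_delay; check what your release actually offers:

  sysctl -a | grep -E 'vfs\.zfs\.(scrub|resilver)'

Whether any tuning saves you from the seek pattern you describe when 
two pools resilver on the same spindles, I honestly don't know.)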
> >
> >> 	3) If a raidz2 or raidz3 loses more than one component, does one
> >> 	simply replace and rebuild all of them at once?  Or is it necessary
> >> 	to rebuild them serially?  In some particular order?
> >AFAIK, replacement of several disks can't be done in a single command, but I
> >don't think you need to wait for a resilver to finish on one before you can
> >replace another.
> 
>      That looks good.  What happens if a "zpool replace failingdrive newdrive"
> is running when the failingdrive actually fails completely?
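
My understanding is that you issue one replace per disk and ZFS folds 
them into the same resilver pass. A sketch, with da2/da3 failing and 
da6/da7 as the newcomers (all names placeholders):

  zpool replace tank da2 da6
  zpool replace tank da3 da7
  zpool status tank         # shows both replacements resilvering

If the failing drive dies outright mid-replace, the resilver should 
carry on from the surviving raidzN members, provided the pool still 
has redundancy to spare. I have not had the pleasure of testing that 
scenario myself.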
> >
> >> 	5) When I upgrade to amd64, the usage would continue to be low-
> >> 	intensity as defined above.  Will the 4 GB be enough?  I will not
> >> 	be using the "deduplication" feature at all.
> >It will be enough unless you are managing tens of TB of data. I recommend
> >setting an ARC limit of 3GB or so. There is a patch that makes the ARC handle
> 
>      3 GB for the ARC plus whatever is needed for FreeBSD itself would not
> leave much room for applications to run.  Maybe I won't be able to use ZFS
> if it requires so vastly more page-fixed memory than UFS. :-(
> 
> >memory pressure more gracefully, but it's not committed yet. I highly recommend
> >moving to 64-bit as soon as possible.
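
Capping the ARC, as Andrew suggests, is one line in 
/boot/loader.conf, taking effect at the next boot:

  vfs.zfs.arc_max="3G"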
> 
>      I intend to do so, but "as soon as possible" will be after all this
> disk trouble and disk reconfiguration have been resolved.  It will be done
> via an in-place upgrade from source, so I need to have a place to run
> buildworld and buildkernel.  Before doing an installkernel and installworld,
> I need also to have a place to run full backups.  I have not had a place to
> store new backups for the last three months, which is making me more unhappy
> by the day.  I really have to get the disk work *done* before I can move
> forward on anything else, which is why I'm trying to find out whether I can
> actually use ZFS raidzN for that purpose while still on i386.  As far as I
> can see, performance will not be an issue until later, if ever.  I just need to
> know whether I can use it at all with my presently installed OS or will
> instead have to use gvinum(8) raid5 and hope for minimal data corruption.
> (At least only one .eli device would be needed in that case, not the M+N
> .eli devices that would be required for a raidzN pool.) Unfortunately,
> ideal conditions for ZFS are not an available option for now.
>      Further, the real memory on the system will not change by converting
> to amd64, although at least the kernel should ignore somewhat less of that
> real memory than it does on i386.
> >
> >> 	6) I have a much fancier computer sitting unused that I intend to
> >> 	put into service fairly soon after getting my current disk and data
> >> 	situation resolved.  The drives that would be in use for raidz
> >> 	pools I would like to attach to that system when it is ready.  It
> >> 	also has 4 GB of memory, but would start out as an amd64 system and
> >> 	might well have another 2 GB or 4 GB added at some point(s), though
> >> 	not immediately.  What problems/pitfalls/precautions would I need
> >> 	to have in mind and be prepared for in order to move those drives
> >> 	from the current system to that newer one?
> >It should be pretty painless to move pools from system to system. Exporting a
> >pool from the old system is recommended before moving, but not necessary.
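
The move itself is just:

  zpool export tank     # on the old machine, before pulling the drives
  zpool import tank     # on the new machine; add -f if it was never exported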
> 
>      One thing I ran across was the following from the zpool(8) man page.
> 
>   "For pools to be portable, you must give the zpool command whole
>   disks, not just slices, so that ZFS can label the disks with portable
>   EFI labels. Otherwise, disk drivers on platforms of different endian-
>   ness will not recognize the disks."
> 
> If I have one raidzN comprising .eli partitions and another raidzN comprising
> a set of unencrypted partitions on those same drives, will I be able to
> export both raidzN pools from a 9-STABLE system and then import them
> into, say, a 10-STABLE system on a different Intel amd64 machine?  By your
> answer to question 1), it would seem that I need to have two raidzN pools,
> although there might be a number of benefits to having both encrypted and
> unencrypted file systems allocated inside a single pool were that an option.
> Also, I would like to use 100 - 200 GB on each drive for other purposes that
> might well not involve ZFS, although there may be ways I could avoid putting
> those functions onto the raidzNs' drives.
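
For the encrypted pool, remember the .eli providers must be attached 
before the import can find its vdevs:

  geli attach /dev/ada0p3
  geli attach /dev/ada1p3
  zpool import encpool      # the plain pool imports as usual

(Pool and device names are placeholders.) As for the endianness 
caveat you quoted, both machines are little-endian Intel, so that 
particular warning should not bite when going from 9-STABLE i386 to 
10-STABLE amd64.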
>      Thanks for your answers.  I am beginning now to fill in some of the
> pieces to the puzzle, though that process still has some way to go.
> 
> 
>                                   Scott Bennett, Comm. ASMELG, CFIAG
> **********************************************************************
> * Internet:   bennett at sdf.org   *xor*   bennett at freeshell.org  *
> *--------------------------------------------------------------------*
> * "A well regulated and disciplined militia, is at all times a good  *
> * objection to the introduction of that bane of all free governments *
> * -- a standing army."                                               *
> *    -- Gov. John Hancock, New York Journal, 28 January 1790         *
> **********************************************************************

-- 
+-------------------------------+------------------------------------+
| Vennlig hilsen,               | Best regards,                      |
| Trond Endrestøl,              | Trond Endrestøl,                   |
| IT-ansvarlig,                 | System administrator,              |
| Fagskolen Innlandet,          | Gjøvik Technical College, Norway,  |
| tlf. mob.   952 62 567,       | Cellular...: +47 952 62 567,       |
| sentralbord 61 14 54 00.      | Switchboard: +47 61 14 54 00.      |
+-------------------------------+------------------------------------+


