Date:      Sat, 13 Feb 2010 02:51:27 +0100
From:      Philipp Wuensche <cryx-freebsd@h3q.com>
To:        Merijn Verstraaten <merijn@inconsistent.nl>
Cc:        Christer Solskogen <christer.solskogen@gmail.com>, freebsd-jail@freebsd.org
Subject:   Re: Fwd: Jailcfg - A new tool for creating small(!) jails
Message-ID:  <4B76059F.9010700@h3q.com>
In-Reply-To: <op.u71ke4f84534sa@twilight.fritz.box>
References:  <c1a0d1561002110733y575d0681t4feb917deabce531@mail.gmail.com> <c1a0d1561002112323h1902248bj7be343d4e1083687@mail.gmail.com> <alpine.BSF.2.00.1002120250310.61799@pragry.qngnvk.ybpny> <4B75F83E.4000400@h3q.com> <op.u71ke4f84534sa@twilight.fritz.box>

Merijn Verstraaten wrote:
> On Sat, 13 Feb 2010 01:54:22 +0100, Philipp Wuensche
> <cryx-freebsd@h3q.com> wrote:
>>> The only data that is collected after that is user data which is a good
>>> thing with no extra cost of system mount points and disk usage.
>>
>> That's only true until the first update of the FreeBSD userland inside
>> the jail. The moment you need to update the userland inside the jail,
>> it will use additional space and all the advantages of this idea are
>> gone.
> 
> This is true, but not much of a problem in practice.

As you already explained, this heavily depends on what your practice is!

If you are in full control of each and every jail you run, this is a
workable practice. If you run a shared server with lots of people
managing the installed ports in their jails on their own, it may get
complicated: you need to take into account different settings for
ports, configuration files in odd locations, user data outside the
nullfs mount, and so on.

This setup also requires you to restart every jail for even minor
userland updates, or to start syncing those minor updates into every
jail. That can be automated, of course.
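A minimal sketch of such automation; the paths (/jails/base as the
updated master userland, /jails/j1 and /jails/j2 as jail roots) and
the exclude list are made-up examples, not taken from anyone's actual
setup:

```shell
# Sketch only: push an updated userland from a hypothetical master
# tree into every jail root. Adjust paths and excludes to taste.
BASE=/jails/base              # freshly updated FreeBSD userland

for jroot in /jails/j1 /jails/j2; do
    # --delete keeps each jail's system files identical to the base;
    # the excludes protect per-jail configuration, ports and data.
    rsync -a --delete \
        --exclude /etc \
        --exclude /usr/local \
        --exclude /var \
        "$BASE"/ "$jroot"/
done
```

Note that each synced copy then consumes its own disk space, which is
exactly the cost discussed above.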

>> Using clone will also create a direct dependency between the snapshots
>> and the cloned filesystems. As long as the clone exists, the snapshot
>> has to be kept. This is only resolvable by using zfs send/recv which
>> will, again, use additional space.
> 
> I don't really see how the dependency is an issue. Could you perhaps
> explain how/why this matters?

In your setup it doesn't matter, as you nuke & pave and mount userdata
via nullfs; that's the key point here. But people tend to think a
cloned filesystem is independent of its snapshot and start to use it
that way.

A common pitfall is combining snapshot, clone and rollback:

% zfs create exports/zones/base
% zfs snapshot exports/zones/base@RELEASE-p1
# *updatemagic*
% zfs snapshot exports/zones/base@RELEASE-p2
% zfs clone exports/zones/base@RELEASE-p2 export/zones/jail
% zfs rollback exports/zones/base@RELEASE-p1
cannot rollback to 'exports/zones/base@RELEASE-p1': more recent
snapshots exist
use '-r' to force deletion of the following snapshots:
exports/zones/base@RELEASE-p2
% zfs rollback -r exports/zones/base@RELEASE-p1
cannot rollback to 'exports/zones/base@RELEASE-p1': clones of previous
snapshots exist
use '-R' to force deletion of the following clones and dependents:
export/zones/jail

If "export/zones/jail" contained user data (in your setup it doesn't),
you would have a problem.
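For completeness, the send/recv route mentioned above could look
roughly like this; the target name exports/zones/jail2 is a made-up
example following the transcript's naming:

```shell
# Sketch: replicate the snapshot into an independent filesystem
# instead of cloning it. Unlike a clone, the received filesystem has
# no dependency on the source snapshot, but it occupies its own space.
zfs send exports/zones/base@RELEASE-p2 | \
    zfs recv exports/zones/jail2

# The snapshot can now be destroyed (e.g. by a rollback -r to
# RELEASE-p1) without affecting exports/zones/jail2.
```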

greetings,
philipp


