Date:      Tue, 05 Dec 2017 07:41:25 -0800
From:      Paul Vixie <paul@redbarn.org>
To:        FreeBSD virtualization <freebsd-virtualization@freebsd.org>
Subject:   Re: Storage overhead on zvols
Message-ID:  <5A26BE25.10409@redbarn.org>
In-Reply-To: <32BA4687-AB70-4370-A9BA-EF4F66BF69A6@ebureau.com>
References:  <CC62E200-A749-4406-AC56-2FC7A104D353@ebureau.com> <CA%2BtpaK3GpzcwvRFGoX5xdmwGnGWay0z_kqgW6Tg7hX5UBbz4og@mail.gmail.com> <423F466A-732A-4B04-956E-3CC5F5C47390@ebureau.com> <5A26B9C8.7020005@redbarn.org> <32BA4687-AB70-4370-A9BA-EF4F66BF69A6@ebureau.com>

next in thread | previous in thread | raw e-mail | index | archive | help


Dustin Wenz wrote:
> I'm not using ZFS in my VMs for data integrity (the host already
> provides that); it's mainly for the easy creation and management of
> filesystems, and the ability to do snapshots for rollback and
> replication.

snapshot and replication work fine on the host, acting on the zvol.
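for illustration, a host-side snapshot and incremental replication of a
zvol might look like this (pool, dataset, and host names are hypothetical):

```shell
# snapshot the zvol itself on the host; the guest need not participate.
zfs snapshot tank/vm/guest0@nightly

# replicate to another machine; -i sends an incremental stream
# relative to the previous snapshot.
zfs send -i tank/vm/guest0@prev tank/vm/guest0@nightly | \
    ssh backuphost zfs recv backuppool/vm/guest0
```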

> Some of my deployments have hundreds of filesystems in
> an organized hierarchy, with delegated permissions and automated
> snapshots, send/recvs, and clones for various operations.

what kind of zpool do you use in the guest, to avoid unwanted additional 
redundancy?

did you benchmark the space or time efficiency of ZFS vs. UFS?
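the usual way to avoid doubling up on redundancy is a plain single-vdev
pool in the guest; a sketch, with hypothetical device and pool names:

```shell
# inside the guest, with one virtual disk (vtbd0) backed by a host
# zvol: a single-vdev pool adds no mirroring or raidz of its own.
zpool create -O atime=off guestpool vtbd0

# copies=1 is the default, shown here only for emphasis: no ditto
# blocks duplicating data the host pool already protects.
zfs set copies=1 guestpool
```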

at a bsd-related meeting this year i asked allan jude for a bhyve-level 
null mount, so that we could access some subtree of the host at / inside 
the guest, and avoid block devices and file systems altogether. right 
now i have to use nfs for that, which is irritating.
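the nfs workaround amounts to roughly the following (paths, network, and
the host address on the bhyve bridge are all hypothetical):

```shell
# on the host: export the subtree the guest should see.
# /etc/exports (one line):
#   /tank/guests/guest0 -maproot=root -network 10.0.0.0/24
service nfsd onestart

# in the guest: mount the host's subtree over the bhyve network.
mount -t nfs 10.0.0.1:/tank/guests/guest0 /mnt
```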

-- 
P Vixie

