Date:      Mon, 4 Dec 2017 21:37:43 -0600
From:      Adam Vande More <amvandemore@gmail.com>
To:        Dustin Wenz <dustinwenz@ebureau.com>
Cc:        FreeBSD virtualization <freebsd-virtualization@freebsd.org>
Subject:   Re: Storage overhead on zvols
Message-ID:  <CA+tpaK3GpzcwvRFGoX5xdmwGnGWay0z_kqgW6Tg7hX5UBbz4og@mail.gmail.com>
In-Reply-To: <CC62E200-A749-4406-AC56-2FC7A104D353@ebureau.com>
References:  <CC62E200-A749-4406-AC56-2FC7A104D353@ebureau.com>

On Mon, Dec 4, 2017 at 5:19 PM, Dustin Wenz <dustinwenz@ebureau.com> wrote:

> I'm starting a new thread based on the previous discussion in "bhyve uses
> all available memory during IO-intensive operations" relating to size
> inflation of bhyve data stored on zvols. I've done some experimenting with
> this, and I think it will be useful for others.
>
> The zvols listed here were created with this command:
>
>         zfs create -o volmode=dev -o volblocksize=Xk -V 30g \
>             vm00/chyves/guests/myguest/diskY
>
> The zvols were created on a raidz1 pool of four disks. For each zvol, I
> created a basic zfs filesystem in the guest using all default tuning (128k
> recordsize, etc). I then copied the same 8.2GB dataset to each filesystem.
>
>         volblocksize    size amplification
>
>         512B            11.7x
>         4k              1.45x
>         8k              1.45x
>         16k             1.5x
>         32k             1.65x
>         64k             1x
>         128k            1x
>
> The worst case is with a 512B volblocksize, where the space used is more
> than 11 times the size of the data stored within the guest. Space
> efficiency does not improve monotonically as I double the block size
> upward from 4k; 32k blocks are the second-worst. The amount of wasted
> space was minimized by using 64k and 128k blocks.
>
> It would appear that 64k is a good choice for volblocksize if you are
> using a zvol to back your VM, and the VM is using the virtual device for a
> zpool. Incidentally, I believe this is the default when creating VMs in
> FreeNAS.
>

I'm not sure what your purpose is behind the posting, but if it's simply a
"why this behavior?" question, you can find more detail here, along with
some of the calculation legwork:

https://www.delphix.com/blog/delphix-engineering/zfs-raidz-stripe-width-or-how-i-learned-stop-worrying-and-love-raidz
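
For a rough idea of what that calculation works out to on your 4-disk
raidz1, here is a back-of-the-envelope sketch along the lines of that post.
Treat it as an illustration only: the ashift=12 (4 KiB sectors) value is my
assumption, not something from your mail, and it ignores compression and
metadata.

  #!/bin/sh
  # Rough model of how many bytes raidz1 allocates per volblocksize-sized
  # block, following the stripe-width reasoning in the post above.
  # Assumptions (mine, not from the thread): 4 KiB sectors (ashift=12),
  # 4 disks, single parity, compression and metadata ignored.
  sector=4096
  disks=4
  parity=1

  for vbs in 512 4096 8192 16384 32768 65536 131072; do
      # sectors needed for the data itself, rounded up
      data=$(( (vbs + sector - 1) / sector ))
      # one parity sector per stripe of up to (disks - parity) data sectors
      stripes=$(( (data + disks - parity - 1) / (disks - parity) ))
      total=$(( data + stripes * parity ))
      # raidz also pads every allocation to a multiple of (parity + 1) sectors
      pad=$(( ((parity + 1) - total % (parity + 1)) % (parity + 1) ))
      alloc=$(( (total + pad) * sector ))
      pct=$(( alloc * 100 / vbs ))
      echo "volblocksize=${vbs}  allocated=${alloc} bytes  (${pct}% of data)"
  done

With those assumptions the 512B case allocates roughly 16x what the guest
wrote, which is at least in the same ballpark as your 11.7x; the exact
figures will shift with ashift, compression, and metadata, so treat it as an
illustration of the mechanism rather than a prediction.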

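If you want to re-run the comparison on a different pool layout or ashift
and check the numbers directly, something along these lines should do it.
The pool and dataset names are placeholders, and reading used against
logicalused is just my guess at a convenient way to measure, not how you
said you did it:

  #!/bin/sh
  # One test zvol per volblocksize; the pool/dataset names are placeholders.
  # Add -s if you do not want a full 30g reservation per test volume.
  for bs in 512 4k 8k 16k 32k 64k 128k; do
      zfs create -o volmode=dev -o volblocksize=$bs -V 30g \
          tank/test/disk_$bs
  done

  # After copying the same data set into a filesystem on each zvol from
  # inside the guest, compare logical vs. allocated space on the host;
  # -p prints exact byte counts, so the ratio is easy to work out.
  zfs get -p used,logicalused,volblocksize tank/test/disk_512
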
-- 
Adam


