From: Adam Vande More <amvandemore@gmail.com>
Date: Mon, 4 Dec 2017 21:37:43 -0600
Subject: Re: Storage overhead on zvols
To: Dustin Wenz
Cc: FreeBSD virtualization <freebsd-virtualization@freebsd.org>

On Mon, Dec 4, 2017 at 5:19 PM, Dustin Wenz wrote:

> I'm starting a new thread based on the previous discussion in "bhyve
> uses all available memory during IO-intensive operations", relating to
> the size inflation of bhyve data stored on zvols. I've done some
> experimenting with this, and I think it will be useful for others.
>
> The zvols listed here were created with this command:
>
>     zfs create -o volmode=dev -o volblocksize=Xk -V 30g \
>         vm00/chyves/guests/myguest/diskY
>
> The zvols were created on a raidz1 pool of four disks.
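
For anyone who wants to repeat the measurement, the host-side half of it
scripts easily. This is only a sketch built around the command quoted
above; the one thing added here is the "logicalused" property, which lets
you read the amplification off the host instead of trusting the guest's
own accounting:

    #!/bin/sh
    # Create one 30G test zvol per volblocksize, as in the quoted command.
    for bs in 512 4k 8k 16k 32k 64k 128k; do
        zfs create -o volmode=dev -o volblocksize=$bs -V 30g \
            vm00/chyves/guests/myguest/disk$bs
    done

    # After copying the same dataset into each guest filesystem, compare
    # logical bytes written against bytes actually allocated on the pool:
    zfs list -r -o name,volblocksize,logicalused,used \
        vm00/chyves/guests/myguest

Dividing "used" by "logicalused" for each zvol should give roughly the
same amplification factor as measuring from inside the guest.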
> For each zvol, I created a basic zfs filesystem in the guest using all
> default tuning (128k recordsize, etc.). I then copied the same 8.2GB
> dataset to each filesystem.
>
>     volblocksize    size amplification
>     ------------    ------------------
>     512B            11.7x
>     4k              1.45x
>     8k              1.45x
>     16k             1.5x
>     32k             1.65x
>     64k             1x
>     128k            1x
>
> The worst case is the 512B volblocksize, where the space used is more
> than 11 times the size of the data stored within the guest. Starting at
> 4k and doubling the block size from there, the efficiency gains are
> non-linear, with 32k blocks the second-worst. Wasted space was minimized
> by using 64k or 128k blocks.
>
> It would appear that 64k is a good choice of volblocksize if you are
> using a zvol to back your VM and the VM is using the virtual device for
> a zpool. Incidentally, I believe this is the default when creating VMs
> in FreeNAS.

I'm not sure what the purpose behind your posting is, but if it's simply
a "why does it behave this way?", you can find more detail, along with
some of the calculation legwork, here:

https://www.delphix.com/blog/delphix-engineering/zfs-raidz-stripe-width-or-how-i-learned-stop-worrying-and-love-raidz

--
Adam
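
P.S. If you just want the flavor of the calculation: raidz computes
parity per block rather than per device, and pads every allocation to a
multiple of (nparity + 1) sectors so it never leaves unusable
single-sector holes. Below is a back-of-the-envelope model of that for a
pool like yours; the assumptions (4K sectors, i.e. ashift=12, and a
4-disk raidz1, so 3 data sectors per stripe) are mine, not measurements:

    #!/bin/sh
    # Toy raidz1 allocation model, after the Delphix post linked above.
    sector=4096        # assumed ashift=12
    data_per_stripe=3  # 4 disks - 1 parity
    for vbs in 512 4096 8192 16384 32768 65536 131072; do
        data=$(( (vbs + sector - 1) / sector ))   # data sectors per block
        parity=$(( (data + data_per_stripe - 1) / data_per_stripe ))
        total=$(( data + parity ))
        total=$(( (total + 1) / 2 * 2 ))   # pad to multiple of nparity+1 = 2
        echo "$vbs $(( total * sector ))" |
            awk '{ printf "%6d B block -> %6d B allocated (%.2fx)\n", $1, $2, $2/$1 }'
    done

Every block size pays the model's ~1.38x parity floor, so dividing each
row by the 128k row's factor gives the overhead relative to a 128k
baseline: about 11.6x at 512B and 1.45x at 4k/8k, which lines up
strikingly well with the 11.7x and 1.45x measured above. The 16k and 32k
rows don't match as cleanly; I'd guess interactions with the guest's
128k records account for the difference there.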