From owner-freebsd-virtualization@freebsd.org Fri Mar 11 00:15:58 2016
From: Sergei Mamonov <mrqwer88@gmail.com>
Date: Fri, 11 Mar 2016 03:15:37 +0300
Subject: Re: ZFS subvolume support inside Bhyve vm
To: Paul Vixie
Cc: Pavel Odintsov, freebsd-virtualization@freebsd.org, Sergei Mamonov

Hello!

Yes, zvols look awesome. But which driver do you use with them? And what
about disk usage overhead in the guest? virtio-blk does not support fstrim
(ahci-hd supports it, but is it slower? "*At this point virtio-blk is
indeed faster than ahci-hd on high IOPS*"). On Linux with KVM we used the
virtio-scsi driver, which does support fstrim, but as far as I can see it
is not yet available for bhyve in 10.2-STABLE. And I am not alone with
this question:
https://lists.freebsd.org/pipermail/freebsd-virtualization/2015-March/003442.html

2016-03-11 2:45 GMT+03:00 Paul Vixie :

>
> Pavel Odintsov wrote:
>
>> Hello, Dear Community!
>>
>> I would like to ask about plans for this storage engine approach. I like
>> ZFS very much and we are storing about half a petabyte of data here.
>>
>> But when we are speaking about VMs we have to use zvols or even raw
>> file-based images, and they discard all ZFS benefits.
>
> i use zvols for my bhyves and they have two of the most important zfs
> advantages:
>
> 1. snapshots.
>
>> root@mm1:/home/vixie # zfs list | grep fam
>> zroot1/vms/family   55.7G  3.84T  5.34G  -
>> root@mm1:/home/vixie # zfs snap zroot1/vms/family@before
>>
>> [family.redbarn:amd64] touch /var/tmp/after
>>
>> root@mm1:/home/vixie # zfs snap zroot1/vms/family@after
>> root@mm1:/home/vixie # mkdir /mnt/before /mnt/after
>> root@mm1:/home/vixie # zfs clone zroot1/vms/family@before zroot1/before
>> root@mm1:/home/vixie # fsck_ffs -p /dev/zvol/zroot1/beforep2
>> ...
>> /dev/zvol/zroot1/beforep2: 264283 files, 1118905 used, 11575625 free
>> (28697 frags, 1443366 blocks, 0.2% fragmentation)
>> root@mm1:/home/vixie # mount -r /dev/zvol/zroot1/beforep2 /mnt/before
>>
>> root@mm1:/home/vixie # zfs clone zroot1/vms/family@after zroot1/after
>> root@mm1:/home/vixie # fsck_ffs -p /dev/zvol/zroot1/afterp2
>> ...
>> /dev/zvol/zroot1/afterp2: 264284 files, 1118905 used, 11575625 free
>> (28697 frags, 1443366 blocks, 0.2% fragmentation)
>> root@mm1:/home/vixie # mount -r /dev/zvol/zroot1/afterp2 /mnt/after
>>
>> root@mm1:/home/vixie # ls -l /mnt/{before,after}/var/tmp/after
>> ls: /mnt/before/var/tmp/after: No such file or directory
>> -rw-rw-r--  1 vixie  wheel  0 Mar 10 22:52 /mnt/after/var/tmp/after
>
> 2.
storage redundancy, read caching, and write caching:
>
>> root@mm1:/home/vixie # zpool status | tr -d '\t'
>> pool: zroot1
>> state: ONLINE
>> scan: scrub repaired 0 in 2h24m with 0 errors on Thu Mar 10 12:24:13 2016
>> config:
>>
>> NAME                                            STATE   READ WRITE CKSUM
>> zroot1                                          ONLINE     0     0     0
>>   mirror-0                                      ONLINE     0     0     0
>>     gptid/2427e651-d9cc-11e3-b8a1-002590ea750a  ONLINE     0     0     0
>>     gptid/250b0f01-d9cc-11e3-b8a1-002590ea750a  ONLINE     0     0     0
>>   mirror-1                                      ONLINE     0     0     0
>>     gptid/d35bb315-da08-11e3-b17f-002590ea750a  ONLINE     0     0     0
>>     gptid/d85ad8be-da08-11e3-b17f-002590ea750a  ONLINE     0     0     0
>> logs
>>   mirror-2                                      ONLINE     0     0     0
>>     ada0s1                                      ONLINE     0     0     0
>>     ada1s1                                      ONLINE     0     0     0
>> cache
>>   ada0s2                                        ONLINE     0     0     0
>>   ada1s2                                        ONLINE     0     0     0
>>
>> errors: No known data errors
>
> so while i'd love to chroot a bhyve driver to some place in the middle of
> the host's file system and then pass VFS right on through, more or less
> the way mount_nullfs does, i am pretty comfortable with zvol UFS, and i
> think it's misleading to say that zvol UFS lacks all ZFS benefits.
>
> --
> P Vixie
>
> _______________________________________________
> freebsd-virtualization@freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-virtualization
> To unsubscribe, send any mail to
> "freebsd-virtualization-unsubscribe@freebsd.org"
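[Editor's note] On the driver question raised at the top of the thread, the trade-off can be sketched as two alternative `bhyve` command lines backed by the same zvol. Everything here is hypothetical (VM name `guest0`, zvol `zroot1/vms/guest0`, slot numbers); the script only builds and prints the commands rather than launching a guest.

```shell
#!/bin/sh
# Sketch only: the same zvol attached via the two block drivers discussed
# above. All names (guest0, zroot1/vms/guest0) are hypothetical.
ZVOL=/dev/zvol/zroot1/vms/guest0

# virtio-blk: reportedly faster at high IOPS, but no TRIM/fstrim support,
# so space freed in the guest is never returned to the pool.
VIRTIO_CMD="bhyve -c 2 -m 2G -H -s 0,hostbridge -s 3,virtio-blk,$ZVOL -s 31,lpc -l com1,stdio guest0"

# ahci-hd: slower on high IOPS, but the guest can issue TRIM, letting the
# zvol reclaim deleted blocks.
AHCI_CMD="bhyve -c 2 -m 2G -H -s 0,hostbridge -s 3,ahci-hd,$ZVOL -s 31,lpc -l com1,stdio guest0"

echo "$VIRTIO_CMD"
echo "$AHCI_CMD"
```

Only the `-s <slot>,<driver>,<backing>` argument differs between the two invocations, which is what makes switching drivers for an existing zvol-backed guest a matter of editing one flag.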
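[Editor's note] The snapshot-and-inspect sequence from the transcript above can be wrapped into one reusable function, shown here as a sketch. Dataset, partition suffix, and mountpoint names are hypothetical, and `DRYRUN` defaults to `echo` so the commands are printed rather than executed on a real pool.

```shell
#!/bin/sh
# Sketch of the workflow in the transcript: snapshot a zvol, clone the
# snapshot so it appears under /dev/zvol, fsck the guest's UFS (it was
# never cleanly unmounted), and mount it read-only for inspection.
# DRYRUN=echo (the default here) just prints each command.
DRYRUN="${DRYRUN:-echo}"

# inspect_snapshot <dataset> <snapname> <ufs-partition-suffix> <mountpoint>
inspect_snapshot() {
    ds=$1; snap=$2; part=$3; mnt=$4
    pool=${ds%%/*}                       # e.g. zroot1
    $DRYRUN zfs snap "$ds@$snap"
    # cloning exposes the snapshot as a new zvol under /dev/zvol/<pool>/
    $DRYRUN zfs clone "$ds@$snap" "$pool/$snap"
    # preen the dirty UFS before mounting it
    $DRYRUN fsck_ffs -p "/dev/zvol/$pool/$snap$part"
    $DRYRUN mkdir -p "$mnt"
    $DRYRUN mount -r "/dev/zvol/$pool/$snap$part" "$mnt"
}

inspect_snapshot zroot1/vms/family before p2 /mnt/before
```

Run as-is it prints the command sequence; setting `DRYRUN=` (empty) on a host with the pool would execute it, at which point two such clones can be diffed exactly as in the `ls -l /mnt/{before,after}` step above.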
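[Editor's note] The pool layout in the `zpool status` output above (two mirrored data vdevs, a mirrored SLOG, and two cache devices) corresponds to a single `zpool create` invocation. The disk names below are hypothetical placeholders, not the gptid devices from the transcript, and the command is only assembled and printed here.

```shell
#!/bin/sh
# Sketch of creating a pool shaped like zroot1 above. gpt/disk0..3,
# ada0s1/ada1s1 (SLOG), and ada0s2/ada1s2 (cache) are placeholder names.
POOL_CMD="zpool create zroot1 \
    mirror gpt/disk0 gpt/disk1 \
    mirror gpt/disk2 gpt/disk3 \
    log mirror ada0s1 ada1s1 \
    cache ada0s2 ada1s2"

echo "$POOL_CMD"
```

The `log` keyword introduces the mirrored intent-log vdev and `cache` the L2ARC devices; everything before `log` forms the striped pair of data mirrors that carries the redundancy Vixie points to.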