From owner-freebsd-virtualization@freebsd.org Sun Aug 23 16:53:58 2015 Return-Path: Delivered-To: freebsd-virtualization@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 263119C0AB1 for ; Sun, 23 Aug 2015 16:53:58 +0000 (UTC) (envelope-from daemon-user@freebsd.org) Received: from phabric-backend.isc.freebsd.org (phabric-backend.isc.freebsd.org [IPv6:2001:4f8:3:ffe0:406a:0:50:2]) by mx1.freebsd.org (Postfix) with ESMTP id 09B051731 for ; Sun, 23 Aug 2015 16:53:58 +0000 (UTC) (envelope-from daemon-user@freebsd.org) Received: by phabric-backend.isc.freebsd.org (Postfix, from userid 1346) id EB73A18326; Sun, 23 Aug 2015 16:53:57 +0000 (UTC) Date: Sun, 23 Aug 2015 16:53:57 +0000 To: freebsd-virtualization@freebsd.org From: "javier_ovi_yahoo.com (Javier Villavicencio)" Reply-to: D1944+333+b09c6235d993877b@reviews.freebsd.org Subject: [Differential] [Changed Subscribers] D1944: PF and VIMAGE fixes Message-ID: X-Priority: 3 X-Phabricator-Sent-This-Message: Yes X-Mail-Transport-Agent: MetaMTA X-Auto-Response-Suppress: All X-Phabricator-Mail-Tags: Thread-Topic: D1944: PF and VIMAGE fixes X-Herald-Rules: none X-Phabricator-To: X-Phabricator-To: X-Phabricator-To: X-Phabricator-To: X-Phabricator-To: X-Phabricator-To: X-Phabricator-To: X-Phabricator-To: X-Phabricator-To: X-Phabricator-Cc: X-Phabricator-Cc: X-Phabricator-Cc: X-Phabricator-Cc: X-Phabricator-Cc: X-Phabricator-Cc: X-Phabricator-Cc: Precedence: bulk In-Reply-To: References: Thread-Index: NDc2NzM0MzY4OTdiYThiNTU1MjY2ZDZmMTJiIFXZ+qU= MIME-Version: 1.0 Content-Transfer-Encoding: 8bit Content-Type: text/plain; charset="utf-8" X-BeenThere: freebsd-virtualization@freebsd.org X-Mailman-Version: 2.1.20 List-Id: "Discussion of various virtualization techniques FreeBSD supports." List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 23 Aug 2015 16:53:58 -0000 javier_ovi_yahoo.com added a subscriber: javier_ovi_yahoo.com. 
REVISION DETAIL https://reviews.freebsd.org/D1944 EMAIL PREFERENCES https://reviews.freebsd.org/settings/panel/emailpreferences/ To: nvass-gmx.com, bz, trociny, kristof, gnn, zec, rodrigc, glebius, eri Cc: javier_ovi_yahoo.com, farrokhi, julian, robak, freebsd-virtualization-list, freebsd-pf-list, freebsd-net-list From owner-freebsd-virtualization@freebsd.org Sun Aug 23 21:00:26 2015 Return-Path: Delivered-To: freebsd-virtualization@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 4BE769C08B1 for ; Sun, 23 Aug 2015 21:00:26 +0000 (UTC) (envelope-from bugzilla-noreply@FreeBSD.org) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 267AF1D45 for ; Sun, 23 Aug 2015 21:00:26 +0000 (UTC) (envelope-from bugzilla-noreply@FreeBSD.org) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.15.2/8.15.2) with ESMTP id t7NL0QOs093939 for ; Sun, 23 Aug 2015 21:00:26 GMT (envelope-from bugzilla-noreply@FreeBSD.org) Message-Id: <201508232100.t7NL0QOs093939@kenobi.freebsd.org> From: bugzilla-noreply@FreeBSD.org To: freebsd-virtualization@FreeBSD.org Subject: Problem reports for freebsd-virtualization@FreeBSD.org that need special attention X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 Date: Sun, 23 Aug 2015 21:00:26 +0000 Content-Type: text/plain; charset="UTF-8" X-BeenThere: freebsd-virtualization@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: "Discussion of various virtualization techniques FreeBSD supports." List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 23 Aug 2015 21:00:26 -0000 To view an individual PR, use: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=(Bug Id). The following is a listing of current problems submitted by FreeBSD users, which need special attention. These represent problem reports covering all versions including experimental development code and obsolete releases. Status | Bug Id | Description ------------+-----------+--------------------------------------------------- New | 202321 | [bhyve,patch] More verbose error reporting in bhy New | 202322 | [bhyve,patch] add option to have bhyve write its 2 problems total for which you should take action. 
From owner-freebsd-virtualization@freebsd.org Mon Aug 24 20:06:31 2015 Return-Path: Delivered-To: freebsd-virtualization@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id A1D529C2E6E for ; Mon, 24 Aug 2015 20:06:31 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 8DA23E14 for ; Mon, 24 Aug 2015 20:06:31 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.15.2/8.15.2) with ESMTP id t7OK6V5N088512 for ; Mon, 24 Aug 2015 20:06:31 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-virtualization@FreeBSD.org Subject: [Bug 195839] [bhyve] linux kernel loads, but bhyve core dumps (signal 11) Date: Mon, 24 Aug 2015 20:06:31 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: bin X-Bugzilla-Version: 10.1-RELEASE X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: grehan@FreeBSD.org X-Bugzilla-Status: Closed X-Bugzilla-Priority: --- X-Bugzilla-Assigned-To: freebsd-virtualization@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: resolution cc bug_status Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-virtualization@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: "Discussion of various virtualization techniques FreeBSD supports." List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 24 Aug 2015 20:06:31 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=195839 Peter Grehan changed: What |Removed |Added ---------------------------------------------------------------------------- Resolution|--- |Works As Intended CC| |grehan@FreeBSD.org Status|New |Closed --- Comment #2 from Peter Grehan --- At least in the first example, the amount of memory supplied to grub-bhyve (2G) is greater than that given to bhyve (1G). Since grub creates the memory map that is passed to the kernel, this will result in the guest kernel attempting to access memory that doesn't exist, which results in bhyve dumping core. The fix is to make sure the same amount of memory is given to both the loader and bhyve. -- You are receiving this mail because: You are the assignee for the bug. 
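For illustration, a minimal sketch of the fix Peter describes: give the boot loader and the hypervisor the same guest memory size. The VM name, device map, image path and slot numbers below are made up for the example, and the exact flag spellings should be checked against the grub-bhyve(1) and bhyve(8) manual pages for the version in use:

    # load the guest kernel with 1024 MB of guest memory...
    grub-bhyve -m device.map -r hd0,msdos1 -M 1024 linuxguest
    # ...and start bhyve with the same 1024 MB, never a smaller size
    bhyve -A -H -P -m 1024M \
        -s 0,hostbridge -s 1,lpc -s 2,virtio-blk,/vm/linuxguest.img \
        -l com1,stdio linuxguest
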
From owner-freebsd-virtualization@freebsd.org Mon Aug 24 20:08:45 2015 Return-Path: Delivered-To: freebsd-virtualization@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id A25F29C2F60 for ; Mon, 24 Aug 2015 20:08:45 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 8DDB5F2A for ; Mon, 24 Aug 2015 20:08:45 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.15.2/8.15.2) with ESMTP id t7OK8jhY090709 for ; Mon, 24 Aug 2015 20:08:45 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-virtualization@FreeBSD.org Subject: [Bug 202321] [bhyve, patch] More verbose error reporting in bhyve for backing images Date: Mon, 24 Aug 2015 20:08:45 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: bin X-Bugzilla-Version: 11.0-CURRENT X-Bugzilla-Keywords: patch X-Bugzilla-Severity: Affects Some People X-Bugzilla-Who: grehan@FreeBSD.org X-Bugzilla-Status: In Progress X-Bugzilla-Priority: --- X-Bugzilla-Assigned-To: freebsd-virtualization@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: mfc-stable10? X-Bugzilla-Changed-Fields: bug_status cc Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-virtualization@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: "Discussion of various virtualization techniques FreeBSD supports." List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 24 Aug 2015 20:08:45 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=202321 Peter Grehan changed: What |Removed |Added ---------------------------------------------------------------------------- Status|New |In Progress CC| |grehan@FreeBSD.org --- Comment #2 from Peter Grehan --- This is a useful change: I'll get it in. -- You are receiving this mail because: You are the assignee for the bug. 
From owner-freebsd-virtualization@freebsd.org Tue Aug 25 14:54:51 2015 Return-Path: Delivered-To: freebsd-virtualization@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 6BE769C20A4 for ; Tue, 25 Aug 2015 14:54:51 +0000 (UTC) (envelope-from s.tyshchenko@identika.pro) Received: from scale212.ru (scale212.ru [51.254.36.76]) (using TLSv1.2 with cipher DHE-RSA-AES128-SHA (128/128 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id EA81689 for ; Tue, 25 Aug 2015 14:54:50 +0000 (UTC) (envelope-from s.tyshchenko@identika.pro) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=scale212.ru; s=default; h=Content-Type:List-Unsubscribe:Message-ID:Sender:From:Date:MIME-Version:Subject:To; bh=/BXZuJVa3yKVfjl06P++artTt+aPVFFYvsJen4hzfGI=; b=ldeKBHsS3CzAYlgDl2I4lE+C5sp3kIvD3F1AtTJrnej6VPsD8VNpmBGdhh5bJKihah/moZjGPOQeIGjXx2VIWB2o7AuwR6NqEhjIL7G3J8Mhxt/Ej+M+o946PfOfgyY5vjoQBt9QOWnGbpYdCFTeJZW4XT4PWlbKYkr7O0gyCPk=; Received: from root by scale212.ru with local (Exim 4.80) (envelope-from ) id 1ZUFce-0005Jv-TU for freebsd-virtualization@freebsd.org; Tue, 25 Aug 2015 16:54:48 +0200 To: freebsd-virtualization@freebsd.org Subject: For you MIME-Version: 1.0 Date: Tue, 25 Aug 2015 16:54:48 +0200 From: Sergey Tyshchenko Sender: s.tyshchenko@identika.pro Message-ID: <241792651.26531@scale212.ru> X-Priority: 3 X-Mailer: scale212.ru mailer. Ver. 1.1. Precedence: bulk Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: base64 X-Content-Filtered-By: Mailman/MimeDel 2.1.20 X-BeenThere: freebsd-virtualization@freebsd.org X-Mailman-Version: 2.1.20 List-Id: "Discussion of various virtualization techniques FreeBSD supports." 
List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 25 Aug 2015 14:54:51 -0000 From owner-freebsd-virtualization@freebsd.org Tue Aug 25 18:18:04 2015 Return-Path: Delivered-To: freebsd-virtualization@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 6B0A39C3B21 for ; Tue, 25 Aug 2015 18:18:04 +0000 (UTC) (envelope-from prasadjoshi.linux@gmail.com) Received: from mail-vk0-x233.google.com (mail-vk0-x233.google.com [IPv6:2607:f8b0:400c:c05::233]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 2D705E6D; Tue, 25 Aug 2015 18:18:04 +0000 (UTC) (envelope-from prasadjoshi.linux@gmail.com) Received: by vkd66 with SMTP id 66so77513593vkd.0; Tue, 25 Aug 2015 11:18:03 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :content-type; bh=dmEKOtpG+7/jgX5AfmrnBoG/qj5Env9tkDsL8MKuzGA=; b=g1nb76E654yxk6Ffv8uZu7BDfZEOCf3TywxcMo7AIAFvoUzT0V+lCsglLD3wUzNzI1 sM0nF2p5e1g7yCEuj2GxJrTeKnJ3rtovcIMX1CmTJdd9MDboYjhOEcN2lf48aDwUUDwW UQp7vUICAyHkNxF2LrZmxZ7uxoxBG0W2qi0HYq7myBdiPSPwEksfcAO6NvMAvlIatctP Fcn3m5qw+TLbENMupjN9AW6JRBc2K0MDVoy2sXz39Evs1NLihF94pokEqOU/bhuTkP9A FLifRHmyDLGCZC7zmyxxPw3KsjwfgLylMe8hQWPtZeVNiYpxwn0pEI6ZYR79Rl4A0sp9 WnRQ== MIME-Version: 1.0 X-Received: by 10.52.119.133 with SMTP id ku5mr38587117vdb.16.1440526683055; Tue, 25 Aug 2015 11:18:03 -0700 (PDT) Received: by 10.31.50.6 with HTTP; Tue, 25 Aug 2015 11:18:02 -0700 (PDT) In-Reply-To: References: 
Date: Tue, 25 Aug 2015 23:48:02 +0530 Message-ID: Subject: Re: FreeBSD Quarterly Status Report - Second Quarter 2015 From: Prasad Joshi To: freebsd-virtualization@freebsd.org, Peter Grehan , Neel Natu , Tycho Nightingale , Allan Jude , Alexander Motin , Marcelo Araujo Content-Type: text/plain; charset=UTF-8 X-BeenThere: freebsd-virtualization@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: "Discussion of various virtualization techniques FreeBSD supports." List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 25 Aug 2015 18:18:04 -0000 Hello All, > bhyve > > Links > bhyve FAQ and talks > URL: http://www.bhyve.org > > Contact: Peter Grehan > Contact: Neel Natu > Contact: Tycho Nightingale > Contact: Allan Jude > Contact: Alexander Motin > Contact: Marcelo Araujo > > bhyve is a hypervisor that runs on the FreeBSD/amd64 platform. At > present, it runs FreeBSD (8.x or later), Linux i386/x64, OpenBSD > i386/amd64, and NetBSD/amd64 guests. Current development is focused on > enabling additional guest operating systems and implementing features > found in other hypervisors. > > bhyve BoF at BSDCan 2015 > > A bhyve BoF was held during lunch hour at BSDCan 2015. It was attended > by approximately 60 people. > > Michael Dexter showed Windows Server 2012 running inside bhyve. > > Common themes that came up during the discussion were: bhyve > configuration, libvirt and OpenStack integration, best practices, bhyve > with ZFS, additional guest support and live migration. > > Google Summer of Code 2015 > > A number of bhyve-related proposals were submitted for GSoC 2015 and > these four were accepted: > * NE2000 device emulation > * Porting bhyve to ARM > * ptnetmap support in bhyve > * PXE boot support in bhyveload > > A number of improvements were made to bhyve this quarter: > * GEOM storage backend now works properly with bhyve. > * Device model enhancements and new instruction emulations to support > Windows guests. > * Improve virtio-net performance by disabling queue notifications > when not needed. > * The dtrace FBT provider now works properly with vmm.ko. > > Marcelo Araujo and Allan Jude created a rough patch to make bhyve parse > a config file to replace the existing method of configuration by > command line invocation. The rapid pace of advancement in bhyve > resulted in requiring a much more complex config file. A new design for > the config file, with support for the plugin architecture that will > eventually be introduced into bhyve, is now being discussed. > > Open tasks: > > 1. Improve documentation. > 2. bhyveucl is a script for starting bhyve instances based on a libUCL > config file. More information at > https://github.com/allanjude/bhyveucl. > 3. Add support for virtio-scsi. I think virtio-scsi support is very interesting. I skimmed through FreeBSD source, it seems like virtio-scsi guest driver support is already present in FreeBSD. As far as I can understand, at the moment the virtio-scsi support is absent in bhyve host side. It seems like bhyve source implements virtio block interface, which bhyve uses to attach raw disks to VM. A similar functionality for virtio scsi devices has to be implemented. Please let me know if any one has already started working on this. I would be glad to help with development and/or testing. Thanks and Regards, Prasad > 4. Flexible networking backend: wanproxy, vhost-net > 5. Support running bhyve as non-root. > 6. Add filters for popular VM file formats (VMDK, VHD, QCOW2). > 7. 
Implement an abstraction layer for video (no X11 or SDL in base > system). > 8. Suspend/resume support. > 9. Live Migration. > 10. Nested VT-x support (bhyve in bhyve). > 11. Support for other architectures (ARM, MIPS, PPC). From owner-freebsd-virtualization@freebsd.org Tue Aug 25 18:19:41 2015 Return-Path: Delivered-To: freebsd-virtualization@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 027009C3BA5 for ; Tue, 25 Aug 2015 18:19:41 +0000 (UTC) (envelope-from prasadjoshi.linux@gmail.com) Received: from mail-vk0-x22c.google.com (mail-vk0-x22c.google.com [IPv6:2607:f8b0:400c:c05::22c]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id B52AFEEE; Tue, 25 Aug 2015 18:19:40 +0000 (UTC) (envelope-from prasadjoshi.linux@gmail.com) Received: by vkif69 with SMTP id f69so71067643vki.3; Tue, 25 Aug 2015 11:19:39 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:date:message-id:subject:from:to:content-type; bh=/ADfm/uoLveV8/RaPQ4PfhnHF1Yal8WU2rxo9LpbX3Q=; b=YeAfncP68O8fPKx2oGPGT3bfAcOJkdI7wn/l41oqyDfowq+CXyJkj2Vu4Hr+IgfYR3 1XnhXMIsb2d0exEKJ+P+8yqQSTDpVnXbVIjP80kP2O+BHD7DNZUx94Asz/HJ62XFgtWS I2k6HC4xj71XNUG0Uoo74eowgKmsXTGGxh+iVyI+zQ18gHEJUCQdcIND5zZNeKrHHi06 czbKWHINNxAfnIm4+Hs8dsYt1Wqbt5zzxgwqAE9cdrOKQKiZcJ+I97Uz9mdp+xtjFx4w rvbxdHQLM5XpKDRHtclzq06cIkkiFTvYcSHO6v3IRToab6eu9uz+91wwpQHr3QXduu6v /xVg== MIME-Version: 1.0 X-Received: by 10.53.6.38 with SMTP id cr6mr40282370vdd.54.1440526779789; Tue, 25 Aug 2015 11:19:39 -0700 (PDT) Received: by 10.31.50.6 with HTTP; Tue, 25 Aug 2015 11:19:39 -0700 (PDT) Date: Tue, 25 Aug 2015 23:49:39 +0530 Message-ID: Subject: bhyve virtio-scsi support From: Prasad Joshi To: freebsd-virtualization@freebsd.org, Peter Grehan , Neel Natu , Tycho Nightingale , Allan Jude , Alexander Motin , Marcelo Araujo Content-Type: text/plain; charset=UTF-8 X-BeenThere: freebsd-virtualization@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: "Discussion of various virtualization techniques FreeBSD supports." List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 25 Aug 2015 18:19:41 -0000 I missed to change the subject line. Thanks and Regards, Prasad On Tue, Aug 25, 2015 at 11:48 PM, Prasad Joshi wrote: > Hello All, > >> bhyve >> >> Links >> bhyve FAQ and talks >> URL: http://www.bhyve.org >> >> Contact: Peter Grehan >> Contact: Neel Natu >> Contact: Tycho Nightingale >> Contact: Allan Jude >> Contact: Alexander Motin >> Contact: Marcelo Araujo >> >> bhyve is a hypervisor that runs on the FreeBSD/amd64 platform. At >> present, it runs FreeBSD (8.x or later), Linux i386/x64, OpenBSD >> i386/amd64, and NetBSD/amd64 guests. Current development is focused on >> enabling additional guest operating systems and implementing features >> found in other hypervisors. >> >> bhyve BoF at BSDCan 2015 >> >> A bhyve BoF was held during lunch hour at BSDCan 2015. It was attended >> by approximately 60 people. >> >> Michael Dexter showed Windows Server 2012 running inside bhyve. >> >> Common themes that came up during the discussion were: bhyve >> configuration, libvirt and OpenStack integration, best practices, bhyve >> with ZFS, additional guest support and live migration. 
>> >> Google Summer of Code 2015 >> >> A number of bhyve-related proposals were submitted for GSoC 2015 and >> these four were accepted: >> * NE2000 device emulation >> * Porting bhyve to ARM >> * ptnetmap support in bhyve >> * PXE boot support in bhyveload >> >> A number of improvements were made to bhyve this quarter: >> * GEOM storage backend now works properly with bhyve. >> * Device model enhancements and new instruction emulations to support >> Windows guests. >> * Improve virtio-net performance by disabling queue notifications >> when not needed. >> * The dtrace FBT provider now works properly with vmm.ko. >> >> Marcelo Araujo and Allan Jude created a rough patch to make bhyve parse >> a config file to replace the existing method of configuration by >> command line invocation. The rapid pace of advancement in bhyve >> resulted in requiring a much more complex config file. A new design for >> the config file, with support for the plugin architecture that will >> eventually be introduced into bhyve, is now being discussed. >> >> Open tasks: >> >> 1. Improve documentation. >> 2. bhyveucl is a script for starting bhyve instances based on a libUCL >> config file. More information at >> https://github.com/allanjude/bhyveucl. >> 3. Add support for virtio-scsi. > > I think virtio-scsi support is very interesting. > > I skimmed through FreeBSD source, it seems like virtio-scsi guest > driver support is already present in FreeBSD. As far as I can > understand, at the moment the virtio-scsi support is absent in bhyve > host side. It seems like bhyve source implements virtio block > interface, which bhyve uses to attach raw disks to VM. A similar > functionality for virtio scsi devices has to be implemented. Please > let me know if any one has already started working on this. I would be > glad to help with development and/or testing. > > Thanks and Regards, > Prasad > >> 4. Flexible networking backend: wanproxy, vhost-net >> 5. Support running bhyve as non-root. >> 6. Add filters for popular VM file formats (VMDK, VHD, QCOW2). >> 7. Implement an abstraction layer for video (no X11 or SDL in base >> system). >> 8. Suspend/resume support. >> 9. Live Migration. >> 10. Nested VT-x support (bhyve in bhyve). >> 11. Support for other architectures (ARM, MIPS, PPC). 
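As context for the virtio-scsi discussion above: bhyve attaches raw images today through the virtio-blk (or ahci-hd) device model on the -s slot option, and a virtio-scsi device model would presumably be wired up the same way once the host side exists. The virtio-scsi line below is purely hypothetical; no such device model existed in bhyve at the time of this thread:

    # existing: a raw image attached through the virtio-blk device model
    bhyve ... -s 3,virtio-blk,/vm/guest0.img ... guest0
    # hypothetical: the same backing store exposed through a future virtio-scsi model
    bhyve ... -s 3,virtio-scsi,/vm/guest0.img ... guest0
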
From owner-freebsd-virtualization@freebsd.org Tue Aug 25 18:53:04 2015 Return-Path: Delivered-To: freebsd-virtualization@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 98CE799A9C6 for ; Tue, 25 Aug 2015 18:53:04 +0000 (UTC) (envelope-from crodr001@gmail.com) Received: from mail-lb0-x236.google.com (mail-lb0-x236.google.com [IPv6:2a00:1450:4010:c04::236]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 22EE2151 for ; Tue, 25 Aug 2015 18:53:04 +0000 (UTC) (envelope-from crodr001@gmail.com) Received: by lbbpu9 with SMTP id pu9so105904530lbb.3 for ; Tue, 25 Aug 2015 11:53:02 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:sender:date:message-id:subject:from:to:content-type; bh=tq/d6fzpfpD9TxNyPPgXFxgOEJ9nqunQi1YCZ9tkfFQ=; b=U/9qNkAy2f5gZXIbw5cX+b04iN8aQJnmS5sTogXHfW0wBA2Q7W8/O/ZT7SaggcNE4v CFbY6G435kw3izKh+u5m/DN41kmtOovD93/dDSk8c0s05EW219kFif8ERio0aKKvbIqd /edj3DALFjDYy7l43+vrBUTNEog/lj6n52/MVeXJpItYHNDzs7CsFPc+Gay+c0fTAuCg vg0LwBrOJLFkPuEyHvlRV/RQpm8g2TrUpCY18Fkpt7byWybOL9loS4DXKMVsT2oWsIso KGe0XEAHKNylaG4qbFVKHfx6BtjVDT/ubrYJ+uiL2J7Q0NyJ6S4NVUduaPBUZTFbHcgb sg2A== MIME-Version: 1.0 X-Received: by 10.112.141.8 with SMTP id rk8mr26393596lbb.87.1440528782324; Tue, 25 Aug 2015 11:53:02 -0700 (PDT) Sender: crodr001@gmail.com Received: by 10.112.143.5 with HTTP; Tue, 25 Aug 2015 11:53:02 -0700 (PDT) Date: Tue, 25 Aug 2015 11:53:02 -0700 X-Google-Sender-Auth: hJyBv00WoTDrIKIVEyawYwyG2-k Message-ID: Subject: passthru requires guest memory to be wired From: Craig Rodrigues To: "freebsd-virtualization@freebsd.org" Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.20 X-BeenThere: freebsd-virtualization@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: "Discussion of various virtualization techniques FreeBSD supports." List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 25 Aug 2015 18:53:04 -0000 Hi, I updated one of my FreeBSD boxes to latest CURRENT, and when I started a bhyve VM which uses PCI passthru, I got this error: passthru requires guest memory to be wired According to this commit: https://reviews.freebsd.org/rS284539 '-S' needs to be passed to bhyveload *and* bhyve if PCI passthru is used. It looks like this change did not make it to 10.2R. Will this change go into stable/10? 
Should this info be added to: https://wiki.freebsd.org/bhyve/pci_passthru -- Craig From owner-freebsd-virtualization@freebsd.org Tue Aug 25 22:45:33 2015 Return-Path: Delivered-To: freebsd-virtualization@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 384C199A654 for ; Tue, 25 Aug 2015 22:45:33 +0000 (UTC) (envelope-from neelnatu@gmail.com) Received: from mail-qg0-x22e.google.com (mail-qg0-x22e.google.com [IPv6:2607:f8b0:400d:c04::22e]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id EC293E64; Tue, 25 Aug 2015 22:45:32 +0000 (UTC) (envelope-from neelnatu@gmail.com) Received: by qgj62 with SMTP id 62so115994372qgj.2; Tue, 25 Aug 2015 15:45:32 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=4xIch5l4kqvUdxvHOmlxYl2NApPhjcmnQh8DLeoTmw8=; b=zvFFM6wPznitYxP8NBv2KAAcemKsNJPI5/WTuTh21CAJWeHlYUGxKCfMD4sIrnvP0z DObTcjDau6/WE9j7T/SkAYpmOaN7Za8hPPMWH0jjqfJQYVAkyuHtgQFUnVkAHWe34i6b Gr2vRLS3O+LzJYTzL4zLwBEH3k5KquUmEcql7BbDDQeo8KAdVgZpKjX+DTLBd6AJYr9Q boAgWARSvOkq2FOov8NiRiUpl0lXQRtlVWN4T7sffeTQdohI2tdyZ6lAyQGOe2UEUg8Q oAMDnGSCKmojGJwGMN8LEj44h8ceXKjd855Ue4SMqlL9TgatyRlcbpIQXrkDm7kNTnYU i7Lg== MIME-Version: 1.0 X-Received: by 10.140.144.83 with SMTP id 80mr72070980qhq.54.1440542732012; Tue, 25 Aug 2015 15:45:32 -0700 (PDT) Received: by 10.140.98.163 with HTTP; Tue, 25 Aug 2015 15:45:31 -0700 (PDT) In-Reply-To: References: Date: Tue, 25 Aug 2015 15:45:31 -0700 Message-ID: Subject: Re: passthru requires guest memory to be wired From: Neel Natu To: Craig Rodrigues Cc: "freebsd-virtualization@freebsd.org" Content-Type: text/plain; charset=UTF-8 X-BeenThere: freebsd-virtualization@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: "Discussion of various virtualization techniques FreeBSD supports." List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 25 Aug 2015 22:45:33 -0000 Hi Craig, On Tue, Aug 25, 2015 at 11:53 AM, Craig Rodrigues wrote: > Hi, > > I updated one of my FreeBSD boxes to latest CURRENT, and when I started > a bhyve VM which uses PCI passthru, I got this error: > > passthru requires guest memory to be wired > > According to this commit: > https://reviews.freebsd.org/rS284539 > > '-S' needs to be passed to bhyveload *and* bhyve if PCI passthru is used. > > It looks like this change did not make it to 10.2R. Will this change go > into stable/10? > No, there are no plans to MFC the change. > Should this info be added to: https://wiki.freebsd.org/bhyve/pci_passthru > Yup, added it just now. Thanks for pointing it out. 
best Neel > -- > Craig > _______________________________________________ > freebsd-virtualization@freebsd.org mailing list > https://lists.freebsd.org/mailman/listinfo/freebsd-virtualization > To unsubscribe, send any mail to "freebsd-virtualization-unsubscribe@freebsd.org" From owner-freebsd-virtualization@freebsd.org Tue Aug 25 23:38:31 2015 Return-Path: Delivered-To: freebsd-virtualization@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 6DB279C2DDC for ; Tue, 25 Aug 2015 23:38:31 +0000 (UTC) (envelope-from crodr001@gmail.com) Received: from mail-la0-x22f.google.com (mail-la0-x22f.google.com [IPv6:2a00:1450:4010:c03::22f]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id E8C3FD04 for ; Tue, 25 Aug 2015 23:38:30 +0000 (UTC) (envelope-from crodr001@gmail.com) Received: by laba3 with SMTP id a3so108888799lab.1 for ; Tue, 25 Aug 2015 16:38:28 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:sender:in-reply-to:references:date:message-id:subject :from:to:cc:content-type; bh=hoauOBUSeQXAkEgZTQE0ApysJRYkLupqLg/SSKAyhqk=; b=UmVAoOVSFA58rtnxjw5MlA/qMV6oRkN1xY3cDmg4xQZ6473G/olKl/NLlJ/L8uqT3J scjodiGr/xXhojterK6bulLPzCZ1gR429QZX3RsSzq5E0IiSj0kwIFMg1eMBd3qWiqCe UXAx+ocZCTLFJPOxdIrW/0lUZBk9C4lUj8o73wrQjnCUOuZydn1F8aVb9021ILurX0tP PU4aOdK2I4h9daiqLIMuqxKXwfMXg75Y/biKAc+ajpnl1pB0CmpvJcbXivAXsssHZxVa h0ybXvsb6kZVf5dUPQ2WIHIryQVUGgwEYpY1ygzwZJQTGwYwM3js3bge6pey9FU8CPqw q8Kg== MIME-Version: 1.0 X-Received: by 10.152.18.232 with SMTP id z8mr28246475lad.66.1440545908705; Tue, 25 Aug 2015 16:38:28 -0700 (PDT) Sender: crodr001@gmail.com Received: by 10.112.143.5 with HTTP; Tue, 25 Aug 2015 16:38:28 -0700 (PDT) In-Reply-To: References: Date: Tue, 25 Aug 2015 16:38:28 -0700 X-Google-Sender-Auth: VxXElT4wKEFGoSZOLXK4J9zs9xs Message-ID: Subject: Re: passthru requires guest memory to be wired From: Craig Rodrigues To: Neel Natu Cc: "freebsd-virtualization@freebsd.org" Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.20 X-BeenThere: freebsd-virtualization@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: "Discussion of various virtualization techniques FreeBSD supports." List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 25 Aug 2015 23:38:31 -0000 On Tue, Aug 25, 2015 at 3:45 PM, Neel Natu wrote: > Hi Craig, > > > No, there are no plans to MFC the change. > > > Should this info be added to: > https://wiki.freebsd.org/bhyve/pci_passthru > > > > Yup, added it just now. Thanks for pointing it out. > > Thanks. I would recommend that you also make the error message slightly more "bozo-friendly". Changing: "passthru requires guest memory to be wired" to "passthru requires guest memory to be wired, please use -S" or something similar. That would eliminate one round-trip of looking up the man page and docs to figure out what is wrong. 
-- Craig From owner-freebsd-virtualization@freebsd.org Wed Aug 26 00:28:14 2015 Return-Path: Delivered-To: freebsd-virtualization@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 7D26699A41B for ; Wed, 26 Aug 2015 00:28:14 +0000 (UTC) (envelope-from crodr001@gmail.com) Received: from mail-la0-x22d.google.com (mail-la0-x22d.google.com [IPv6:2a00:1450:4010:c03::22d]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 05C1F672 for ; Wed, 26 Aug 2015 00:28:14 +0000 (UTC) (envelope-from crodr001@gmail.com) Received: by labgv11 with SMTP id gv11so41525217lab.2 for ; Tue, 25 Aug 2015 17:28:11 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:sender:in-reply-to:references:date:message-id:subject :from:to:content-type; bh=XdEYQ+uTrBEf4OJ2p8HfXrcvjvgiKcZt0vqfwySRPDw=; b=f04Q32pgHruJReAc+lBNv7ziH+00fKZEiP7AsyExVoZHilYKgGK/pJMwBLksCNufTf VCpL1IxNkZekq/76SzhBtDaK7yTjEIhUnPkWpP8LEv49YoAn9SX72LHsDo4WRv5+M+xY TsTxA7I6lsmXqiXdhypyhJ1wDQOvYGrpyEy0hXCsNlwld/uaWR8LbHpZSovzPbBdDUle az5iWGKwpmUbb+p1LNYXOosXx+vLczWtwALuPL7i0bMf0OojH8ihdKBZ0zinO7T2Qrwj h9KZrD8Rh3p/Yu3CMIgVXushO+pq99xmyysie4IEeb8MX7MvHzRvLtypo8oRxkNF97z8 wRBw== MIME-Version: 1.0 X-Received: by 10.152.18.232 with SMTP id z8mr28405704lad.66.1440548891021; Tue, 25 Aug 2015 17:28:11 -0700 (PDT) Sender: crodr001@gmail.com Received: by 10.112.143.5 with HTTP; Tue, 25 Aug 2015 17:28:10 -0700 (PDT) In-Reply-To: References: Date: Tue, 25 Aug 2015 17:28:10 -0700 X-Google-Sender-Auth: XqQ6mMIzU4T6MbMvscMl2wliN4U Message-ID: Subject: Re: passthru requires guest memory to be wired From: Craig Rodrigues To: "freebsd-virtualization@freebsd.org" Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.20 X-BeenThere: freebsd-virtualization@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: "Discussion of various virtualization techniques FreeBSD supports." List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 26 Aug 2015 00:28:14 -0000 On Tue, Aug 25, 2015 at 11:53 AM, Craig Rodrigues wrote: > > According to this commit: > https://reviews.freebsd.org/rS284539 > > '-S' needs to be passed to bhyveload *and* bhyve if PCI passthru is used. > > Does grub-bhyve need this as well? 
-- Craig From owner-freebsd-virtualization@freebsd.org Wed Aug 26 00:29:50 2015 Return-Path: Delivered-To: freebsd-virtualization@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 8321699A48A for ; Wed, 26 Aug 2015 00:29:50 +0000 (UTC) (envelope-from grehan@freebsd.org) Received: from iredmail.onthenet.com.au (iredmail.onthenet.com.au [203.13.68.150]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 4528B6D3 for ; Wed, 26 Aug 2015 00:29:50 +0000 (UTC) (envelope-from grehan@freebsd.org) Received: from localhost (iredmail.onthenet.com.au [127.0.0.1]) by iredmail.onthenet.com.au (Postfix) with ESMTP id AA7B82811AB for ; Wed, 26 Aug 2015 10:29:47 +1000 (AEST) X-Amavis-Modified: Mail body modified (using disclaimer) - iredmail.onthenet.com.au Received: from iredmail.onthenet.com.au ([127.0.0.1]) by localhost (iredmail.onthenet.com.au [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id OVK1R325ghjW for ; Wed, 26 Aug 2015 10:29:47 +1000 (AEST) Received: from Peters-MacBook-Pro.local (unknown [64.245.0.210]) by iredmail.onthenet.com.au (Postfix) with ESMTPSA id A34F52811A5; Wed, 26 Aug 2015 10:29:44 +1000 (AEST) Subject: Re: passthru requires guest memory to be wired To: Craig Rodrigues References: Cc: "freebsd-virtualization@freebsd.org" From: Peter Grehan Message-ID: <55DD0876.5070207@freebsd.org> Date: Tue, 25 Aug 2015 17:29:42 -0700 User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.10; rv:38.0) Gecko/20100101 Thunderbird/38.1.0 MIME-Version: 1.0 In-Reply-To: Content-Type: text/plain; charset=windows-1252; format=flowed Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-virtualization@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: "Discussion of various virtualization techniques FreeBSD supports." List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 26 Aug 2015 00:29:50 -0000 Hi Craig, >> '-S' needs to be passed to bhyveload *and* bhyve if PCI passthru is used. >> >> > Does grub-bhyve need this as well? It does: I need to commit the change for this. later, Peter. 
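Pulling the thread together, a sketch of a passthru setup with wired guest memory, assuming a hypothetical device at PCI address 1/0/0 and a VM named guest0; -S is the wire-memory flag that the review referenced above added to both bhyveload(8) and bhyve(8), while grub-bhyve support was still pending at this point per Peter's note:

    # host /boot/loader.conf: reserve the device for passthru
    pptdevs="1/0/0"
    # wire guest memory in the loader and in bhyve, with matching sizes
    bhyveload -S -m 2048M -d /vm/guest0.img guest0
    bhyve -S -A -H -P -m 2048M \
        -s 0,hostbridge -s 1,lpc -s 2,virtio-blk,/vm/guest0.img \
        -s 5,passthru,1/0/0 -l com1,stdio guest0
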
From owner-freebsd-virtualization@freebsd.org Wed Aug 26 21:25:54 2015 Return-Path: Delivered-To: freebsd-virtualization@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id B3F149C2B2D for ; Wed, 26 Aug 2015 21:25:54 +0000 (UTC) (envelope-from vivek@khera.org) Received: from mail-wi0-x229.google.com (mail-wi0-x229.google.com [IPv6:2a00:1450:400c:c05::229]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 4C7AB1451 for ; Wed, 26 Aug 2015 21:25:54 +0000 (UTC) (envelope-from vivek@khera.org) Received: by wicja10 with SMTP id ja10so56458186wic.1 for ; Wed, 26 Aug 2015 14:25:52 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=khera.org; s=google11; h=mime-version:date:message-id:subject:from:to:content-type; bh=QJQkTJVCRj18givgZtOqSU+0qxLHTOmqUJ+ti6mjnUQ=; b=OD1r8Iy0Zan0ChwNxMlZ1qj2/fRDqiRGg6W+7Tgm2SaIHP72B6MrmkEeisjsY1Jkm/ f9P+6kfzuY6T0SskKAmVURwMUgk7JuZpAXTqlkjCQ8Ji/3cpJPR0JvtAjVqkhQMUfvo+ 0UJor6q1SZ8O9maN8xtJUhd1itusRjWDEtMjc= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:mime-version:date:message-id:subject:from:to :content-type; bh=QJQkTJVCRj18givgZtOqSU+0qxLHTOmqUJ+ti6mjnUQ=; b=Kmbom/G5Pg8Mq157XnuFX8E3qHN4u//kzW+Lg0cc39EGCNcv2a4JaROdd/FY+g7WIZ sQ/c3k6CaNc+xj7sG49hxY6epbqhbINPGP3HFt7HS1XwBfKGR4EaAuYOHGCgFSfIChFi PsdMiO3eF5DcgXPpjCEDCCiXtgqARpP6z5tiHL34xBfdI1RULe3MFy0LNUaUSEILcpeS STQxPXxI/5EODYs4r8D4TiZ0ztT4YRhKFydOEmgL0PVpxqqs768vC58Hiw2xD3gIrSeG NknskL4hLHLpKAT9YvFmW5lPkYwMcjg/hbPkypa5m5nfImsHqi2SSgBH1hYAIh1y1Pck QshQ== X-Gm-Message-State: ALoCoQl7u8B64Eb61hL8WjnfuBmCeB0RGRSBIVoEI5rgyKX9ddrbVBprI4Ld3OtQagji9wugXTRp MIME-Version: 1.0 X-Received: by 10.180.187.137 with SMTP id fs9mr15662728wic.10.1440624352805; Wed, 26 Aug 2015 14:25:52 -0700 (PDT) Received: by 10.28.188.132 with HTTP; Wed, 26 Aug 2015 14:25:52 -0700 (PDT) Date: Wed, 26 Aug 2015 17:25:52 -0400 Message-ID: Subject: Options for zfs inside a VM backed by zfs on the host From: Vick Khera To: freebsd-virtualization@freebsd.org Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.20 X-BeenThere: freebsd-virtualization@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: "Discussion of various virtualization techniques FreeBSD supports." List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 26 Aug 2015 21:25:54 -0000 I'm running FreeBSD inside a VM that is providing the virtual disks backed by several ZFS zvols on the host. I want to run ZFS on the VM itself too for simplified management and backup purposes. The question I have is on the VM guest, do I really need to run a raid-z or mirror or can I just use a single virtual disk (or even a stripe)? Given that the underlying storage for the virtual disk is a zvol on a raid-z there should not really be too much worry for data corruption, I would think. It would be equivalent to using a hardware raid for each component of my zfs pool. Opinions? Preferably well-reasoned ones. 
:) From owner-freebsd-virtualization@freebsd.org Thu Aug 27 06:18:18 2015 Return-Path: Delivered-To: freebsd-virtualization@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 953C59C2BB6 for ; Thu, 27 Aug 2015 06:18:18 +0000 (UTC) (envelope-from marcus@odin.blazingdot.com) Received: from odin.blazingdot.com (odin.blazingdot.com [204.109.60.170]) by mx1.freebsd.org (Postfix) with ESMTP id ED2D2A9 for ; Thu, 27 Aug 2015 06:18:17 +0000 (UTC) (envelope-from marcus@odin.blazingdot.com) Received: by odin.blazingdot.com (Postfix, from userid 1001) id 215011324B2; Wed, 26 Aug 2015 23:10:44 -0700 (PDT) Date: Wed, 26 Aug 2015 23:10:44 -0700 From: Marcus Reid To: Vick Khera Cc: freebsd-virtualization@freebsd.org Subject: Re: Options for zfs inside a VM backed by zfs on the host Message-ID: <20150827061044.GA10221@blazingdot.com> References: MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: X-Coffee-Level: nearly-fatal User-Agent: Mutt/1.5.23 (2014-03-12) X-BeenThere: freebsd-virtualization@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: "Discussion of various virtualization techniques FreeBSD supports." List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 27 Aug 2015 06:18:18 -0000 On Wed, Aug 26, 2015 at 05:25:52PM -0400, Vick Khera wrote: > I'm running FreeBSD inside a VM that is providing the virtual disks backed > by several ZFS zvols on the host. I want to run ZFS on the VM itself too > for simplified management and backup purposes. > > The question I have is on the VM guest, do I really need to run a raid-z or > mirror or can I just use a single virtual disk (or even a stripe)? Given > that the underlying storage for the virtual disk is a zvol on a raid-z > there should not really be too much worry for data corruption, I would > think. It would be equivalent to using a hardware raid for each component > of my zfs pool. > > Opinions? Preferably well-reasoned ones. :) This is a frustrating situation, because none of the options that I can think of look particularly appealing. Single-vdev pools would be the best option, your redundancy is already taken care of by the host's pool. The overhead of checksumming, etc. twice is probably not super bad. However, having the ARC eating up lots of memory twice seems pretty bletcherous. You can probably do some tuning to reduce that, but I never liked tuning the ARC much. All the nice features ZFS brings to the table is hard to give up once you get used to having them around, so I understand your quandry. 
Marcus From owner-freebsd-virtualization@freebsd.org Thu Aug 27 06:20:15 2015 Return-Path: Delivered-To: freebsd-virtualization@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id D0A859C2C7B for ; Thu, 27 Aug 2015 06:20:15 +0000 (UTC) (envelope-from marcus@odin.blazingdot.com) Received: from odin.blazingdot.com (odin.blazingdot.com [204.109.60.170]) by mx1.freebsd.org (Postfix) with ESMTP id B99F6143 for ; Thu, 27 Aug 2015 06:20:15 +0000 (UTC) (envelope-from marcus@odin.blazingdot.com) Received: by odin.blazingdot.com (Postfix, from userid 1001) id 6ED251324B2; Wed, 26 Aug 2015 23:20:15 -0700 (PDT) Date: Wed, 26 Aug 2015 23:20:15 -0700 From: Marcus Reid To: Vick Khera Cc: freebsd-virtualization@freebsd.org Subject: Re: Options for zfs inside a VM backed by zfs on the host Message-ID: <20150827062015.GA10272@blazingdot.com> References: <20150827061044.GA10221@blazingdot.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20150827061044.GA10221@blazingdot.com> X-Coffee-Level: nearly-fatal User-Agent: Mutt/1.5.23 (2014-03-12) X-BeenThere: freebsd-virtualization@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: "Discussion of various virtualization techniques FreeBSD supports." List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 27 Aug 2015 06:20:15 -0000 On Wed, Aug 26, 2015 at 11:10:44PM -0700, Marcus Reid wrote: > On Wed, Aug 26, 2015 at 05:25:52PM -0400, Vick Khera wrote: > > Opinions? Preferably well-reasoned ones. :) > > However, having the ARC eating up lots of memory twice seems pretty > bletcherous. You can probably do some tuning to reduce that, but I > never liked tuning the ARC much. I just realized that you can turn primarycache off per-dataset. Does it make more sense to turn primarycache=none on the zvol on the host, or on the datasets in the vm? I'm thinking on the host, but it might be worth experimenting. 
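As a concrete version of the experiment Marcus suggests (the pool and dataset names are hypothetical):

    # on the host: stop caching the guest's data blocks in the host ARC
    zfs set primarycache=none tank/vm/guest0-disk0
    # or, inside the guest: cache only metadata, leaving data caching to the host
    zfs set primarycache=metadata zroot

Either setting can be reverted with primarycache=all once the performance of each variant has been measured.
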
Marcus From owner-freebsd-virtualization@freebsd.org Thu Aug 27 09:42:28 2015 Return-Path: Delivered-To: freebsd-virtualization@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 2E4749C4CE3 for ; Thu, 27 Aug 2015 09:42:28 +0000 (UTC) (envelope-from matt.churchyard@userve.net) Received: from smtp-outbound.userve.net (smtp-outbound.userve.net [217.196.1.22]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "*.userve.net", Issuer "Go Daddy Secure Certificate Authority - G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id D3ACF791 for ; Thu, 27 Aug 2015 09:42:27 +0000 (UTC) (envelope-from matt.churchyard@userve.net) Received: from owa.usd-group.com (owa.usd-group.com [217.196.1.2]) by smtp-outbound.userve.net (8.15.1/8.15.1) with ESMTPS id t7R9VuHU050816 (version=TLSv1 cipher=ECDHE-RSA-AES256-SHA bits=256 verify=FAIL); Thu, 27 Aug 2015 10:31:56 +0100 (BST) (envelope-from matt.churchyard@userve.net) DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=userve.net; s=201508; t=1440667919; bh=Hjul30xjdPl/DzeFkEBcXJBfmRfhhk4FqVmhHBwZvxw=; h=From:To:CC:Subject:Date:References:In-Reply-To; b=gpFYCcO8JScdw1pMGR/uR4XvzD7YyqVjAHT8aqgTQwupnpY44cHAnU90KLvn9kPJf 0iHTU6jEGJOZoqX6ZWB9tHOWDimRQLLCupT2u3lLCQY/+qI/AlqzMfeOLrd1OI5aYM sI5ekuwnlLF8nmiZcI0BHUpyEh5ndxzHHo20ZPvU= Received: from SERVER.ad.usd-group.com (192.168.0.1) by SERVER.ad.usd-group.com (192.168.0.1) with Microsoft SMTP Server (TLS) id 15.0.847.32; Thu, 27 Aug 2015 10:31:50 +0100 Received: from SERVER.ad.usd-group.com ([fe80::b19d:892a:6fc7:1c9]) by SERVER.ad.usd-group.com ([fe80::b19d:892a:6fc7:1c9%12]) with mapi id 15.00.0847.030; Thu, 27 Aug 2015 10:31:50 +0100 From: Matt Churchyard To: Marcus Reid , Vick Khera CC: "freebsd-virtualization@freebsd.org" Subject: RE: Options for zfs inside a VM backed by zfs on the host Thread-Topic: Options for zfs inside a VM backed by zfs on the host Thread-Index: AQHQ4EXX3kz9NYJz1USC2V7pGvPRBp4fTOcAgAACqYCAAEI88A== Date: Thu, 27 Aug 2015 09:31:50 +0000 Message-ID: <1a6745e27d184bb99eca7fdbdc90c8b5@SERVER.ad.usd-group.com> References: <20150827061044.GA10221@blazingdot.com> <20150827062015.GA10272@blazingdot.com> In-Reply-To: <20150827062015.GA10272@blazingdot.com> Accept-Language: en-GB, en-US Content-Language: en-US X-MS-Has-Attach: X-MS-TNEF-Correlator: x-originating-ip: [192.168.0.10] Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 X-BeenThere: freebsd-virtualization@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: "Discussion of various virtualization techniques FreeBSD supports." List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 27 Aug 2015 09:42:28 -0000 > On Wed, Aug 26, 2015 at 11:10:44PM -0700, Marcus Reid wrote: > On Wed, Aug 26, 2015 at 05:25:52PM -0400, Vick Khera wrote: > > > Opinions? Preferably well-reasoned ones. :) > > > > > However, having the ARC eating up lots of memory twice seems pretty > > bletcherous. You can probably do some tuning to reduce that, but I > > never liked tuning the ARC much. > I just realized that you can turn primarycache off per-dataset. Does it make more sense to turn primarycache=none on the zvol on the host, or > on the datasets in the vm? I'm thinking on the host, but it might be worth experimenting. 
I'd be very wary of disabling ARC on the main host, it can have pretty serious side effects. It could possibly be useful in the guest though, as data should be cached already by ARC on the host, you're just going through an extra step of reading through the virtual disk driver, and into host ARC, instead of directly from the guest memory. Would need testing to know what performance was like and if there are any side effects. I do agree that it doesn't seem necessary to have any redundancy in the guest if the host pool is redundant. Save for any glaring bugs in the virtual disk emulation, you wouldn't expect to get errors on the guest pool if the host pool is already checksumming the data. It's also worth testing with guest ARC enabled but just limited to a fairly small size, so you're not disabling it entirely, but doing as little double-caching as possible. ZFS features seem perfect for virtual hosts, although it's not ideal that you have to give up a big chunk of host RAM for ARC. You may also find that you need to limit host ARC, then only use "MAX_RAM - MY_ARC_LIMIT" for guests. Otherwise you'll have ZFS and VMs fighting for memory and enough of us have seen what shouldn't, but unfortunately does happen in that situation. Matt - > Marcus > _______________________________________________ > freebsd-virtualization@freebsd.org mailing list https://lists.freebsd.org/mailman/listinfo/freebsd-virtualization > To unsubscribe, send any mail to "freebsd-virtualization-unsubscribe@freebsd.org" From owner-freebsd-virtualization@freebsd.org Thu Aug 27 10:10:47 2015 Return-Path: Delivered-To: freebsd-virtualization@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id AE8629C3669 for ; Thu, 27 Aug 2015 10:10:47 +0000 (UTC) (envelope-from marieheleneka@gmail.com) Received: from mail-wi0-x235.google.com (mail-wi0-x235.google.com [IPv6:2a00:1450:400c:c05::235]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 44F4713B9 for ; Thu, 27 Aug 2015 10:10:47 +0000 (UTC) (envelope-from marieheleneka@gmail.com) Received: by widdq5 with SMTP id dq5so72931909wid.1 for ; Thu, 27 Aug 2015 03:10:45 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:references:in-reply-to:from:date:message-id:subject:to :cc:content-type; bh=AoHIhFw2FvlO51tNsDwjsWuYPVlOOf1XhzqxiH1Zx2g=; b=E7nAiDLriFnhDuKcgtWb7FL88zDXKbsN/UierWsptzCoOggCdkqWskBMFZ7w4fuJjZ XWitG/reHn93GYHGky2Tq3ENDkU3VoeB8mzM97ZI6Rj4n0xUkVLsPQXEAER1HaVmysia YF0OtyuDBpD1X6q/fQaaoMfUKjxoO2JH9YYUtuqvPk1FQd1+B6j9Ttxf9gBo6o2TRqN8 OflFNrP4dJvl1s6Ult2mp00X6IA7AmNXzpKLhd2e29Vc1dHtP8VvyNZvQSHW/Xf4EA5u H89BOrg+TmxQzrl7NTF8tvby5R/LhbSNwFIxYU275IOF7oiRDpKijnU0elc4w15VrL14 qYrA== X-Received: by 10.194.83.70 with SMTP id o6mr4079681wjy.44.1440670245672; Thu, 27 Aug 2015 03:10:45 -0700 (PDT) MIME-Version: 1.0 References: <20150827061044.GA10221@blazingdot.com> <20150827062015.GA10272@blazingdot.com> <1a6745e27d184bb99eca7fdbdc90c8b5@SERVER.ad.usd-group.com> In-Reply-To: <1a6745e27d184bb99eca7fdbdc90c8b5@SERVER.ad.usd-group.com> From: Marie Date: Thu, 27 Aug 2015 10:10:36 +0000 Message-ID: Subject: Re: Options for zfs inside a VM backed by zfs on the host To: Matt Churchyard , Marcus Reid , Vick Khera Cc: "freebsd-virtualization@freebsd.org" Content-Type: 
text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.20 X-BeenThere: freebsd-virtualization@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: "Discussion of various virtualization techniques FreeBSD supports." List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 27 Aug 2015 10:10:47 -0000 On Thu, Aug 27, 2015 at 11:42 AM Matt Churchyard via freebsd-virtualization wrote: > > On Wed, Aug 26, 2015 at 11:10:44PM -0700, Marcus Reid wrote: > > On Wed, Aug 26, 2015 at 05:25:52PM -0400, Vick Khera wrote: > > > > Opinions? Preferably well-reasoned ones. :) > > > > > > > However, having the ARC eating up lots of memory twice seems pretty > > > bletcherous. You can probably do some tuning to reduce that, but I > > > never liked tuning the ARC much. > > > I just realized that you can turn primarycache off per-dataset. Does it > make more sense to turn primarycache=none on the zvol on the host, or > on > the datasets in the vm? I'm thinking on the host, but it might be worth > experimenting. > > I'd be very wary of disabling ARC on the main host, it can have pretty > serious side effects. It could possibly be useful in the guest though, as > data should be cached already by ARC on the host, you're just going through > an extra step of reading through the virtual disk driver, and into host > ARC, instead of directly from the guest memory. Would need testing to know > what performance was like and if there are any side effects. > > I do agree that it doesn't seem unnecessary to have any redundancy in the > guest if the host pool is redundant. Save for any glaring bugs in the > virtual disk emulation, you wouldn't expect to get errors on the guest pool > if the host pool is already checksumming the data. > > It's also worth testing with guest ARC enabled but just limited to a > fairly small size, so you're not disabling it entirely, but doing at little > double-caching as possible. > > ZFS features seems perfect for virtual hosts, although it's not ideal that > you have to give up a big chunk of host RAM for ARC. You may also find that > you need to limit host ARC, then only use "MAX_RAM - MY_ARC_LIMIT" for > guests. Otherwise you'll have ZFS and VMs fighting for memory and enough of > us have seen what shouldn't, but unfortunately does happen in that > situation. > > Matt > - > > > Marcus > I've tried this in the past, and found the worst performance penalty was with ARC disabled in guest. I tried with ARC enabled on host and guest, only on host, only on guest. There was a significant performance penalty with either ARC disabled. I'd still recommend to experiment with it on your own to see if the hit is acceptable or not. Shameless plug: I'm working on a project (tunnelfs.io) which should be useful for this use case. :) Unfortunately, there is no ETA on usable code yet. 
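A middle ground between the all-or-nothing results Marie describes is to keep both ARCs but cap them, along the lines Matt suggests; the sizes below are arbitrary examples of the loader tunable involved:

    # host /boot/loader.conf: leave most RAM for bhyve guests
    vfs.zfs.arc_max="4G"
    # guest /boot/loader.conf: keep a small ARC rather than disabling caching
    vfs.zfs.arc_max="512M"
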
-- Marie Helene Kvello-Aune From owner-freebsd-virtualization@freebsd.org Thu Aug 27 14:46:29 2015 Return-Path: Delivered-To: freebsd-virtualization@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 0176B9C3DC5 for ; Thu, 27 Aug 2015 14:46:29 +0000 (UTC) (envelope-from allanjude@freebsd.org) Received: from mx1.scaleengine.net (mx1.scaleengine.net [209.51.186.6]) by mx1.freebsd.org (Postfix) with ESMTP id B8226120C for ; Thu, 27 Aug 2015 14:46:28 +0000 (UTC) (envelope-from allanjude@freebsd.org) Received: from [10.1.1.2] (unknown [10.1.1.2]) (Authenticated sender: allanjude.freebsd@scaleengine.com) by mx1.scaleengine.net (Postfix) with ESMTPSA id 51AD893BA for ; Thu, 27 Aug 2015 14:46:22 +0000 (UTC) Subject: Re: Options for zfs inside a VM backed by zfs on the host To: freebsd-virtualization@freebsd.org References: <20150827061044.GA10221@blazingdot.com> From: Allan Jude Message-ID: <55DF22D3.1040302@freebsd.org> Date: Thu, 27 Aug 2015 10:46:43 -0400 User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:38.0) Gecko/20100101 Thunderbird/38.2.0 MIME-Version: 1.0 In-Reply-To: <20150827061044.GA10221@blazingdot.com> Content-Type: multipart/signed; micalg=pgp-sha1; protocol="application/pgp-signature"; boundary="oaFB5gvVXRR7m9QmbFDpWecE7MhHSsq3P" X-BeenThere: freebsd-virtualization@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: "Discussion of various virtualization techniques FreeBSD supports." List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 27 Aug 2015 14:46:29 -0000 This is an OpenPGP/MIME signed message (RFC 4880 and 3156) Content-Type: text/plain; charset=windows-1252 Content-Transfer-Encoding: quoted-printable On 2015-08-27 02:10, Marcus Reid wrote: > On Wed, Aug 26, 2015 at 05:25:52PM -0400, Vick Khera wrote: >> I'm running FreeBSD inside a VM that is providing the virtual disks backed >> by several ZFS zvols on the host. I want to run ZFS on the VM itself too >> for simplified management and backup purposes. >> >> The question I have is on the VM guest, do I really need to run a raid-z or >> mirror or can I just use a single virtual disk (or even a stripe)? Given >> that the underlying storage for the virtual disk is a zvol on a raid-z >> there should not really be too much worry for data corruption, I would >> think. It would be equivalent to using a hardware raid for each component >> of my zfs pool. >> >> Opinions? Preferably well-reasoned ones. :) > > This is a frustrating situation, because none of the options that I can > think of look particularly appealing. Single-vdev pools would be the > best option, your redundancy is already taken care of by the host's > pool. The overhead of checksumming, etc. twice is probably not super > bad. However, having the ARC eating up lots of memory twice seems > pretty bletcherous. You can probably do some tuning to reduce that, but > I never liked tuning the ARC much. > > All the nice features ZFS brings to the table is hard to give up once > you get used to having them around, so I understand your quandry. 
> > Marcus > _______________________________________________ > freebsd-virtualization@freebsd.org mailing list > https://lists.freebsd.org/mailman/listinfo/freebsd-virtualization > To unsubscribe, send any mail to "freebsd-virtualization-unsubscribe@freebsd.org" > You can just: zfs set primarycache=metadata poolname And it will only cache metadata in the ARC inside the VM, and avoid caching data blocks, which will be cached outside the VM. You could even turn the primarycache off entirely. -- Allan Jude From owner-freebsd-virtualization@freebsd.org Thu Aug 27 17:06:59 2015 Return-Path: Delivered-To: freebsd-virtualization@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 251479C3278 for ; Thu, 27 Aug 2015 17:06:59 +0000 (UTC) (envelope-from crodr001@gmail.com) Received: from mail-yk0-x233.google.com (mail-yk0-x233.google.com [IPv6:2607:f8b0:4002:c07::233]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id D6CEAB61; Thu, 27 Aug 2015 17:06:58 +0000 (UTC) (envelope-from crodr001@gmail.com) Received: by ykbi184 with SMTP id i184so26803048ykb.2; Thu, 27 Aug 2015 10:06:58 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:sender:in-reply-to:references:date:message-id:subject :from:to:cc:content-type; bh=s7ThESjQC8/563q/FPvpRjHesIb98raWB1fX7pxit90=; b=Nn+UTWzqofIKaZ166GXvK7jK4dVKfYJnrBkJoCFCEE3BvYvEvuXU+SKPsB7uNtglLE 9j3+mWhvxXRcr78Nq5aYxqkDt17Z0/dB+T1OWvlSvpsboEUr4VD1J2tVe4mbSpvozDG/ i4PCqjae7cEmgkZfJBY4lpTRzQiqTAxjN1xR0U1Zk5f683DOVs29/cU+GxYiSuSDsb/5 xRzL2bRSk+LzYmI+QJLod9HG3rbCtmab7Wh/va79mBYmRpWvcZc+nruoUy0sK9xuQmWi LAn0uMkHUykvOsfFn1EQtm/PrXsWHt7cAFqhxXdlPWnXYE4plFrsgU4SoR13bA4iC1ea 4t3A== MIME-Version: 1.0 X-Received: by 10.170.112.194 with SMTP id e185mr3705606ykb.119.1440695217934; Thu, 27 Aug 2015 10:06:57 -0700 (PDT) Sender: crodr001@gmail.com Received: by 10.37.99.3 with HTTP; Thu, 27 Aug 2015 10:06:57 -0700 (PDT) In-Reply-To: <55DD0876.5070207@freebsd.org> References: <55DD0876.5070207@freebsd.org> Date: Thu, 27 Aug 2015 10:06:57 -0700 X-Google-Sender-Auth: HFiHWZqLlnW7DOSwM_GInrw7d2I Message-ID: Subject: Re: passthru requires guest memory to be wired From: Craig Rodrigues To: Peter Grehan Cc: 
"freebsd-virtualization@freebsd.org" Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.20 X-BeenThere: freebsd-virtualization@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: "Discussion of various virtualization techniques FreeBSD supports." List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 27 Aug 2015 17:06:59 -0000 On Tue, Aug 25, 2015 at 5:29 PM, Peter Grehan wrote: > Hi Craig, > > '-S' needs to be passed to bhyveload *and* bhyve if PCI passthru is used. >>> >>> >>> Does grub-bhyve need this as well? >> > > It does: I need to commit the change for this. > > Do you have a patch for this that I can use? I need this to restore functionality in my bhyve VM environment that I am working with. -- Craig From owner-freebsd-virtualization@freebsd.org Thu Aug 27 17:20:56 2015 Return-Path: Delivered-To: freebsd-virtualization@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id C47EA9C37C6 for ; Thu, 27 Aug 2015 17:20:56 +0000 (UTC) (envelope-from paul@redbarn.org) Received: from family.redbarn.org (family.redbarn.org [IPv6:2001:559:8000:cd::5]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id AFF33163C for ; Thu, 27 Aug 2015 17:20:56 +0000 (UTC) (envelope-from paul@redbarn.org) Received: from [IPv6:2001:559:8000:cb:2598:ad18:8548:666e] (unknown [IPv6:2001:559:8000:cb:2598:ad18:8548:666e]) (using TLSv1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by family.redbarn.org (Postfix) with ESMTPSA id C996918779; Thu, 27 Aug 2015 17:20:55 +0000 (UTC) Message-ID: <55DF46F5.4070406@redbarn.org> Date: Thu, 27 Aug 2015 10:20:53 -0700 From: Paul Vixie User-Agent: Postbox 4.0.3 (Windows/20150805) MIME-Version: 1.0 To: Matt Churchyard CC: Marcus Reid , Vick Khera , "freebsd-virtualization@freebsd.org" Subject: Re: Options for zfs inside a VM backed by zfs on the host References: <20150827061044.GA10221@blazingdot.com> <20150827062015.GA10272@blazingdot.com> <1a6745e27d184bb99eca7fdbdc90c8b5@SERVER.ad.usd-group.com> In-Reply-To: <1a6745e27d184bb99eca7fdbdc90c8b5@SERVER.ad.usd-group.com> X-Enigmail-Version: 1.2.3 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-virtualization@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: "Discussion of various virtualization techniques FreeBSD supports." List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 27 Aug 2015 17:20:56 -0000 let me ask a related question: i'm using FFS in the guest, zvol on the host. should i be telling my guest kernel to not bother with an FFS buffer cache at all, or to use a smaller one, or what? 
From owner-freebsd-virtualization@freebsd.org Thu Aug 27 17:51:38 2015 Return-Path: Delivered-To: freebsd-virtualization@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id C34B09C229E for ; Thu, 27 Aug 2015 17:51:38 +0000 (UTC) (envelope-from vivek@khera.org) Received: from mail-wi0-x22f.google.com (mail-wi0-x22f.google.com [IPv6:2a00:1450:400c:c05::22f]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 582F29BE for ; Thu, 27 Aug 2015 17:51:38 +0000 (UTC) (envelope-from vivek@khera.org) Received: by wicgk12 with SMTP id gk12so16591145wic.1 for ; Thu, 27 Aug 2015 10:51:36 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=khera.org; s=google11; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=luzyKxd8uLeAKwq868zsbYuAFaS3CB6R5yAtcSvVLcE=; b=UiwJ/fNPYNyLq/OW3HQ4KMuknt363fehQg2GfScucLvjDS3OcmcckZ7Uco5KveoojN sjU1N/1zzjoR+1j398a71/8psdc1sPI6TMYc+HqL7az5jl+jgYgt/p9JYvc26dOa1+B4 9ndPLPx6BotsaDa5g3615sX1MyJM7dhf2CNJM= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:mime-version:in-reply-to:references:date :message-id:subject:from:to:cc:content-type; bh=luzyKxd8uLeAKwq868zsbYuAFaS3CB6R5yAtcSvVLcE=; b=gAFrXcSuFBjryQAGbPyTtrnv8Js+THWSvGgc8MjZRVLEPRVvSolWz3d+1/T1eC5oxv CDxIIDdnm3jOHb/3DWbxaysS83xUX9k34Mo9Pa01gQenuJ/gsa/ztVxhL33E9bvYDBza 39jOI/c7Dz1TpI4wKQ3GeNVHOJnkd6KKi+WZNZhv76q4lbuDV4gyktTPa6e8MA+LbtEn HEukZnwvNXRhpypqfsaRfXjpHe6KILNO8Eh2MPVdSQQdiLXunVrBr+cyS3aVlxohjMUn pW7IUt69TXCFOMvEG32p/frV5ObGtBZOQb6lCYnhVQOj2v0E+4VZ+YKI3ikv+Qdfgufi roSg== X-Gm-Message-State: ALoCoQknYT+wrHwm4zVpDoGH322yW8icueD2UDe7zuac5YcKz2Rxj4NhK/YwTMfy/mQR4ZB2EDPn MIME-Version: 1.0 X-Received: by 10.180.23.33 with SMTP id j1mr11713213wif.44.1440697896531; Thu, 27 Aug 2015 10:51:36 -0700 (PDT) Received: by 10.28.188.132 with HTTP; Thu, 27 Aug 2015 10:51:36 -0700 (PDT) In-Reply-To: References: <20150827061044.GA10221@blazingdot.com> <20150827062015.GA10272@blazingdot.com> <1a6745e27d184bb99eca7fdbdc90c8b5@SERVER.ad.usd-group.com> Date: Thu, 27 Aug 2015 13:51:36 -0400 Message-ID: Subject: Re: Options for zfs inside a VM backed by zfs on the host From: Vick Khera To: Marie Cc: Matt Churchyard , Marcus Reid , "freebsd-virtualization@freebsd.org" Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.20 X-BeenThere: freebsd-virtualization@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: "Discussion of various virtualization techniques FreeBSD supports." List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 27 Aug 2015 17:51:39 -0000 On Thu, Aug 27, 2015 at 6:10 AM, Marie wrote: > I've tried this in the past, and found the worst performance penalty was > with ARC disabled in guest. I tried with ARC enabled on host and guest, > only on host, only on guest. There was a significant performance penalty > with either ARC disabled. > > I'd still recommend to experiment with it on your own to see if the hit i= s > acceptable or not. > Thanks for all the replies. I'm going with a small-ish ARC on the VMs (about =C2=BC the allocated RAM as max, and very small amount for min) and letting the host have its substantial ARC. 
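(For reference, the guest-side cap Vick describes, roughly a quarter of the guest's RAM as the maximum plus a small minimum, would be loader tunables inside the guest. A 4 GB guest is assumed here purely for illustration.)

    # /boot/loader.conf inside a hypothetical 4 GB guest
    vfs.zfs.arc_max="1G"
    vfs.zfs.arc_min="128M"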
Since I'm running with compression=3Dlz4 on the guest, I ended up setting compression=3Dnone on the host for the backing volumes. After some testing = I found I was getting no compression on the backing volumes, so why waste the CPU overhead trying. From owner-freebsd-virtualization@freebsd.org Thu Aug 27 19:53:46 2015 Return-Path: Delivered-To: freebsd-virtualization@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id AEDE59C4BA7; Thu, 27 Aug 2015 19:53:46 +0000 (UTC) (envelope-from milios@ccsys.com) Received: from cargobay.net (cargobay.net [198.178.123.147]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 8B5E81FB8; Thu, 27 Aug 2015 19:53:46 +0000 (UTC) (envelope-from milios@ccsys.com) Received: from [192.168.0.2] (cblmdm72-240-160-19.buckeyecom.net [72.240.160.19]) by cargobay.net (Postfix) with ESMTPSA id 0D69AD31; Thu, 27 Aug 2015 19:49:59 +0000 (UTC) Content-Type: text/plain; charset=utf-8 Mime-Version: 1.0 (Mac OS X Mail 8.2 \(2104\)) Subject: Re: Options for zfs inside a VM backed by zfs on the host From: "Chad J. Milios" In-Reply-To: <55DF46F5.4070406@redbarn.org> Date: Thu, 27 Aug 2015 15:53:42 -0400 Cc: Matt Churchyard , Vick Khera , allanjude@freebsd.org, "freebsd-virtualization@freebsd.org" , freebsd-fs@freebsd.org Content-Transfer-Encoding: quoted-printable Message-Id: <453A5A6F-E347-41AE-8CBC-9E0F4DA49D38@ccsys.com> References: <20150827061044.GA10221@blazingdot.com> <20150827062015.GA10272@blazingdot.com> <1a6745e27d184bb99eca7fdbdc90c8b5@SERVER.ad.usd-group.com> <55DF46F5.4070406@redbarn.org> To: Paul Vixie X-Mailer: Apple Mail (2.2104) X-BeenThere: freebsd-virtualization@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: "Discussion of various virtualization techniques FreeBSD supports." List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 27 Aug 2015 19:53:46 -0000 > On Aug 27, 2015, at 10:46 AM, Allan Jude = wrote: >=20 > On 2015-08-27 02:10, Marcus Reid wrote: >> On Wed, Aug 26, 2015 at 05:25:52PM -0400, Vick Khera wrote: >>> I'm running FreeBSD inside a VM that is providing the virtual disks = backed >>> by several ZFS zvols on the host. I want to run ZFS on the VM itself = too >>> for simplified management and backup purposes. >>>=20 >>> The question I have is on the VM guest, do I really need to run a = raid-z or >>> mirror or can I just use a single virtual disk (or even a stripe)? = Given >>> that the underlying storage for the virtual disk is a zvol on a = raid-z >>> there should not really be too much worry for data corruption, I = would >>> think. It would be equivalent to using a hardware raid for each = component >>> of my zfs pool. >>>=20 >>> Opinions? Preferably well-reasoned ones. :) >>=20 >> This is a frustrating situation, because none of the options that I = can >> think of look particularly appealing. Single-vdev pools would be the >> best option, your redundancy is already taken care of by the host's >> pool. The overhead of checksumming, etc. twice is probably not super >> bad. However, having the ARC eating up lots of memory twice seems >> pretty bletcherous. You can probably do some tuning to reduce that, = but >> I never liked tuning the ARC much. 
>> >> All the nice features ZFS brings to the table are hard to give up once >> you get used to having them around, so I understand your quandary. >> >> Marcus > > You can just: > > zfs set primarycache=metadata poolname > > And it will only cache metadata in the ARC inside the VM, and avoid > caching data blocks, which will be cached outside the VM. You could even > turn the primarycache off entirely. > > -- > Allan Jude > On Aug 27, 2015, at 1:20 PM, Paul Vixie wrote: > > let me ask a related question: i'm using FFS in the guest, zvol on the > host. should i be telling my guest kernel to not bother with an FFS > buffer cache at all, or to use a smaller one, or what? Whether we are talking ffs, ntfs or zpool atop zvol, unfortunately there are really no simple answers. You must consider your use case, the host and vm hardware/software configuration, perform meaningful benchmarks and, if you care about data integrity, thorough tests of the likely failure modes (all far more easily said than done). I'm curious to hear more about your use case(s) and setups so as to offer better insight on what alternatives may make more/less sense for you. Performance needs? Are you striving for lower individual latency or higher combined throughput? How critical are integrity and availability? How do you prefer your backup routine? Do you handle that in guest or host? Want features like dedup and/or L2ARC up in the mix? (Then everything bears reconsideration; just about triple your research and testing efforts.) Sorry, I'm really not trying to scare anyone away from ZFS. It is awesome and capable of providing amazing solutions with very reliable and sensible behavior if handled with due respect, fear, monitoring and upkeep. :) There are cases to be made for caching [meta-]data in the child, in the parent, checksumming in the child/parent/both, compressing in the child/parent. I believe `gstat` along with your custom-made benchmark or test load will greatly help guide you. ZFS on ZFS seems to be a hardly studied, seldom reported, never documented, tedious exercise. Prepare for accelerated greying and balding of your hair. The parent's volblocksize, the child's ashift, alignment, and interactions involving raidz stripes (if used) can lead to problems ranging from slightly decreased performance and storage efficiency to pathological write amplification within ZFS, with performance and responsiveness crashing and sinking to the bottom of the ocean. Some datasets can become veritable black holes to vfs system calls. You may see ZFS reporting elusive errors, deadlocking or panicking in the child or parent altogether. With diligence, though, stable and performant setups can be discovered for many production situations. For example, for a zpool (whether used by a VM or not, locally, thru iscsi, ggate[cd], or whatever) atop a zvol which sits on a parent zpool with no redundancy, I would set primarycache=metadata checksum=off compression=off for the zvol(s) on the host(s) and for the most part just use the same zpool settings and sysctl tunings in the VM (or child zpool, whatever role it may conduct) that i would otherwise use on bare cpu and bare drives (defaults + compression=lz4 atime=off). However, that simple case is likely not yours.
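(Condensed into commands, the simple case sketched above might look like the following. The pool, zvol and device names are hypothetical, and these settings are a starting point to benchmark rather than a recommendation beyond what is stated above.)

    # host: zvol backing a guest that runs its own zpool and does its
    # own checksumming and compression
    zfs create -V 40G tank/vols/guest0
    zfs set primarycache=metadata tank/vols/guest0
    zfs set checksum=off tank/vols/guest0
    zfs set compression=off tank/vols/guest0

    # guest: near-default pool on the virtual disk
    zpool create data vtbd1
    zfs set compression=lz4 data
    zfs set atime=off data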
With ufs/ffs/ntfs/ext4 and most other filesystems atop a zvol i use checksums on the parent zvol, and compression too if the child doesn't support it (as ntfs can), but still caching only metadata on the host and letting the child vm/fs cache real data. My use case involves charging customers for their memory use so admittedly that is one motivating factor, LOL. Plus, i certainly don't want one rude VM marching through host ARC unfairly evacuating and starving the other polite neighbors. VM's swap space becomes another consideration and I treat it like any other 'dumb' filesystem with compression and checksumming done by the parent but recent versions of many operating systems may be paging out only already compressed data, so investigate your guest OS. I've found lz4's claims of an almost-no-penalty early-abort to be vastly overstated when dealing with zvols, small block sizes and high throughput so if you can be certain you'll be dealing with only compressed data then turn it off. For the virtual memory pagers in most current-day OS's though set compression on the swap's backing zvol to lz4. Another factor is the ZIL. One VM can hoard your synchronous write performance. Solutions are beyond the scope of this already-too-long email :) but I'd be happy to elaborate if queried. And then there's always netbooting guests from NFS mounts served by the host and giving the guest no virtual disks, don't forget to consider that option. Hope this provokes some fruitful ideas for you. Glad to philosophize about ZFS setups with ya'll :) -chad From owner-freebsd-virtualization@freebsd.org Thu Aug 27 23:47:26 2015 Return-Path: Delivered-To: freebsd-virtualization@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 4B15E9C3C7F; Thu, 27 Aug 2015 23:47:26 +0000 (UTC) (envelope-from tenzin.lhakhang@gmail.com) Received: from mail-lb0-x230.google.com (mail-lb0-x230.google.com [IPv6:2a00:1450:4010:c04::230]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id B1B7095A; Thu, 27 Aug 2015 23:47:25 +0000 (UTC) (envelope-from tenzin.lhakhang@gmail.com) Received: by lbbtg9 with SMTP id tg9so20949419lbb.1; Thu, 27 Aug 2015 16:47:22 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=aJtSgo6ud7CKF2drcW7Uf7J85ELv4p/hDDkXHJEN4Vw=; b=v24aQ9+lheQSC9r1U8S0DgGaMcWVSF5sbuwqRGhN0gNgrXXBTFZXLXZe9m6/78GX1l 3wWHD+JjIcV4YhhLa67Wz2KQleoSF9iQH7sysNuABezzlX//D6qvZfctEYScKAOTKSBY RD4pyesOIcyn75B83RiQ4CigMvBymv4fF+Ox4e/Z8T4R09YMsHZzpME6SU3EAPEzdonM 81qJP/s1kil0TPaPuA2Zfu02li+kj47vf1hcoFgiTiLJKzr9LtGSLKMapQ0vqYa89A6D ecD89X/u8wuGIo1IBcJRDSHa2vKD1ZK+Soo5NMiD1bt2ugFnBbwPZIOerAnjDTFkP8lk qX9g== MIME-Version: 1.0 X-Received: by 10.112.204.162 with SMTP id kz2mr3414817lbc.115.1440719242475; Thu, 27 Aug 2015 16:47:22 -0700 (PDT) Received: by 10.25.127.9 with HTTP; Thu, 27 Aug 2015 16:47:22 -0700 (PDT) In-Reply-To: <453A5A6F-E347-41AE-8CBC-9E0F4DA49D38@ccsys.com> References: <20150827061044.GA10221@blazingdot.com> <20150827062015.GA10272@blazingdot.com> <1a6745e27d184bb99eca7fdbdc90c8b5@SERVER.ad.usd-group.com>
<55DF46F5.4070406@redbarn.org> <453A5A6F-E347-41AE-8CBC-9E0F4DA49D38@ccsys.com> Date: Thu, 27 Aug 2015 19:47:22 -0400 Message-ID: Subject: Re: Options for zfs inside a VM backed by zfs on the host From: Tenzin Lhakhang To: "Chad J. Milios" Cc: Paul Vixie , freebsd-fs@freebsd.org, Vick Khera , Matt Churchyard , "freebsd-virtualization@freebsd.org" , allanjude@freebsd.org Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.20 X-BeenThere: freebsd-virtualization@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: "Discussion of various virtualization techniques FreeBSD supports." List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 27 Aug 2015 23:47:26 -0000 That was a really awesome read! The idea of turning metadata on at the backend zpool and then data on the VM was interesting, I will give that a try. Please can you elaborate more on the ZILs and synchronous writes by VMs.. that seems like a great topic. - I am right now exploring the question: are SSD ZILs necessary in an all SSD pool? and then the question of NVMe SSD ZILs onto of an all SSD pool. My guess at the moment is that SSD ZILs are not necessary at all in an SSD pool during intensive IO. I've been told that ZILs are always there to help you, but when your pool aggregate IOPs is greater than the a ZIL, it doesn't seem to make sense.. Or is it the latency of writing to a single disk vs striping across your "fast" vdevs? Thanks, Tenzin On Thu, Aug 27, 2015 at 3:53 PM, Chad J. Milios wrote: > > On Aug 27, 2015, at 10:46 AM, Allan Jude wrote: > > > > On 2015-08-27 02:10, Marcus Reid wrote: > >> On Wed, Aug 26, 2015 at 05:25:52PM -0400, Vick Khera wrote: > >>> I'm running FreeBSD inside a VM that is providing the virtual disks > backed > >>> by several ZFS zvols on the host. I want to run ZFS on the VM itself > too > >>> for simplified management and backup purposes. > >>> > >>> The question I have is on the VM guest, do I really need to run a > raid-z or > >>> mirror or can I just use a single virtual disk (or even a stripe)? > Given > >>> that the underlying storage for the virtual disk is a zvol on a raid-= z > >>> there should not really be too much worry for data corruption, I woul= d > >>> think. It would be equivalent to using a hardware raid for each > component > >>> of my zfs pool. > >>> > >>> Opinions? Preferably well-reasoned ones. :) > >> > >> This is a frustrating situation, because none of the options that I ca= n > >> think of look particularly appealing. Single-vdev pools would be the > >> best option, your redundancy is already taken care of by the host's > >> pool. The overhead of checksumming, etc. twice is probably not super > >> bad. However, having the ARC eating up lots of memory twice seems > >> pretty bletcherous. You can probably do some tuning to reduce that, b= ut > >> I never liked tuning the ARC much. > >> > >> All the nice features ZFS brings to the table is hard to give up once > >> you get used to having them around, so I understand your quandry. > >> > >> Marcus > > > > You can just: > > > > zfs set primarycache=3Dmetadata poolname > > > > And it will only cache metadata in the ARC inside the VM, and avoid > > caching data blocks, which will be cached outside the VM. You could eve= n > > turn the primarycache off entirely. 
> > > > -- > > Allan Jude > > > On Aug 27, 2015, at 1:20 PM, Paul Vixie wrote: > > > > let me ask a related question: i'm using FFS in the guest, zvol on the > > host. should i be telling my guest kernel to not bother with an FFS > > buffer cache at all, or to use a smaller one, or what? > > > Whether we are talking ffs, ntfs or zpool atop zvol, unfortunately there > are really no simple answers. You must consider your use case, the host a= nd > vm hardware/software configuration, perform meaningful benchmarks and, if > you care about data integrity, thorough tests of the likely failure modes > (all far more easily said than done). I=E2=80=99m curious to hear more ab= out your > use case(s) and setups so as to offer better insight on what alternatives > may make more/less sense for you. Performance needs? Are you striving for > lower individual latency or higher combined throughput? How critical are > integrity and availability? How do you prefer your backup routine? Do you > handle that in guest or host? Want features like dedup and/or L2ARC up in > the mix? (Then everything bears reconsideration, just about triple your > research and testing efforts.) > > Sorry, I=E2=80=99m really not trying to scare anyone away from ZFS. It is= awesome > and capable of providing amazing solutions with very reliable and sensibl= e > behavior if handled with due respect, fear, monitoring and upkeep. :) > > There are cases to be made for caching [meta-]data in the child, in the > parent, checksumming in the child/parent/both, compressing in the > child/parent. I believe `gstat` along with your custom-made benchmark or > test load will greatly help guide you. > > ZFS on ZFS seems to be a hardly studied, seldom reported, never > documented, tedious exercise. Prepare for accelerated greying and balding > of your hair. The parent's volblocksize, child's ashift, alignment, > interactions involving raidz stripes (if used) can lead to problems from > slightly decreased performance and storage efficiency to pathological wri= te > amplification within ZFS, performance and responsiveness crashing and > sinking to the bottom of the ocean. Some datasets can become veritable > black holes to vfs system calls. You may see ZFS reporting elusive errors= , > deadlocking or panicing in the child or parent altogether. With diligence > though, stable and performant setups can be discovered for many productio= n > situations. > > For example, for a zpool (whether used by a VM or not, locally, thru > iscsi, ggate[cd], or whatever) atop zvol which sits on parent zpool with = no > redundancy, I would set primarycache=3Dmetadata checksum=3Doff compressio= n=3Doff > for the zvol(s) on the host(s) and for the most part just use the same > zpool settings and sysctl tunings in the VM (or child zpool, whatever rol= e > it may conduct) that i would otherwise use on bare cpu and bare drives > (defaults + compression=3Dlz4 atime=3Doff). However, that simple case is = likely > not yours. > > With ufs/ffs/ntfs/ext4 and most other filesystems atop a zvol i use > checksums on the parent zvol, and compression too if the child doesn=E2= =80=99t > support it (as ntfs can), but still caching only metadata on the host and > letting the child vm/fs cache real data. > > My use case involves charging customers for their memory use so admittedl= y > that is one motivating factor, LOL. Plus, i certainly don=E2=80=99t want = one rude > VM marching through host ARC unfairly evacuating and starving the other > polite neighbors. 
> > VM=E2=80=99s swap space becomes another consideration and I treat it like= any > other =E2=80=98dumb=E2=80=99 filesystem with compression and checksumming= done by the > parent but recent versions of many operating systems may be paging out on= ly > already compressed data, so investigate your guest OS. I=E2=80=99ve found= lz4=E2=80=99s > claims of an almost-no-penalty early-abort to be vastly overstated when > dealing with zvols, small block sizes and high throughput so if you can b= e > certain you=E2=80=99ll be dealing with only compressed data then turn it = off. For > the virtual memory pagers in most current-day OS=E2=80=99s though set com= pression > on the swap=E2=80=99s backing zvol to lz4. > > Another factor is the ZIL. One VM can hoard your synchronous write > performance. Solutions are beyond the scope of this already-too-long emai= l > :) but I=E2=80=99d be happy to elaborate if queried. > > And then there=E2=80=99s always netbooting guests from NFS mounts served = by the > host and giving the guest no virtual disks, don=E2=80=99t forget to consi= der that > option. > > Hope this provokes some fruitful ideas for you. Glad to philosophize abou= t > ZFS setups with ya=E2=80=99ll :) > > -chad > _______________________________________________ > freebsd-fs@freebsd.org mailing list > https://lists.freebsd.org/mailman/listinfo/freebsd-fs > To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org" From owner-freebsd-virtualization@freebsd.org Fri Aug 28 03:28:15 2015 Return-Path: Delivered-To: freebsd-virtualization@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 365619C4F22 for ; Fri, 28 Aug 2015 03:28:15 +0000 (UTC) (envelope-from grehan@freebsd.org) Received: from iredmail.onthenet.com.au (iredmail.onthenet.com.au [203.13.68.150]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id EABAD189 for ; Fri, 28 Aug 2015 03:28:14 +0000 (UTC) (envelope-from grehan@freebsd.org) Received: from localhost (iredmail.onthenet.com.au [127.0.0.1]) by iredmail.onthenet.com.au (Postfix) with ESMTP id 95073281575 for ; Fri, 28 Aug 2015 13:28:05 +1000 (AEST) X-Amavis-Modified: Mail body modified (using disclaimer) - iredmail.onthenet.com.au Received: from iredmail.onthenet.com.au ([127.0.0.1]) by localhost (iredmail.onthenet.com.au [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 9LFCuZjRFcmu for ; Fri, 28 Aug 2015 13:28:05 +1000 (AEST) Received: from Peters-MacBook-Pro.local (unknown [64.245.0.210]) by iredmail.onthenet.com.au (Postfix) with ESMTPSA id 618EF280F8B; Fri, 28 Aug 2015 13:28:02 +1000 (AEST) Subject: Re: passthru requires guest memory to be wired To: Craig Rodrigues References: <55DD0876.5070207@freebsd.org> Cc: "freebsd-virtualization@freebsd.org" From: Peter Grehan Message-ID: <55DFD53F.2080608@freebsd.org> Date: Thu, 27 Aug 2015 20:27:59 -0700 User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.10; rv:38.0) Gecko/20100101 Thunderbird/38.1.0 MIME-Version: 1.0 In-Reply-To: Content-Type: text/plain; charset=utf-8; format=flowed Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-virtualization@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: "Discussion of various virtualization techniques FreeBSD supports." 
List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 28 Aug 2015 03:28:15 -0000 Hi Craig, > Do you have a patch for this that I can use? I need this to restore > functionality in my bhyve VM environment that I am working with. Can you give this a try ? http://people.freebsd.org/~grehan/grub-bhyve-S.diff later, Peter. From owner-freebsd-virtualization@freebsd.org Fri Aug 28 16:27:32 2015 Return-Path: Delivered-To: freebsd-virtualization@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 3507D9C581A; Fri, 28 Aug 2015 16:27:32 +0000 (UTC) (envelope-from milios@ccsys.com) Received: from cargobay.net (cargobay.net [198.178.123.147]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id F21191A9A; Fri, 28 Aug 2015 16:27:31 +0000 (UTC) (envelope-from milios@ccsys.com) Received: from [192.168.0.2] (cblmdm72-240-160-19.buckeyecom.net [72.240.160.19]) by cargobay.net (Postfix) with ESMTPSA id B594ADE8; Fri, 28 Aug 2015 16:23:36 +0000 (UTC) Mime-Version: 1.0 (Mac OS X Mail 8.2 \(2104\)) Subject: Re: Options for zfs inside a VM backed by zfs on the host From: "Chad J. Milios" In-Reply-To: Date: Fri, 28 Aug 2015 12:27:22 -0400 Cc: freebsd-fs@freebsd.org, "freebsd-virtualization@freebsd.org" Message-Id: <8DB91B3A-44DC-4650-9E90-56F7DE2ABC42@ccsys.com> References: <20150827061044.GA10221@blazingdot.com> <20150827062015.GA10272@blazingdot.com> <1a6745e27d184bb99eca7fdbdc90c8b5@SERVER.ad.usd-group.com> <55DF46F5.4070406@redbarn.org> <453A5A6F-E347-41AE-8CBC-9E0F4DA49D38@ccsys.com> To: Tenzin Lhakhang X-Mailer: Apple Mail (2.2104) Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.20 X-BeenThere: freebsd-virtualization@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: "Discussion of various virtualization techniques FreeBSD supports." List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 28 Aug 2015 16:27:32 -0000 > On Aug 27, 2015, at 7:47 PM, Tenzin Lhakhang = wrote: >=20 > On Thu, Aug 27, 2015 at 3:53 PM, Chad J. Milios > wrote: >=20 > Whether we are talking ffs, ntfs or zpool atop zvol, unfortunately = there are really no simple answers. You must consider your use case, the = host and vm hardware/software configuration, perform meaningful = benchmarks and, if you care about data integrity, thorough tests of the = likely failure modes (all far more easily said than done). I=E2=80=99m = curious to hear more about your use case(s) and setups so as to offer = better insight on what alternatives may make more/less sense for you. = Performance needs? Are you striving for lower individual latency or = higher combined throughput? How critical are integrity and availability? = How do you prefer your backup routine? Do you handle that in guest or = host? Want features like dedup and/or L2ARC up in the mix? (Then = everything bears reconsideration, just about triple your research and = testing efforts.) >=20 > Sorry, I=E2=80=99m really not trying to scare anyone away from ZFS. It = is awesome and capable of providing amazing solutions with very reliable = and sensible behavior if handled with due respect, fear, monitoring and = upkeep. 
:) >=20 > There are cases to be made for caching [meta-]data in the child, in = the parent, checksumming in the child/parent/both, compressing in the = child/parent. I believe `gstat` along with your custom-made benchmark or = test load will greatly help guide you. >=20 > ZFS on ZFS seems to be a hardly studied, seldom reported, never = documented, tedious exercise. Prepare for accelerated greying and = balding of your hair. The parent's volblocksize, child's ashift, = alignment, interactions involving raidz stripes (if used) can lead to = problems from slightly decreased performance and storage efficiency to = pathological write amplification within ZFS, performance and = responsiveness crashing and sinking to the bottom of the ocean. Some = datasets can become veritable black holes to vfs system calls. You may = see ZFS reporting elusive errors, deadlocking or panicing in the child = or parent altogether. With diligence though, stable and performant = setups can be discovered for many production situations. >=20 > For example, for a zpool (whether used by a VM or not, locally, thru = iscsi, ggate[cd], or whatever) atop zvol which sits on parent zpool with = no redundancy, I would set primarycache=3Dmetadata checksum=3Doff = compression=3Doff for the zvol(s) on the host(s) and for the most part = just use the same zpool settings and sysctl tunings in the VM (or child = zpool, whatever role it may conduct) that i would otherwise use on bare = cpu and bare drives (defaults + compression=3Dlz4 atime=3Doff). However, = that simple case is likely not yours. >=20 > With ufs/ffs/ntfs/ext4 and most other filesystems atop a zvol i use = checksums on the parent zvol, and compression too if the child doesn=E2=80= =99t support it (as ntfs can), but still caching only metadata on the = host and letting the child vm/fs cache real data. >=20 > My use case involves charging customers for their memory use so = admittedly that is one motivating factor, LOL. Plus, i certainly don=E2=80= =99t want one rude VM marching through host ARC unfairly evacuating and = starving the other polite neighbors. >=20 > VM=E2=80=99s swap space becomes another consideration and I treat it = like any other =E2=80=98dumb=E2=80=99 filesystem with compression and = checksumming done by the parent but recent versions of many operating = systems may be paging out only already compressed data, so investigate = your guest OS. I=E2=80=99ve found lz4=E2=80=99s claims of an = almost-no-penalty early-abort to be vastly overstated when dealing with = zvols, small block sizes and high throughput so if you can be certain = you=E2=80=99ll be dealing with only compressed data then turn it off. = For the virtual memory pagers in most current-day OS=E2=80=99s though = set compression on the swap=E2=80=99s backing zvol to lz4. >=20 > Another factor is the ZIL. One VM can hoard your synchronous write = performance. Solutions are beyond the scope of this already-too-long = email :) but I=E2=80=99d be happy to elaborate if queried. >=20 > And then there=E2=80=99s always netbooting guests from NFS mounts = served by the host and giving the guest no virtual disks, don=E2=80=99t = forget to consider that option. >=20 > Hope this provokes some fruitful ideas for you. Glad to philosophize = about ZFS setups with ya=E2=80=99ll :) >=20 > -chad > That was a really awesome read! The idea of turning metadata on at = the backend zpool and then data on the VM was interesting, I will give = that a try. 
Please can you elaborate more on the ZILs and synchronous writes by VMs.. that seems like a great topic. > I am right now exploring the question: are SSD ZILs necessary in an all-SSD pool? and then the question of NVMe SSD ZILs on top of an all-SSD pool. My guess at the moment is that SSD ZILs are not necessary at all in an SSD pool during intensive IO. I've been told that ZILs are always there to help you, but when your pool aggregate IOPs is greater than the ZIL's, it doesn't seem to make sense.. Or is it the latency of writing to a single disk vs striping across your "fast" vdevs? > > Thanks, > Tenzin Well the ZIL (ZFS Intent Log) is basically an absolute necessity. Without it, a call to fsync() could take over 10 seconds on a system serving a relatively light load. HOWEVER, a source of confusion is the terminology people often throw around. See, the ZIL is basically a concept, a method, a procedure. It is not a device. A 'SLOG' is what most people mean when they say ZIL. That is a Separate Log device. (ZFS 'log' vdev type; documented in man 8 zpool.) When you aren't using a SLOG device, your ZIL is transparently allocated by ZFS, roughly a little chunk of space reserved near the "middle" (at least ZFS attempts to locate it there physically, but on SSDs or SMR HDs there's no way to and no point to) of the main pool (unless you've gone out of your way to deliberately disable the ZIL entirely). The other confusion often surrounding the ZIL is when it gets used. Most writes (in the world) would bypass the ZIL (built-in or SLOG) entirely anyway because they are asynchronous writes, not synchronous ones. Only the latter are candidates to clog a ZIL bottleneck. You will need to consider your workload specifically to know whether a SLOG will help, and if so, how much SLOG performance is required to not put a damper on the pool's overall throughput capability. Conversely, you want to know how much SLOG performance is overkill, because NVMe and SLC SSDs are freaking expensive. Now for many on the list this is going to be some elementary information so i apologize, but i come across this question all the time: sync vs async writes. i'm sure there are many who might find this informative, and with ZFS the difference becomes more profound and important than in most other filesystems. See, ZFS is always bundling up batches of writes into transaction groups (TXGs). Without extraneous detail it can be understood that basically these happen every 5 seconds (sysctl vfs.zfs.txg.timeout). So picture ZFS typically has two TXGs it's worried about at any given time: one is being filled in memory while the previous one is being flushed out to physical disk. So when you write something asynchronously the operating system is going to say 'aye aye captain' and send you along your merry way very quickly, but if you lose power or crash and then reboot, ZFS only guarantees you a CONSISTENT state, not your most recent state. Your pool may come back online and you've lost 5-15 seconds worth of work. For your typical desktop or workstation workload that's probably no big deal. You lost 15 seconds of effort, you repeat it, and continue about your business.
However, imagine a mail server that received many many emails in just that short time and has told all the senders of all those messages "got it, thumbs up". You cannot redact those assurances you handed out. You have no idea who to contact to ask to repeat themselves. Even if you did, it's likely the sending mail servers have long since forgotten about those particular messages. So, with each message you receive, after you tell the operating system to write the data you issue a call to fsync(new_message), and only after that call returns do you give the sender the thumbs up to forget the message and leave it in your capable hands to deliver it to its destination. Thanks to the ZIL, fsync() will typically return in milliseconds or less instead of the many seconds it could take for that write in a bundled TXG to end up physically saved. In an ideal world, the ZIL gets written to and never read again, data just becoming stale and overwritten. (The data stays in the in-memory TXG so it's redundant in the ZIL once that TXG completes flushing.) The email server is the typical example of the use of fsync, but there are thousands of others. Typically, applications using central databases are written in a simplistic way to assume the database is trustworthy, and fsync is how the database attempts to fulfill that requirement. To complicate matters, consider VMs, particularly uncooperative, impolite, selfish VMs. Synchronous write iops are a particularly scarce and expensive resource which hasn't been increasing as quickly and cheaply as, say, io bandwidth, cpu speeds, or memory capacities. To make it worse, the numbers for iops most SSD makers advertise on their so-called spec sheets are untrustworthy; they have no standard benchmark or enforcement ("The PS in IOPS stands for Per Second, so we ran our benchmark on a fresh drive for one second and got 100,000 IOPS." Well, good for you, that is useless to me. Tell me what you can sustain all day long a year down the road.) and they're seldom accountable to anybody not buying 10,000 units. All this consolidation of VMs/containers/jails can really stress the sync i/o capability of even the biggest, baddest servers. And FreeBSD, in all its glory, is not yet very well suited to the problem of multi-tenancy. (It's great if all jails and VMs on a server are owned and controlled by one stakeholder who can coordinate their friendly coexistence.) My firm develops and supports a proprietary shim into ZFS and jails for enforcing the polite sharing of bandwidth, total iops and sync iops, which can be applied to groups whose membership is defined at the granularity of arbitrary ZFS datasets. So there, that's my shameless plug, LOL. However, there are brighter minds than I working on this problem, and I'm hoping to maybe some time either participate in a more general development of such facilities with broader application into mainline FreeBSD, or to perhaps open source my own work eventually. (I guess I'm being more shy than selfish with it, LOL.)
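(To make the ZIL discussion concrete, a small sketch follows. The pool, dataset and device names are hypothetical, and whether a SLOG or any of these settings helps depends entirely on the workload, as noted above.)

    # the ~5 second TXG interval mentioned above
    sysctl vfs.zfs.txg.timeout

    # add a separate log device (SLOG), the 'log' vdev type from zpool(8)
    zpool add tank log nvd0
    # or mirrored, if losing in-flight sync writes on a SLOG failure matters:
    # zpool add tank log mirror nvd0 nvd1

    # per-dataset control over which writes go through the ZIL
    zfs get sync tank/vols/guest0
    zfs set sync=standard tank/vols/guest0  # default: only fsync()/O_SYNC writes
    zfs set sync=always tank/vols/guest0    # treat every write as synchronous
    # sync=disabled also exists, but it gives up exactly the guarantee the
    # mail-server example above relies on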
Hope that=E2=80=99s food for thought for some of you -chad= From owner-freebsd-virtualization@freebsd.org Sat Aug 29 23:24:28 2015 Return-Path: Delivered-To: freebsd-virtualization@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 0298E9C60FF for ; Sat, 29 Aug 2015 23:24:28 +0000 (UTC) (envelope-from crodr001@gmail.com) Received: from mail-yk0-x22e.google.com (mail-yk0-x22e.google.com [IPv6:2607:f8b0:4002:c07::22e]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id C5E88304; Sat, 29 Aug 2015 23:24:27 +0000 (UTC) (envelope-from crodr001@gmail.com) Received: by ykdz80 with SMTP id z80so47540611ykd.0; Sat, 29 Aug 2015 16:24:26 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:sender:in-reply-to:references:date:message-id:subject :from:to:cc:content-type; bh=PfRIqaoqNgOw1QpDmC6uzR1UzlUs/RPDMynjGhG+yac=; b=BINY8iccuFJltxKKOOJIekTTpEnDgC0BQKyBmzCXnqnqyW45WqWyAN4rEw+n9DDZ7Q FLMoS+yIex2EzfBdO9iOvclsJSzanMpiLK9IUSItTZ9m5C+vveB7MiZCV1r8mk69zwt3 BRHBgKJpuSafCM/ztyPuhGwbHggQMI1oImxLZWUBa01Cf4pecm0cJtL2K0cwqTTEZtcC 4o/8RjlNgRdeimyRbceeJqJkmEdCDETL0u54LDWIrRZpIlU5KmJPElwv2nR/8ert1udn rZ94hPVfeu6ubf5TB5kHyXsdKZyZ2GGffvjBb+ZWEYWUymOdvmFGruN8cPXW4qadka1Q Ld+Q== MIME-Version: 1.0 X-Received: by 10.129.76.74 with SMTP id z71mr14002462ywa.93.1440890666550; Sat, 29 Aug 2015 16:24:26 -0700 (PDT) Sender: crodr001@gmail.com Received: by 10.37.99.3 with HTTP; Sat, 29 Aug 2015 16:24:26 -0700 (PDT) In-Reply-To: <55DFD53F.2080608@freebsd.org> References: <55DD0876.5070207@freebsd.org> <55DFD53F.2080608@freebsd.org> Date: Sat, 29 Aug 2015 16:24:26 -0700 X-Google-Sender-Auth: BtaIdS5ZGIDUMzUiPcfjg2LqzqE Message-ID: Subject: Re: passthru requires guest memory to be wired From: Craig Rodrigues To: Peter Grehan Cc: "freebsd-virtualization@freebsd.org" Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.20 X-BeenThere: freebsd-virtualization@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: "Discussion of various virtualization techniques FreeBSD supports." List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 29 Aug 2015 23:24:28 -0000 On Thu, Aug 27, 2015 at 8:27 PM, Peter Grehan wrote: > http://people.freebsd.org/~grehan/grub-bhyve-S.diff > I confirmed, that patch works. Thanks! -- Craig
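(A closing note on the passthru thread: with the patch above, grub-bhyve is expected to accept the same -S wire-guest-memory flag that bhyveload and bhyve already need for passthru. The exact invocation below is an assumption for illustration, not taken from the thread; the device map, memory size, root and VM name are all made up.)

    grub-bhyve -S -m /vms/guest0/device.map -M 2048M -r hd0,msdos1 guest0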