Date:      Fri, 20 Jun 2014 17:15:43 +0200
From:      Roger Pau Monné <roger.pau@citrix.com>
To:        Konstantin Belousov <kostikbel@gmail.com>
Cc:        virtualization@FreeBSD.org, "freebsd-xen@freebsd.org" <freebsd-xen@freebsd.org>, bryanv@FreeBSD.org
Subject:   Re: FreeBSD and memory balloon drivers
Message-ID:  <53A4501F.4020201@citrix.com>
In-Reply-To: <20140620132816.GH3991@kib.kiev.ua>
References:  <53A40079.9000804@citrix.com> <20140620132816.GH3991@kib.kiev.ua>

On 20/06/14 15:28, Konstantin Belousov wrote:
> On Fri, Jun 20, 2014 at 11:35:53AM +0200, Roger Pau Monné wrote:
>> Hello,
>> 
>> I've been looking into the Xen balloon driver, because I've
>> experienced problems when ballooning memory down which AFAICT are
>> also present in the VirtIO balloon driver. The problem I've
>> experienced is that when ballooning memory down, we basically
>> allocate a bunch of memory as WIRED, to make sure nobody tries to
>> swap it to disk, since that would crash the kernel because the
>> memory is not populated. Due to this massive amount of memory
>> allocated as WIRED, user-space programs that try to use mlock
>> will fail because we hit the limit in vm.max_wired.
>> 
>> I'm not sure what's the best way to deal with this limitation.
>> Should vm.max_wired be changed from the balloon drivers when
>> ballooning down/up? Is there any way to remove the pages ballooned
>> down from the memory accounting of wired pages?
> 
> You could change the type of pages the balloon driver is
> allocating. Instead of wired pages, you may request unmanaged
> pages by passing a NULL object to vm_page_alloc().  This would
> also save the trie nodes used to manage the radix trie for the
> object.  There are still plinks or listq to keep track of the
> allocated pages.

Thanks for the info. The following patch fixes the use of WIRED
pages in both the Xen and VirtIO balloon drivers; could someone
please test the VirtIO side?

Roger.


[Attachment: 0001-xen-virtio-fix-balloon-drivers-to-not-mark-pages-as-.patch]

From 2ed0f82b16753c96e96acc1e26a75a0fd2ee7d34 Mon Sep 17 00:00:00 2001
From: Roger Pau Monne <roger.pau@citrix.com>
Date: Fri, 20 Jun 2014 16:34:31 +0200
Subject: [PATCH] xen/virtio: fix balloon drivers to not mark pages as WIRED

Prevent the Xen and VirtIO balloon drivers from marking pages as
wired. This stops them from inflating the system wired page count,
which could cause mlock to fail once the limit in vm.max_wired is
hit.

Also, in the Xen case make sure pages are zeroed before giving them
back to the hypervisor, or else we might be leaking data.

Sponsored by: Citrix Systems R&D
Reviewed by: xxx
Approved by: xxx

dev/virtio/balloon/virtio_balloon.c:
 - Don't allocate pages with VM_ALLOC_WIRED.

dev/xen/balloon/balloon.c:
 - Don't allocate pages with VM_ALLOC_WIRED.
 - Make sure pages are zeroed before giving them back to the
   hypervisor.
---
 sys/dev/virtio/balloon/virtio_balloon.c |    4 +---
 sys/dev/xen/balloon/balloon.c           |   13 ++++++++++---
 2 files changed, 11 insertions(+), 6 deletions(-)

diff --git a/sys/dev/virtio/balloon/virtio_balloon.c b/sys/dev/virtio/balloon/virtio_balloon.c
index d540099..6d00ef3 100644
--- a/sys/dev/virtio/balloon/virtio_balloon.c
+++ b/sys/dev/virtio/balloon/virtio_balloon.c
@@ -438,8 +438,7 @@ vtballoon_alloc_page(struct vtballoon_softc *sc)
 {
 	vm_page_t m;
 
-	m = vm_page_alloc(NULL, 0, VM_ALLOC_NORMAL | VM_ALLOC_WIRED |
-	    VM_ALLOC_NOOBJ);
+	m = vm_page_alloc(NULL, 0, VM_ALLOC_NORMAL | VM_ALLOC_NOOBJ);
 	if (m != NULL)
 		sc->vtballoon_current_npages++;
 
@@ -450,7 +449,6 @@ static void
 vtballoon_free_page(struct vtballoon_softc *sc, vm_page_t m)
 {
 
-	vm_page_unwire(m, PQ_INACTIVE);
 	vm_page_free(m);
 	sc->vtballoon_current_npages--;
 }
diff --git a/sys/dev/xen/balloon/balloon.c b/sys/dev/xen/balloon/balloon.c
index fa56c86..a7ca1e4 100644
--- a/sys/dev/xen/balloon/balloon.c
+++ b/sys/dev/xen/balloon/balloon.c
@@ -255,7 +255,6 @@ increase_reservation(unsigned long nr_pages)
 
 		set_phys_to_machine(pfn, frame_list[i]);
 
-		vm_page_unwire(page, PQ_INACTIVE);
 		vm_page_free(page);
 	}
 
@@ -286,18 +285,26 @@ decrease_reservation(unsigned long nr_pages)
 	for (i = 0; i < nr_pages; i++) {
 		if ((page = vm_page_alloc(NULL, 0, 
 			    VM_ALLOC_NORMAL | VM_ALLOC_NOOBJ | 
-			    VM_ALLOC_WIRED | VM_ALLOC_ZERO)) == NULL) {
+			    VM_ALLOC_ZERO)) == NULL) {
 			nr_pages = i;
 			need_sleep = 1;
 			break;
 		}
 
+		if ((page->flags & PG_ZERO) == 0) {
+			/*
+			 * Zero the page, or else we might be leaking
+			 * important data to other domains on the same
+			 * host.
+			 */
+			pmap_zero_page(page);
+		}
+
 		pfn = (VM_PAGE_TO_PHYS(page) >> PAGE_SHIFT);
 		frame_list[i] = PFNTOMFN(pfn);
 
 		set_phys_to_machine(pfn, INVALID_P2M_ENTRY);
 		if (balloon_append(page) != 0) {
-			vm_page_unwire(page, PQ_INACTIVE);
 			vm_page_free(page);
 
 			nr_pages = i;
-- 
1.7.7.5 (Apple Git-26)




