Date:      Thu, 17 Dec 2015 01:31:26 +0000 (UTC)
From:      Mark Johnston <markj@FreeBSD.org>
To:        src-committers@freebsd.org, svn-src-user@freebsd.org
Subject:   svn commit: r292392 - user/alc/PQ_LAUNDRY/sys/vm
Message-ID:  <201512170131.tBH1VQB2055056@repo.freebsd.org>

Author: markj
Date: Thu Dec 17 01:31:26 2015
New Revision: 292392
URL: https://svnweb.freebsd.org/changeset/base/292392

Log:
  Weigh dirty and clean pages differently when scanning the active queue.
  
  During a page shortage, clean pages can be reclaimed much more quickly than
  dirty pages and thus provide more immediate utility to the system. Dirty
  pages must first be laundered and therefore cannot contribute towards the
  shortfall until after some I/O completes. This change modifies the active
  queue scan to have clean pages count more heavily towards a shortage than
  dirty pages. The inactive queue target is also scaled accordingly so that
  we scan for dirty pages more aggressively than clean pages, causing the
  laundry thread to start working sooner than it would otherwise, and
  improving its ability to cluster pages.
  
  The weight is set by the vm.act_scan_laundry_weight sysctl; clean and dirty
  pages are given an equal weight by setting this sysctl to 1.
  
  Reviewed by:	alc

Modified:
  user/alc/PQ_LAUNDRY/sys/vm/vm_pageout.c

Modified: user/alc/PQ_LAUNDRY/sys/vm/vm_pageout.c
==============================================================================
--- user/alc/PQ_LAUNDRY/sys/vm/vm_pageout.c	Thu Dec 17 01:16:33 2015	(r292391)
+++ user/alc/PQ_LAUNDRY/sys/vm/vm_pageout.c	Thu Dec 17 01:31:26 2015	(r292392)
@@ -226,6 +226,11 @@ SYSCTL_INT(_vm, OID_AUTO, pageout_oom_se
 	CTLFLAG_RW, &vm_pageout_oom_seq, 0,
 	"back-to-back calls to oom detector to start OOM");
 
+static int act_scan_laundry_weight = 3;
+SYSCTL_INT(_vm, OID_AUTO, act_scan_laundry_weight,
+	CTLFLAG_RW, &act_scan_laundry_weight, 0,
+	"weight given to clean vs. dirty pages in active queue scans");
+
 #define VM_PAGEOUT_PAGE_COUNT 16
 int vm_pageout_page_count = VM_PAGEOUT_PAGE_COUNT;
 
@@ -1494,10 +1499,19 @@ drop_page:
 	/*
 	 * Compute the number of pages we want to try to move from the
 	 * active queue to either the inactive or laundry queue.
+	 *
+	 * When scanning active pages, we make clean pages count more heavily
+	 * towards the page shortage than dirty pages.  This is because dirty
+	 * pages must be laundered before they can be reused and thus have less
+	 * utility when attempting to quickly alleviate a shortage.  However,
+	 * this weighting also causes the scan to deactivate dirty pages more
+	 * aggressively, improving the effectiveness of clustering and
+	 * ensuring that they can eventually be reused.
 	 */
 	page_shortage = vm_cnt.v_inactive_target - (vm_cnt.v_inactive_count +
-	    vm_cnt.v_laundry_count) + vm_paging_target() + deficit +
-	    addl_page_shortage;
+	    vm_cnt.v_laundry_count / act_scan_laundry_weight) +
+	    vm_paging_target() + deficit + addl_page_shortage;
+	page_shortage *= act_scan_laundry_weight;
 
 	pq = &vmd->vmd_pagequeues[PQ_ACTIVE];
 	vm_pagequeue_lock(pq);
@@ -1578,7 +1592,7 @@ drop_page:
 			m->act_count -= min(m->act_count, ACT_DECLINE);
 
 		/*
-		 * Move this page to the tail of the active or inactive
+		 * Move this page to the tail of the active, inactive or laundry
 		 * queue depending on usage.
 		 */
 		if (m->act_count == 0) {
@@ -1588,11 +1602,13 @@ drop_page:
 			if (m->object->ref_count != 0)
 				vm_page_test_dirty(m);
 #endif
-			if (m->dirty == 0)
+			if (m->dirty == 0) {
 				vm_page_deactivate(m);
-			else
+				page_shortage -= act_scan_laundry_weight;
+			} else {
 				vm_page_launder(m);
-			page_shortage--;
+				page_shortage--;
+			}
 		} else
 			vm_page_requeue_locked(m);
 		vm_page_unlock(m);


