Date:      Tue, 04 Nov 2014 17:48:59 +0000
From:      Steven Hartland <killing@multiplay.co.uk>
To:        George Kola <georgekola@gmail.com>
Cc:        freebsd-stable@freebsd.org
Subject:   Re: ARC size limit
Message-ID:  <5459118B.7090904@multiplay.co.uk>
In-Reply-To: <F6BFA688-9D13-4DD9-8B8A-22F4ACFD7622@gmail.com>
References:  <3B70EA0C-0976-49D6-8418-6B5D22ED7E65@gmail.com> <54589722.3080803@multiplay.co.uk> <F6BFA688-9D13-4DD9-8B8A-22F4ACFD7622@gmail.com>


Try the attached; it's the patch we'll be using when we roll out 10.1.

1. Apply with: cd /usr/src && patch < zfs-arc-refactor.patch
2. Rebuild your kernel
3. Install the new kernel
4. Reboot
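Spelled out, the full sequence might look like the following (a sketch only: it assumes sources in /usr/src matching your running release, a stock GENERIC kernel configuration, and the attachment saved under /root; adjust KERNCONF and paths to your setup):

```shell
# Sketch, not a verified recipe: assumes /usr/src holds matching sources,
# a GENERIC kernel config, and the attachment saved as
# /root/zfs-arc-refactor.patch.
cd /usr/src
patch < /root/zfs-arc-refactor.patch   # 1. apply the attached patch
make buildkernel KERNCONF=GENERIC      # 2. rebuild the kernel
make installkernel KERNCONF=GENERIC    # 3. install the new kernel
shutdown -r now                        # 4. reboot into it
```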

     Regards
     Steve
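One way to confirm the patched kernel is running after the reboot: the patch adds a vfs.zfs.arc_free_target sysctl, initialised from the pagedaemon wakeup threshold, so it should report a non-zero page count (the exact value is machine-dependent; this is a hypothetical check, not part of the original instructions):

```shell
# The new tunable added by the patch; non-zero once the patched
# kernel is running. The value depends on installed RAM.
sysctl vfs.zfs.arc_free_target
```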
On 04/11/2014 16:48, George Kola wrote:
> Thanks, Steven. We are running 10.1-RC3. Is the best way to get this patch to just build a 10-STABLE kernel? We are new to running FreeBSD, hence the newbie question.
>
>
> Thanks,
> George
>
>
>
>> On Nov 4, 2014, at 1:06 AM, Steven Hartland <killing@multiplay.co.uk> wrote:
>>
>> You need https://svnweb.freebsd.org/base?view=revision&revision=272875
>> On 04/11/2014 06:29, George Kola wrote:
>>> Hi All,
>>>         This is my first post to freebsd-stable, fresh off Meet BSD California 2014. We are switching our entire production to FreeBSD. Our storage servers have 256 GB of RAM, 4 TB of SSD, and 40 TB of spinning disks. We are running ZFS root, with the SSD configured as L2ARC, on FreeBSD 10.1-RC3.
>>>         I am finding that on all our machines the ARC is somehow limited to < 64 GB of memory, while we have a huge amount of inactive memory (180 GB). The surprising thing is that the ARC hits almost the same limit (< 64 GB) on all of our storage boxes, and it is not growing even though the L2ARC hit rate shows there would be an advantage in growing it.
>>>         Any help/pointers are appreciated.
>>>         What I am trying to do is tune ZFS for our workload; we are hoping to get a high hit rate.
>>>         Thanks to Justin Gibbs and Allan Jude for the initial pointers and help. They suggested posting to the mailing list to get further help.
>>>
>>>         I have pasted top output and zfs-mon output below, and yes, UMA is enabled.
>>>
>>> Thanks,
>>> George
>>>         
>>>
>>> top
>>> last pid: 27458;  load averages:  3.30,  5.42,  5.34                                                                                                up 6+09:59:30  05:38:49
>>> 71 processes:  1 running, 70 sleeping
>>> CPU:  4.2% user,  0.0% nice,  4.6% system,  0.2% interrupt, 90.9% idle
>>> Mem: 11G Active, 181G Inact, 52G Wired, 1368M Cache, 4266M Free
>>> ARC: 47G Total, 1555M MFU, 41G MRU, 35M Anon, 3984M Header, 709M Other
>>> Swap: 64G Total, 2874M Used, 61G Free, 4% Inuse
>>>
>>>
>>>
>>> sysctl vfs.zfs.zio.use_uma
>>> vfs.zfs.zio.use_uma: 1
>>>
>>>
>>>
>>>
>>> zfs-mon -a output
>>>
>>> ZFS real-time cache activity monitor
>>> Seconds elapsed:  62
>>>
>>> Cache hits and misses:
>>>                                    1s    10s    60s    tot
>>>                       ARC hits:   124    126    103    101
>>>                     ARC misses:    35     46     29     28
>>>           ARC demand data hits:    55     90     61     61
>>>         ARC demand data misses:    20     32     18     17
>>>       ARC demand metadata hits:    69     36     42     40
>>>     ARC demand metadata misses:     9     13     10      9
>>>         ARC prefetch data hits:     0      0      0      0
>>>       ARC prefetch data misses:     6      1      1      1
>>>     ARC prefetch metadata hits:     0      0      0      0
>>>   ARC prefetch metadata misses:     0      0      0      0
>>>                     L2ARC hits:    16     28     14     14
>>>                   L2ARC misses:    19     18     15     14
>>>                    ZFETCH hits:   592   2842   2098   2047
>>>                  ZFETCH misses:   308   1326    507    494
>>>
>>> Cache efficiency percentage:
>>>                            10s    60s    tot
>>>                    ARC:  73.26  78.03  78.29
>>>        ARC demand data:  73.77  77.22  78.21
>>>    ARC demand metadata:  73.47  80.77  81.63
>>>      ARC prefetch data:   0.00   0.00   0.00
>>> ARC prefetch metadata:   0.00   0.00   0.00
>>>                  L2ARC:  60.87  48.28  50.00
>>>                 ZFETCH:  68.19  80.54  80.56
>>>
>>>
>>>
>>>
>>>
>>>
>>> _______________________________________________
>>> freebsd-stable@freebsd.org mailing list
>>> http://lists.freebsd.org/mailman/listinfo/freebsd-stable
>>> To unsubscribe, send any mail to "freebsd-stable-unsubscribe@freebsd.org"


[Attachment: zfs-arc-refactor.patch]

Index: sys/cddl/compat/opensolaris/kern/opensolaris_kmem.c
===================================================================
--- sys/cddl/compat/opensolaris/kern/opensolaris_kmem.c	(revision 274056)
+++ sys/cddl/compat/opensolaris/kern/opensolaris_kmem.c	(working copy)
@@ -133,13 +133,6 @@ kmem_size(void)
 	return (kmem_size_val);
 }
 
-uint64_t
-kmem_used(void)
-{
-
-	return (vmem_size(kmem_arena, VMEM_ALLOC));
-}
-
 static int
 kmem_std_constructor(void *mem, int size __unused, void *private, int flags)
 {
Index: sys/cddl/compat/opensolaris/sys/kmem.h
===================================================================
--- sys/cddl/compat/opensolaris/sys/kmem.h	(revision 274056)
+++ sys/cddl/compat/opensolaris/sys/kmem.h	(working copy)
@@ -66,7 +66,6 @@ typedef struct kmem_cache {
 void *zfs_kmem_alloc(size_t size, int kmflags);
 void zfs_kmem_free(void *buf, size_t size);
 uint64_t kmem_size(void);
-uint64_t kmem_used(void);
 kmem_cache_t *kmem_cache_create(char *name, size_t bufsize, size_t align,
     int (*constructor)(void *, void *, int), void (*destructor)(void *, void *),
     void (*reclaim)(void *) __unused, void *private, vmem_t *vmp, int cflags);
@@ -78,6 +77,9 @@ void kmem_reap(void);
 int kmem_debugging(void);
 void *calloc(size_t n, size_t s);
 
+#define	freemem				(cnt.v_free_count + cnt.v_cache_count)
+#define	minfree				cnt.v_free_min
+#define	heap_arena			kmem_arena
 #define	kmem_alloc(size, kmflags)	zfs_kmem_alloc((size), (kmflags))
 #define	kmem_zalloc(size, kmflags)	zfs_kmem_alloc((size), (kmflags) | M_ZERO)
 #define	kmem_free(buf, size)		zfs_kmem_free((buf), (size))
Index: sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c
===================================================================
--- sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c	(revision 274056)
+++ sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c	(working copy)
@@ -138,6 +138,7 @@
 #include <sys/sdt.h>
 
 #include <vm/vm_pageout.h>
+#include <machine/vmparam.h>
 
 #ifdef illumos
 #ifndef _KERNEL
@@ -193,9 +194,6 @@ extern int zfs_prefetch_disable;
  */
 static boolean_t arc_warm;
 
-/*
- * These tunables are for performance analysis.
- */
 uint64_t zfs_arc_max;
 uint64_t zfs_arc_min;
 uint64_t zfs_arc_meta_limit = 0;
@@ -204,7 +202,20 @@ int zfs_arc_shrink_shift = 0;
 int zfs_arc_p_min_shift = 0;
 int zfs_disable_dup_eviction = 0;
 uint64_t zfs_arc_average_blocksize = 8 * 1024; /* 8KB */
+u_int zfs_arc_free_target = 0;
 
+static int sysctl_vfs_zfs_arc_free_target(SYSCTL_HANDLER_ARGS);
+
+#ifdef _KERNEL
+static void
+arc_free_target_init(void *unused __unused)
+{
+
+	zfs_arc_free_target = vm_pageout_wakeup_thresh;
+}
+SYSINIT(arc_free_target_init, SI_SUB_KTHREAD_PAGE, SI_ORDER_ANY,
+    arc_free_target_init, NULL);
+
 TUNABLE_QUAD("vfs.zfs.arc_max", &zfs_arc_max);
 TUNABLE_QUAD("vfs.zfs.arc_min", &zfs_arc_min);
 TUNABLE_QUAD("vfs.zfs.arc_meta_limit", &zfs_arc_meta_limit);
@@ -217,7 +228,37 @@ SYSCTL_UQUAD(_vfs_zfs, OID_AUTO, arc_min, CTLFLAG_
 SYSCTL_UQUAD(_vfs_zfs, OID_AUTO, arc_average_blocksize, CTLFLAG_RDTUN,
     &zfs_arc_average_blocksize, 0,
     "ARC average blocksize");
+/*
+ * We don't have a tunable for arc_free_target due to the dependency on
+ * pagedaemon initialisation.
+ */
+SYSCTL_PROC(_vfs_zfs, OID_AUTO, arc_free_target,
+    CTLTYPE_UINT | CTLFLAG_MPSAFE | CTLFLAG_RW, 0, sizeof(u_int),
+    sysctl_vfs_zfs_arc_free_target, "IU",
+    "Desired number of free pages below which ARC triggers reclaim");
 
+static int
+sysctl_vfs_zfs_arc_free_target(SYSCTL_HANDLER_ARGS)
+{
+	u_int val;
+	int err;
+
+	val = zfs_arc_free_target;
+	err = sysctl_handle_int(oidp, &val, 0, req);
+	if (err != 0 || req->newptr == NULL)
+		return (err);
+
+	if (val < minfree)
+		return (EINVAL);
+	if (val > cnt.v_page_count)
+		return (EINVAL);
+
+	zfs_arc_free_target = val;
+
+	return (0);
+}
+#endif
+
 /*
  * Note that buffers can be in one of 6 states:
  *	ARC_anon	- anonymous (discussed below)
@@ -2421,9 +2462,12 @@ arc_flush(spa_t *spa)
 void
 arc_shrink(void)
 {
+
 	if (arc_c > arc_c_min) {
 		uint64_t to_free;
 
+		DTRACE_PROBE4(arc__shrink, uint64_t, arc_c, uint64_t,
+			arc_c_min, uint64_t, arc_p, uint64_t, to_free);
 #ifdef _KERNEL
 		to_free = arc_c >> arc_shrink_shift;
 #else
@@ -2439,12 +2483,19 @@ arc_shrink(void)
 			arc_c = MAX(arc_size, arc_c_min);
 		if (arc_p > arc_c)
 			arc_p = (arc_c >> 1);
+
+		DTRACE_PROBE2(arc__shrunk, uint64_t, arc_c, uint64_t,
+			arc_p);
+
 		ASSERT(arc_c >= arc_c_min);
 		ASSERT((int64_t)arc_p >= 0);
 	}
 
-	if (arc_size > arc_c)
+	if (arc_size > arc_c) {
+		DTRACE_PROBE2(arc__shrink_adjust, uint64_t, arc_size,
+			uint64_t, arc_c);
 		arc_adjust();
+	}
 }
 
 static int needfree = 0;
@@ -2455,15 +2506,20 @@ arc_reclaim_needed(void)
 
 #ifdef _KERNEL
 
-	if (needfree)
+	if (needfree) {
+		DTRACE_PROBE(arc__reclaim_needfree);
 		return (1);
+	}
 
 	/*
 	 * Cooperate with pagedaemon when it's time for it to scan
 	 * and reclaim some pages.
 	 */
-	if (vm_paging_needed())
+	if (freemem < zfs_arc_free_target) {
+		DTRACE_PROBE2(arc__reclaim_freemem, uint64_t,
+		    freemem, uint64_t, zfs_arc_free_target);
 		return (1);
+	}
 
 #ifdef sun
 	/*
@@ -2491,8 +2547,19 @@ arc_reclaim_needed(void)
 	if (availrmem < swapfs_minfree + swapfs_reserve + extra)
 		return (1);
 
-#if defined(__i386)
 	/*
+	 * Check that we have enough availrmem that memory locking (e.g., via
+	 * mlock(3C) or memcntl(2)) can still succeed.  (pages_pp_maximum
+	 * stores the number of pages that cannot be locked; when availrmem
+	 * drops below pages_pp_maximum, page locking mechanisms such as
+	 * page_pp_lock() will fail.)
+	 */
+	if (availrmem <= pages_pp_maximum)
+		return (1);
+
+#endif	/* sun */
+#if defined(__i386) || !defined(UMA_MD_SMALL_ALLOC)
+	/*
 	 * If we're on an i386 platform, it's possible that we'll exhaust the
 	 * kernel heap space before we ever run out of available physical
 	 * memory.  Most checks of the size of the heap_area compare against
@@ -2503,19 +2570,35 @@ arc_reclaim_needed(void)
 	 * heap is allocated.  (Or, in the calculation, if less than 1/4th is
 	 * free)
 	 */
-	if (btop(vmem_size(heap_arena, VMEM_FREE)) <
-	    (btop(vmem_size(heap_arena, VMEM_FREE | VMEM_ALLOC)) >> 2))
+	if (vmem_size(heap_arena, VMEM_FREE) <
+	    (vmem_size(heap_arena, VMEM_FREE | VMEM_ALLOC) >> 2)) {
+		DTRACE_PROBE2(arc__reclaim_used, uint64_t,
+		    vmem_size(heap_arena, VMEM_FREE), uint64_t,
+		    (vmem_size(heap_arena, VMEM_FREE | VMEM_ALLOC)) >> 2);
 		return (1);
+	}
 #endif
-#else	/* !sun */
-	if (kmem_used() > (kmem_size() * 3) / 4)
+#ifdef sun
+	/*
+	 * If zio data pages are being allocated out of a separate heap segment,
+	 * then enforce that the size of available vmem for this arena remains
+	 * above about 1/16th free.
+	 *
+	 * Note: The 1/16th arena free requirement was put in place
+	 * to aggressively evict memory from the arc in order to avoid
+	 * memory fragmentation issues.
+	 */
+	if (zio_arena != NULL &&
+	    vmem_size(zio_arena, VMEM_FREE) <
+	    (vmem_size(zio_arena, VMEM_ALLOC) >> 4))
 		return (1);
 #endif	/* sun */
-
-#else
+#else	/* _KERNEL */
 	if (spa_get_random(100) == 0)
 		return (1);
-#endif
+#endif	/* _KERNEL */
+	DTRACE_PROBE(arc__reclaim_no);
+
 	return (0);
 }
 
@@ -2522,7 +2605,7 @@ arc_reclaim_needed(void)
 extern kmem_cache_t	*zio_buf_cache[];
 extern kmem_cache_t	*zio_data_buf_cache[];
 
-static void
+static void __noinline
 arc_kmem_reap_now(arc_reclaim_strategy_t strat)
 {
 	size_t			i;
@@ -2529,6 +2612,7 @@ arc_kmem_reap_now(arc_reclaim_strategy_t strat)
 	kmem_cache_t		*prev_cache = NULL;
 	kmem_cache_t		*prev_data_cache = NULL;
 
+	DTRACE_PROBE(arc__kmem_reap_start);
 #ifdef _KERNEL
 	if (arc_meta_used >= arc_meta_limit) {
 		/*
@@ -2564,6 +2648,16 @@ arc_kmem_reap_now(arc_reclaim_strategy_t strat)
 	}
 	kmem_cache_reap_now(buf_cache);
 	kmem_cache_reap_now(hdr_cache);
+
+#ifdef sun
+	/*
+	 * Ask the vmem arena to reclaim unused memory from its
+	 * quantum caches.
+	 */
+	if (zio_arena != NULL && strat == ARC_RECLAIM_AGGR)
+		vmem_qcache_reap(zio_arena);
+#endif
+	DTRACE_PROBE(arc__kmem_reap_end);
 }
 
 static void
@@ -2581,6 +2675,7 @@ arc_reclaim_thread(void *dummy __unused)
 
 			if (arc_no_grow) {
 				if (last_reclaim == ARC_RECLAIM_CONS) {
+					DTRACE_PROBE(arc__reclaim_aggr_no_grow);
 					last_reclaim = ARC_RECLAIM_AGGR;
 				} else {
 					last_reclaim = ARC_RECLAIM_CONS;
@@ -2588,6 +2683,7 @@ arc_reclaim_thread(void *dummy __unused)
 			} else {
 				arc_no_grow = TRUE;
 				last_reclaim = ARC_RECLAIM_AGGR;
+				DTRACE_PROBE(arc__reclaim_aggr);
 				membar_producer();
 			}
 
@@ -2692,6 +2788,7 @@ arc_adapt(int bytes, arc_state_t *state)
 	 * cache size, increment the target cache size
 	 */
 	if (arc_size > arc_c - (2ULL << SPA_MAXBLOCKSHIFT)) {
+		DTRACE_PROBE1(arc__inc_adapt, int, bytes);
 		atomic_add_64(&arc_c, (int64_t)bytes);
 		if (arc_c > arc_c_max)
 			arc_c = arc_c_max;
@@ -2713,20 +2810,6 @@ arc_evict_needed(arc_buf_contents_t type)
 	if (type == ARC_BUFC_METADATA && arc_meta_used >= arc_meta_limit)
 		return (1);
 
-#ifdef sun
-#ifdef _KERNEL
-	/*
-	 * If zio data pages are being allocated out of a separate heap segment,
-	 * then enforce that the size of available vmem for this area remains
-	 * above about 1/32nd free.
-	 */
-	if (type == ARC_BUFC_DATA && zio_arena != NULL &&
-	    vmem_size(zio_arena, VMEM_FREE) <
-	    (vmem_size(zio_arena, VMEM_ALLOC) >> 5))
-		return (1);
-#endif
-#endif	/* sun */
-
 	if (arc_reclaim_needed())
 		return (1);
 
@@ -3885,20 +3968,16 @@ static int
 arc_memory_throttle(uint64_t reserve, uint64_t txg)
 {
 #ifdef _KERNEL
-	uint64_t available_memory =
-	    ptoa((uintmax_t)cnt.v_free_count + cnt.v_cache_count);
+	uint64_t available_memory = ptob(freemem);
 	static uint64_t page_load = 0;
 	static uint64_t last_txg = 0;
 
-#ifdef sun
-#if defined(__i386)
+#if defined(__i386) || !defined(UMA_MD_SMALL_ALLOC)
 	available_memory =
-	    MIN(available_memory, vmem_size(heap_arena, VMEM_FREE));
+	    MIN(available_memory, ptob(vmem_size(heap_arena, VMEM_FREE)));
 #endif
-#endif	/* sun */
 
-	if (cnt.v_free_count + cnt.v_cache_count >
-	    (uint64_t)physmem * arc_lotsfree_percent / 100)
+	if (freemem > (uint64_t)physmem * arc_lotsfree_percent / 100)
 		return (0);
 
 	if (txg > last_txg) {
@@ -3911,7 +3990,7 @@ arc_memory_throttle(uint64_t reserve, uint64_t txg
 	 * continue to let page writes occur as quickly as possible.
 	 */
 	if (curproc == pageproc) {
-		if (page_load > available_memory / 4)
+		if (page_load > MAX(ptob(minfree), available_memory) / 4)
 			return (SET_ERROR(ERESTART));
 		/* Note: reserve is inflated, so we deflate */
 		page_load += reserve / 8;
@@ -3939,8 +4018,10 @@ arc_tempreserve_space(uint64_t reserve, uint64_t t
 	int error;
 	uint64_t anon_size;
 
-	if (reserve > arc_c/4 && !arc_no_grow)
+	if (reserve > arc_c/4 && !arc_no_grow) {
 		arc_c = MIN(arc_c_max, reserve * 4);
+		DTRACE_PROBE1(arc__set_reserve, uint64_t, arc_c);
+	}
 	if (reserve > arc_c)
 		return (SET_ERROR(ENOMEM));
 
@@ -3994,6 +4075,7 @@ arc_lowmem(void *arg __unused, int howto __unused)
 	mutex_enter(&arc_lowmem_lock);
 	mutex_enter(&arc_reclaim_thr_lock);
 	needfree = 1;
+	DTRACE_PROBE(arc__needfree);
 	cv_signal(&arc_reclaim_thr_cv);
 
 	/*
Index: sys/vm/vm_pageout.c
===================================================================
--- sys/vm/vm_pageout.c	(revision 274056)
+++ sys/vm/vm_pageout.c	(working copy)
@@ -76,6 +76,7 @@
 __FBSDID("$FreeBSD$");
 
 #include "opt_vm.h"
+#include "opt_kdtrace.h"
 #include <sys/param.h>
 #include <sys/systm.h>
 #include <sys/kernel.h>
@@ -89,6 +90,7 @@ __FBSDID("$FreeBSD$");
 #include <sys/racct.h>
 #include <sys/resourcevar.h>
 #include <sys/sched.h>
+#include <sys/sdt.h>
 #include <sys/signalvar.h>
 #include <sys/smp.h>
 #include <sys/vnode.h>
@@ -115,10 +117,14 @@ __FBSDID("$FreeBSD$");
 
 /* the kernel process "vm_pageout"*/
 static void vm_pageout(void);
+static void vm_pageout_init(void);
 static int vm_pageout_clean(vm_page_t);
 static void vm_pageout_scan(struct vm_domain *vmd, int pass);
 static void vm_pageout_mightbe_oom(struct vm_domain *vmd, int pass);
 
+SYSINIT(pagedaemon_init, SI_SUB_KTHREAD_PAGE, SI_ORDER_FIRST, vm_pageout_init,
+    NULL);
+
 struct proc *pageproc;
 
 static struct kproc_desc page_kp = {
@@ -126,9 +132,13 @@ static struct kproc_desc page_kp = {
 	vm_pageout,
 	&pageproc
 };
-SYSINIT(pagedaemon, SI_SUB_KTHREAD_PAGE, SI_ORDER_FIRST, kproc_start,
+SYSINIT(pagedaemon, SI_SUB_KTHREAD_PAGE, SI_ORDER_SECOND, kproc_start,
     &page_kp);
 
+SDT_PROVIDER_DEFINE(vm);
+SDT_PROBE_DEFINE(vm, , , vm__lowmem_cache);
+SDT_PROBE_DEFINE(vm, , , vm__lowmem_scan);
+
 #if !defined(NO_SWAPPING)
 /* the kernel process "vm_daemon"*/
 static void vm_daemon(void);
@@ -663,6 +673,7 @@ vm_pageout_grow_cache(int tries, vm_paddr_t low, v
 		 * may acquire locks and/or sleep, so they can only be invoked
 		 * when "tries" is greater than zero.
 		 */
+		SDT_PROBE0(vm, , , vm__lowmem_cache);
 		EVENTHANDLER_INVOKE(vm_lowmem, 0);
 
 		/*
@@ -925,6 +936,7 @@ vm_pageout_scan(struct vm_domain *vmd, int pass)
 		/*
 		 * Decrease registered cache sizes.
 		 */
+		SDT_PROBE0(vm, , , vm__lowmem_scan);
 		EVENTHANDLER_INVOKE(vm_lowmem, 0);
 		/*
 		 * We do this explicitly after the caches have been
@@ -1650,15 +1662,11 @@ vm_pageout_worker(void *arg)
 }
 
 /*
- *	vm_pageout is the high level pageout daemon.
+ *	vm_pageout_init initialises basic pageout daemon settings.
  */
 static void
-vm_pageout(void)
+vm_pageout_init(void)
 {
-#if MAXMEMDOM > 1
-	int error, i;
-#endif
-
 	/*
 	 * Initialize some paging parameters.
 	 */
@@ -1704,7 +1712,18 @@ static void
 	/* XXX does not really belong here */
 	if (vm_page_max_wired == 0)
 		vm_page_max_wired = cnt.v_free_count / 3;
+}
 
+/*
+ *     vm_pageout is the high level pageout daemon.
+ */
+static void
+vm_pageout(void)
+{
+#if MAXMEMDOM > 1
+	int error, i;
+#endif
+
 	swap_pager_swap_init();
 #if MAXMEMDOM > 1
 	for (i = 1; i < vm_ndomains; i++) {
Index: .
===================================================================
--- .	(revision 274056)
+++ .	(working copy)

Property changes on: .
___________________________________________________________________
Modified: svn:mergeinfo
   Merged /head:r270759,270861,272483
   Merged /stable/10:r272875



