From owner-svn-src-user@freebsd.org  Thu Feb  8 07:52:34 2018
From: Jeff Roberson <jeff@FreeBSD.org>
Date: Thu, 8 Feb 2018 07:52:31 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-user@freebsd.org
Subject: svn commit: r329014 - in user/jeff/numa/sys: amd64/amd64 arm/arm
 arm64/arm64 compat/linprocfs compat/linux i386/i386 kern mips/mips
 powerpc/booke powerpc/powerpc riscv/riscv sparc64/sparc64 sys vm
Message-Id: <201802080752.w187qVHJ051135@repo.freebsd.org>

Author: jeff
Date: Thu Feb  8 07:52:30 2018
New Revision: 329014
URL: https://svnweb.freebsd.org/changeset/base/329014

Log:
  Use a per-cpu counter for v_wire_count

Modified:
  user/jeff/numa/sys/amd64/amd64/efirt_machdep.c
  user/jeff/numa/sys/amd64/amd64/pmap.c
  user/jeff/numa/sys/amd64/amd64/uma_machdep.c
  user/jeff/numa/sys/arm/arm/pmap-v6.c
  user/jeff/numa/sys/arm64/arm64/efirt_machdep.c
  user/jeff/numa/sys/arm64/arm64/pmap.c
  user/jeff/numa/sys/arm64/arm64/uma_machdep.c
  user/jeff/numa/sys/compat/linprocfs/linprocfs.c
  user/jeff/numa/sys/compat/linux/linux_misc.c
  user/jeff/numa/sys/i386/i386/pmap.c
  user/jeff/numa/sys/kern/kern_mib.c
  user/jeff/numa/sys/kern/subr_pcpu.c
  user/jeff/numa/sys/kern/vfs_bio.c
  user/jeff/numa/sys/mips/mips/pmap.c
  user/jeff/numa/sys/mips/mips/uma_machdep.c
  user/jeff/numa/sys/powerpc/booke/pmap.c
  user/jeff/numa/sys/powerpc/powerpc/uma_machdep.c
  user/jeff/numa/sys/riscv/riscv/pmap.c
  user/jeff/numa/sys/sparc64/sparc64/pmap.c
  user/jeff/numa/sys/sparc64/sparc64/vm_machdep.c
  user/jeff/numa/sys/sys/pmc.h
  user/jeff/numa/sys/sys/vmmeter.h
  user/jeff/numa/sys/vm/swap_pager.c
  user/jeff/numa/sys/vm/vm_glue.c
  user/jeff/numa/sys/vm/vm_meter.c
  user/jeff/numa/sys/vm/vm_mmap.c
  user/jeff/numa/sys/vm/vm_page.c
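A note on the mechanism, since the log is terse: VM_CNT_ADD() is the
vmmeter wrapper around FreeBSD's counter(9) per-CPU counters, where an
update touches only the current CPU's slot and a read sums every slot.
The sketch below is a user-space model of that idea, not the kernel
implementation; the fixed array indexed by sched_getcpu() is
illustrative only.

    #define _GNU_SOURCE             /* for sched_getcpu() */
    #include <sched.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NCPU_MAX        256
    #define CACHELINE       64

    /* One padded slot per CPU so updates never share a cache line. */
    struct pcpu_counter {
            struct {
                    int64_t val;
                    char    pad[CACHELINE - sizeof(int64_t)];
            } slot[NCPU_MAX];
    };

    /* Cheap update: a plain add to this CPU's slot, no atomic RMW. */
    static void
    counter_add(struct pcpu_counter *c, int64_t v)
    {
            c->slot[sched_getcpu() % NCPU_MAX].val += v;
    }

    /*
     * Expensive read: sum every slot; only approximate while other
     * CPUs are updating, which is acceptable for statistics.
     */
    static int64_t
    counter_fetch(const struct pcpu_counter *c)
    {
            int64_t sum = 0;

            for (int i = 0; i < NCPU_MAX; i++)
                    sum += c->slot[i].val;
            return (sum);
    }

    int
    main(void)
    {
            static struct pcpu_counter wire_count;

            counter_add(&wire_count, 1);    /* wire a page */
            counter_add(&wire_count, -1);   /* unwire: note the negative add */
            printf("%lld\n", (long long)counter_fetch(&wire_count));
            return (0);
    }

Because the facility exposes only a signed add and a fetch, each
atomic_subtract_int(&vm_cnt.v_wire_count, n) below becomes
VM_CNT_ADD(v_wire_count, -n), and readers go through the new
vm_wire_count() inline (see the sys/vmmeter.h hunk), which wraps
counter_u64_fetch().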
Modified: user/jeff/numa/sys/amd64/amd64/efirt_machdep.c
==============================================================================
--- user/jeff/numa/sys/amd64/amd64/efirt_machdep.c	Thu Feb  8 05:18:30 2018	(r329013)
+++ user/jeff/numa/sys/amd64/amd64/efirt_machdep.c	Thu Feb  8 07:52:30 2018	(r329014)
@@ -74,8 +74,8 @@ efi_destroy_1t1_map(void)
 	VM_OBJECT_RLOCK(obj_1t1_pt);
 	TAILQ_FOREACH(m, &obj_1t1_pt->memq, listq)
 		m->wire_count = 0;
-	atomic_subtract_int(&vm_cnt.v_wire_count,
-	    obj_1t1_pt->resident_page_count);
+	VM_CNT_ADD(v_wire_count,
+	    -obj_1t1_pt->resident_page_count);
 	VM_OBJECT_RUNLOCK(obj_1t1_pt);
 	vm_object_deallocate(obj_1t1_pt);
 }

Modified: user/jeff/numa/sys/amd64/amd64/pmap.c
==============================================================================
--- user/jeff/numa/sys/amd64/amd64/pmap.c	Thu Feb  8 05:18:30 2018	(r329013)
+++ user/jeff/numa/sys/amd64/amd64/pmap.c	Thu Feb  8 07:52:30 2018	(r329014)
@@ -2372,7 +2372,7 @@ pmap_free_zero_pages(struct spglist *free)
 		/* Preserve the page's PG_ZERO setting. */
 		vm_page_free_toq(m);
 	}
-	atomic_subtract_int(&vm_cnt.v_wire_count, count);
+	VM_CNT_ADD(v_wire_count, -count);
 }

 /*
@@ -2723,8 +2723,7 @@ _pmap_allocpte(pmap_t pmap, vm_pindex_t ptepindex, str
 			/* Have to allocate a new pdp, recurse */
 			if (_pmap_allocpte(pmap, NUPDE + NUPDPE + pml4index,
 			    lockp) == NULL) {
-				--m->wire_count;
-				atomic_subtract_int(&vm_cnt.v_wire_count, 1);
+				vm_page_unwire(m, PQ_NONE);
 				vm_page_free_zero(m);
 				return (NULL);
 			}
@@ -2756,8 +2755,7 @@ _pmap_allocpte(pmap_t pmap, vm_pindex_t ptepindex, str
 				/* Have to allocate a new pd, recurse */
 				if (_pmap_allocpte(pmap, NUPDE + pdpindex,
 				    lockp) == NULL) {
-					--m->wire_count;
-					atomic_subtract_int(&vm_cnt.v_wire_count, 1);
+					vm_page_unwire(m, PQ_NONE);
 					vm_page_free_zero(m);
 					return (NULL);
 				}
@@ -2770,9 +2768,7 @@ _pmap_allocpte(pmap_t pmap, vm_pindex_t ptepindex, str
 					/* Have to allocate a new pd, recurse */
 					if (_pmap_allocpte(pmap, NUPDE + pdpindex,
 					    lockp) == NULL) {
-						--m->wire_count;
-						atomic_subtract_int(&vm_cnt.v_wire_count,
-						    1);
+						vm_page_unwire(m, PQ_NONE);
 						vm_page_free_zero(m);
 						return (NULL);
 					}
@@ -2904,14 +2900,12 @@ pmap_release(pmap_t pmap)
 		pmap->pm_pml4[DMPML4I + i] = 0;
 	pmap->pm_pml4[PML4PML4I] = 0;	/* Recursive Mapping */

-	m->wire_count--;
-	atomic_subtract_int(&vm_cnt.v_wire_count, 1);
+	vm_page_unwire(m, PQ_NONE);
 	vm_page_free_zero(m);

 	if (pmap->pm_pml4u != NULL) {
 		m = PHYS_TO_VM_PAGE(DMAP_TO_PHYS((vm_offset_t)pmap->pm_pml4u));
-		m->wire_count--;
-		atomic_subtract_int(&vm_cnt.v_wire_count, 1);
+		vm_page_unwire(m, PQ_NONE);
 		vm_page_free(m);
 	}
 }
@@ -7711,10 +7705,8 @@ pmap_pti_free_page(vm_page_t m)
 {

 	KASSERT(m->wire_count > 0, ("page %p not wired", m));
-	m->wire_count--;
-	if (m->wire_count != 0)
+	if (vm_page_unwire(m, PQ_NONE) == false)
 		return (false);
-	atomic_subtract_int(&vm_cnt.v_wire_count, 1);
 	vm_page_free_zero(m);
 	return (true);
 }
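Most of the per-architecture hunks in this diff collapse the open-coded
pair "--m->wire_count; atomic_subtract_int(&vm_cnt.v_wire_count, 1);"
into vm_page_unwire(m, PQ_NONE), keeping the page's wire count and the
global counter consistent in one place (the function itself is updated
in the vm/vm_page.c hunk at the end). A toy model of the semantics the
callers rely on, with stand-in types rather than kernel ones:

    #include <stdbool.h>
    #include <stdio.h>

    struct vm_page { unsigned wire_count; };

    static long long v_wire_count;  /* stand-in for the per-CPU counter */

    static bool
    vm_page_unwire(struct vm_page *m)
    {
            if (m->wire_count == 0)
                    return (false);
            if (--m->wire_count != 0)
                    return (false); /* page is still wired */
            v_wire_count--;         /* kernel: VM_CNT_ADD(v_wire_count, -1) */
            return (true);          /* last wiring dropped */
    }

    int
    main(void)
    {
            struct vm_page m = { .wire_count = 2 };
            bool first = vm_page_unwire(&m);    /* false: one wiring left */
            bool last = vm_page_unwire(&m);     /* true: count hits zero */

            printf("%d %d %lld\n", first, last, v_wire_count);  /* 0 1 -1 */
            return (0);
    }

pmap_pti_free_page() above depends on exactly this return value: true
means the last wiring was dropped and the page may be freed.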
Modified: user/jeff/numa/sys/amd64/amd64/uma_machdep.c
==============================================================================
--- user/jeff/numa/sys/amd64/amd64/uma_machdep.c	Thu Feb  8 05:18:30 2018	(r329013)
+++ user/jeff/numa/sys/amd64/amd64/uma_machdep.c	Thu Feb  8 07:52:30 2018	(r329014)
@@ -74,7 +74,6 @@ uma_small_free(void *mem, vm_size_t size, u_int8_t fla
 	pa = DMAP_TO_PHYS((vm_offset_t)mem);
 	dump_drop_page(pa);
 	m = PHYS_TO_VM_PAGE(pa);
-	m->wire_count--;
+	vm_page_unwire(m, PQ_NONE);
 	vm_page_free(m);
-	atomic_subtract_int(&vm_cnt.v_wire_count, 1);
 }

Modified: user/jeff/numa/sys/arm/arm/pmap-v6.c
==============================================================================
--- user/jeff/numa/sys/arm/arm/pmap-v6.c	Thu Feb  8 05:18:30 2018	(r329013)
+++ user/jeff/numa/sys/arm/arm/pmap-v6.c	Thu Feb  8 07:52:30 2018	(r329014)
@@ -2634,11 +2634,12 @@ pmap_unwire_pt2pg(pmap_t pmap, vm_offset_t va, vm_page
 	pmap->pm_stats.resident_count--;

 	/*
-	 * This is a release store so that the ordinary store unmapping
+	 * This barrier is so that the ordinary store unmapping
 	 * the L2 page table page is globally performed before TLB shoot-
 	 * down is begun.
 	 */
-	atomic_subtract_rel_int(&vm_cnt.v_wire_count, 1);
+	wmb();
+	VM_CNT_ADD(v_wire_count, -1);
 }

 /*
@@ -2945,7 +2946,7 @@ out:
 		SLIST_REMOVE_HEAD(&free, plinks.s.ss);
 		/* Recycle a freed page table page. */
 		m_pc->wire_count = 1;
-		atomic_add_int(&vm_cnt.v_wire_count, 1);
+		VM_CNT_ADD(v_wire_count, 1);
 	}
 	pmap_free_zero_pages(&free);
 	return (m_pc);
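The wmb() added above (and in the arm64 and riscv pmaps below) replaces
ordering that atomic_subtract_rel_int() used to provide as a side
effect: a per-CPU counter update is a plain store with no memory
ordering, so the requirement that the unmapping store become globally
visible before the TLB shootdown needs an explicit write barrier. A
standalone C11 model of the substitution (names are illustrative, and
wmb() is modeled here by a release fence):

    #include <stdatomic.h>

    static _Atomic int mapped = 1;
    static _Atomic int wire_count = 1;

    /* Before: the release atomic orders the unmapping store. */
    static void
    unwire_with_release(void)
    {
            atomic_store_explicit(&mapped, 0, memory_order_relaxed);
            atomic_fetch_sub_explicit(&wire_count, 1, memory_order_release);
    }

    /*
     * After: an explicit fence supplies at least the same ordering,
     * so the counter update itself can be a plain (per-CPU) store.
     */
    static void
    unwire_with_fence(void)
    {
            atomic_store_explicit(&mapped, 0, memory_order_relaxed);
            atomic_thread_fence(memory_order_release);      /* wmb() */
            atomic_store_explicit(&wire_count, 0, memory_order_relaxed);
    }

    int
    main(void)
    {
            unwire_with_release();
            unwire_with_fence();
            return (0);
    }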
Modified: user/jeff/numa/sys/arm64/arm64/efirt_machdep.c
==============================================================================
--- user/jeff/numa/sys/arm64/arm64/efirt_machdep.c	Thu Feb  8 05:18:30 2018	(r329013)
+++ user/jeff/numa/sys/arm64/arm64/efirt_machdep.c	Thu Feb  8 07:52:30 2018	(r329014)
@@ -75,8 +75,8 @@ efi_destroy_1t1_map(void)
 	VM_OBJECT_RLOCK(obj_1t1_pt);
 	TAILQ_FOREACH(m, &obj_1t1_pt->memq, listq)
 		m->wire_count = 0;
-	atomic_subtract_int(&vm_cnt.v_wire_count,
-	    obj_1t1_pt->resident_page_count);
+	VM_CNT_ADD(v_wire_count,
+	    -obj_1t1_pt->resident_page_count);
 	VM_OBJECT_RUNLOCK(obj_1t1_pt);
 	vm_object_deallocate(obj_1t1_pt);
 }

Modified: user/jeff/numa/sys/arm64/arm64/pmap.c
==============================================================================
--- user/jeff/numa/sys/arm64/arm64/pmap.c	Thu Feb  8 05:18:30 2018	(r329013)
+++ user/jeff/numa/sys/arm64/arm64/pmap.c	Thu Feb  8 07:52:30 2018	(r329014)
@@ -1363,11 +1363,12 @@ _pmap_unwire_l3(pmap_t pmap, vm_offset_t va, vm_page_t
 	pmap_invalidate_page(pmap, va);

 	/*
-	 * This is a release store so that the ordinary store unmapping
+	 * This barrier is so that the ordinary store unmapping
 	 * the page table page is globally performed before TLB shoot-
 	 * down is begun.
 	 */
-	atomic_subtract_rel_int(&vm_cnt.v_wire_count, 1);
+	wmb();
+	VM_CNT_ADD(v_wire_count, -1);

 	/*
 	 * Put page on a list so that it is released after
@@ -1493,9 +1494,8 @@ _pmap_alloc_l3(pmap_t pmap, vm_pindex_t ptepindex, str
 		/* recurse for allocating page dir */
 		if (_pmap_alloc_l3(pmap, NUL2E + NUL1E + l0index,
 		    lockp) == NULL) {
-			--m->wire_count;	/* XXX: release mem barrier? */
-			atomic_subtract_int(&vm_cnt.v_wire_count, 1);
+			vm_page_unwire(m, PQ_NONE);
 			vm_page_free_zero(m);
 			return (NULL);
 		}
@@ -1521,8 +1521,7 @@ _pmap_alloc_l3(pmap_t pmap, vm_pindex_t ptepindex, str
 			/* recurse for allocating page dir */
 			if (_pmap_alloc_l3(pmap, NUL2E + l1index,
 			    lockp) == NULL) {
-				--m->wire_count;
-				atomic_subtract_int(&vm_cnt.v_wire_count, 1);
+				vm_page_unwire(m, PQ_NONE);
 				vm_page_free_zero(m);
 				return (NULL);
 			}
@@ -1537,10 +1536,8 @@ _pmap_alloc_l3(pmap_t pmap, vm_pindex_t ptepindex, str
 				/* recurse for allocating page dir */
 				if (_pmap_alloc_l3(pmap, NUL2E + l1index,
 				    lockp) == NULL) {
-					--m->wire_count; /* XXX: release mem barrier? */
-					atomic_subtract_int(
-					    &vm_cnt.v_wire_count, 1);
+					vm_page_unwire(m, PQ_NONE);
 					vm_page_free_zero(m);
 					return (NULL);
 				}
@@ -1648,8 +1645,7 @@ pmap_release(pmap_t pmap)

 	m = PHYS_TO_VM_PAGE(DMAP_TO_PHYS((vm_offset_t)pmap->pm_l0));

-	m->wire_count--;
-	atomic_subtract_int(&vm_cnt.v_wire_count, 1);
+	vm_page_unwire(m, PQ_NONE);
 	vm_page_free_zero(m);
 }
@@ -1919,7 +1915,7 @@ reclaim_pv_chunk(pmap_t locked_pmap, struct rwlock **l
 		SLIST_REMOVE_HEAD(&free, plinks.s.ss);
 		/* Recycle a freed page table page. */
 		m_pc->wire_count = 1;
-		atomic_add_int(&vm_cnt.v_wire_count, 1);
+		VM_CNT_ADD(v_wire_count, 1);
 	}
 	pmap_free_zero_pages(&free);
 	return (m_pc);
@@ -2276,9 +2272,9 @@ pmap_remove_l2(pmap_t pmap, pt_entry_t *l2, vm_offset_
 		pmap_resident_count_dec(pmap, 1);
 		KASSERT(ml3->wire_count == NL3PG,
 		    ("pmap_remove_pages: l3 page wire count error"));
-		ml3->wire_count = 0;
+		ml3->wire_count = 1;
+		vm_page_unwire(ml3, PQ_NONE);
 		pmap_add_delayed_free_list(ml3, free, FALSE);
-		atomic_subtract_int(&vm_cnt.v_wire_count, 1);
 	}
 	return (pmap_unuse_pt(pmap, sva, l1e, free));
 }
@@ -3723,11 +3719,10 @@ pmap_remove_pages(pmap_t pmap)
 					pmap_resident_count_dec(pmap,1);
 					KASSERT(ml3->wire_count == NL3PG,
 					    ("pmap_remove_pages: l3 page wire count error"));
-					ml3->wire_count = 0;
+					ml3->wire_count = 1;
+					vm_page_unwire(ml3, PQ_NONE);
 					pmap_add_delayed_free_list(ml3,
 					    &free, FALSE);
-					atomic_subtract_int(
-					    &vm_cnt.v_wire_count, 1);
 				}
 				break;
 			case 2:

Modified: user/jeff/numa/sys/arm64/arm64/uma_machdep.c
==============================================================================
--- user/jeff/numa/sys/arm64/arm64/uma_machdep.c	Thu Feb  8 05:18:30 2018	(r329013)
+++ user/jeff/numa/sys/arm64/arm64/uma_machdep.c	Thu Feb  8 07:52:30 2018	(r329014)
@@ -72,7 +72,6 @@ uma_small_free(void *mem, vm_size_t size, u_int8_t fla
 	pa = DMAP_TO_PHYS((vm_offset_t)mem);
 	dump_drop_page(pa);
 	m = PHYS_TO_VM_PAGE(pa);
-	m->wire_count--;
+	vm_page_unwire(m, PQ_NONE);
 	vm_page_free(m);
-	atomic_subtract_int(&vm_cnt.v_wire_count, 1);
 }
Modified: user/jeff/numa/sys/compat/linprocfs/linprocfs.c
==============================================================================
--- user/jeff/numa/sys/compat/linprocfs/linprocfs.c	Thu Feb  8 05:18:30 2018	(r329013)
+++ user/jeff/numa/sys/compat/linprocfs/linprocfs.c	Thu Feb  8 07:52:30 2018	(r329014)
@@ -163,7 +163,7 @@ linprocfs_domeminfo(PFS_FILL_ARGS)
 	 * is very little memory left, so we cheat and tell them that
 	 * all memory that isn't wired down is free.
 	 */
-	memused = vm_cnt.v_wire_count * PAGE_SIZE;
+	memused = vm_wire_count() * PAGE_SIZE;
 	memfree = memtotal - memused;
 	swap_pager_status(&i, &j);
 	swaptotal = (unsigned long long)i * PAGE_SIZE;

Modified: user/jeff/numa/sys/compat/linux/linux_misc.c
==============================================================================
--- user/jeff/numa/sys/compat/linux/linux_misc.c	Thu Feb  8 05:18:30 2018	(r329013)
+++ user/jeff/numa/sys/compat/linux/linux_misc.c	Thu Feb  8 07:52:30 2018	(r329014)
@@ -165,7 +165,7 @@ linux_sysinfo(struct thread *td, struct linux_sysinfo_
 	    LINUX_SYSINFO_LOADS_SCALE / averunnable.fscale;

 	sysinfo.totalram = physmem * PAGE_SIZE;
-	sysinfo.freeram = sysinfo.totalram - vm_cnt.v_wire_count * PAGE_SIZE;
+	sysinfo.freeram = sysinfo.totalram - vm_wire_count() * PAGE_SIZE;

 	sysinfo.sharedram = 0;
 	mtx_lock(&vm_object_list_mtx);

Modified: user/jeff/numa/sys/i386/i386/pmap.c
==============================================================================
--- user/jeff/numa/sys/i386/i386/pmap.c	Thu Feb  8 05:18:30 2018	(r329013)
+++ user/jeff/numa/sys/i386/i386/pmap.c	Thu Feb  8 07:52:30 2018	(r329014)
@@ -1718,7 +1718,7 @@ pmap_free_zero_pages(struct spglist *free)
 		/* Preserve the page's PG_ZERO setting. */
 		vm_page_free_toq(m);
 	}
-	atomic_subtract_int(&vm_cnt.v_wire_count, count);
+	VM_CNT_ADD(v_wire_count, -count);
 }

 /*
@@ -2060,7 +2060,7 @@ pmap_release(pmap_t pmap)
 		m->wire_count--;
 		vm_page_free_zero(m);
 	}
-	atomic_subtract_int(&vm_cnt.v_wire_count, NPGPTD);
+	VM_CNT_ADD(v_wire_count, -NPGPTD);
 }

 static int

Modified: user/jeff/numa/sys/kern/kern_mib.c
==============================================================================
--- user/jeff/numa/sys/kern/kern_mib.c	Thu Feb  8 05:18:30 2018	(r329013)
+++ user/jeff/numa/sys/kern/kern_mib.c	Thu Feb  8 07:52:30 2018	(r329014)
@@ -206,7 +206,7 @@ sysctl_hw_usermem(SYSCTL_HANDLER_ARGS)
 {
 	u_long val;

-	val = ctob(physmem - vm_cnt.v_wire_count);
+	val = ctob(physmem - vm_wire_count());
 	return (sysctl_handle_long(oidp, &val, 0, req));
 }

Modified: user/jeff/numa/sys/kern/subr_pcpu.c
==============================================================================
--- user/jeff/numa/sys/kern/subr_pcpu.c	Thu Feb  8 05:18:30 2018	(r329013)
+++ user/jeff/numa/sys/kern/subr_pcpu.c	Thu Feb  8 07:52:30 2018	(r329014)
@@ -151,7 +151,7 @@ pcpu_zones_startup(void)
 	pcpu_zone_ptr = uma_zcreate("ptr pcpu", sizeof(void *),
 	    NULL, NULL, NULL, NULL, UMA_ALIGN_PTR, UMA_ZONE_PCPU);
 }
-SYSINIT(pcpu_zones, SI_SUB_KMEM, SI_ORDER_ANY, pcpu_zones_startup, NULL);
+SYSINIT(pcpu_zones, SI_SUB_VM, SI_ORDER_ANY, pcpu_zones_startup, NULL);

 /*
  * First-fit extent based allocator for allocating space in the per-cpu

Modified: user/jeff/numa/sys/kern/vfs_bio.c
==============================================================================
--- user/jeff/numa/sys/kern/vfs_bio.c	Thu Feb  8 05:18:30 2018	(r329013)
+++ user/jeff/numa/sys/kern/vfs_bio.c	Thu Feb  8 07:52:30 2018	(r329014)
@@ -4712,7 +4712,7 @@ vm_hold_free_pages(struct buf *bp, int newbsize)
 		p->wire_count--;
 		vm_page_free(p);
 	}
-	atomic_subtract_int(&vm_cnt.v_wire_count, bp->b_npages - newnpages);
+	VM_CNT_ADD(v_wire_count, -(bp->b_npages - newnpages));
 	bp->b_npages = newnpages;
 }
Modified: user/jeff/numa/sys/mips/mips/pmap.c
==============================================================================
--- user/jeff/numa/sys/mips/mips/pmap.c	Thu Feb  8 05:18:30 2018	(r329013)
+++ user/jeff/numa/sys/mips/mips/pmap.c	Thu Feb  8 07:52:30 2018	(r329014)
@@ -1009,7 +1009,7 @@ _pmap_unwire_ptp(pmap_t pmap, vm_offset_t va, vm_page_
 	 *
 	 * If the page is finally unwired, simply free it.
 	 */
 	vm_page_free_zero(m);
-	atomic_subtract_int(&vm_cnt.v_wire_count, 1);
+	VM_CNT_ADD(v_wire_count, -1);
 }

 /*
@@ -1159,8 +1159,7 @@ _pmap_allocpte(pmap_t pmap, unsigned ptepindex, u_int
 		if (_pmap_allocpte(pmap, NUPDE + segindex,
 		    flags) == NULL) {
 			/* alloc failed, release current */
-			--m->wire_count;
-			atomic_subtract_int(&vm_cnt.v_wire_count, 1);
+			vm_page_unwire(m, PQ_NONE);
 			vm_page_free_zero(m);
 			return (NULL);
 		}
@@ -1238,8 +1237,7 @@ pmap_release(pmap_t pmap)

 	ptdva = (vm_offset_t)pmap->pm_segtab;
 	ptdpg = PHYS_TO_VM_PAGE(MIPS_DIRECT_TO_PHYS(ptdva));
-	ptdpg->wire_count--;
-	atomic_subtract_int(&vm_cnt.v_wire_count, 1);
+	vm_page_unwire(ptdpg, PQ_NONE);
 	vm_page_free_zero(ptdpg);
 }

Modified: user/jeff/numa/sys/mips/mips/uma_machdep.c
==============================================================================
--- user/jeff/numa/sys/mips/mips/uma_machdep.c	Thu Feb  8 05:18:30 2018	(r329013)
+++ user/jeff/numa/sys/mips/mips/uma_machdep.c	Thu Feb  8 07:52:30 2018	(r329014)
@@ -94,7 +94,6 @@ uma_small_free(void *mem, vm_size_t size, u_int8_t fla
 	pa = MIPS_DIRECT_TO_PHYS((vm_offset_t)mem);
 	dump_drop_page(pa);
 	m = PHYS_TO_VM_PAGE(pa);
-	m->wire_count--;
+	vm_page_unwire(m, PQ_NONE);
 	vm_page_free(m);
-	atomic_subtract_int(&vm_cnt.v_wire_count, 1);
 }

Modified: user/jeff/numa/sys/powerpc/booke/pmap.c
==============================================================================
--- user/jeff/numa/sys/powerpc/booke/pmap.c	Thu Feb  8 05:18:30 2018	(r329013)
+++ user/jeff/numa/sys/powerpc/booke/pmap.c	Thu Feb  8 07:52:30 2018	(r329014)
@@ -681,7 +681,7 @@ pdir_free(mmu_t mmu, pmap_t pmap, unsigned int pp2d_id
 		pa = pte_vatopa(mmu, kernel_pmap, va);
 		m = PHYS_TO_VM_PAGE(pa);
 		vm_page_free_zero(m);
-		atomic_subtract_int(&vm_cnt.v_wire_count, 1);
+		VM_CNT_ADD(v_wire_count, -1);
 		pmap_kremove(va);
 	}

@@ -786,7 +786,7 @@ ptbl_alloc(mmu_t mmu, pmap_t pmap, pte_t ** pdir, unsi
 				ptbl_free_pmap_ptbl(pmap, ptbl);
 				for (j = 0; j < i; j++)
 					vm_page_free(mtbl[j]);
-				atomic_subtract_int(&vm_cnt.v_wire_count, i);
+				VM_CNT_ADD(v_wire_count, -i);
 				return (NULL);
 			}
 			VM_WAIT;
@@ -828,7 +828,7 @@ ptbl_free(mmu_t mmu, pmap_t pmap, pte_t ** pdir, unsig
 		pa = pte_vatopa(mmu, kernel_pmap, va);
 		m = PHYS_TO_VM_PAGE(pa);
 		vm_page_free_zero(m);
-		atomic_subtract_int(&vm_cnt.v_wire_count, 1);
+		VM_CNT_ADD(v_wire_count, -1);
 		pmap_kremove(va);
 	}

@@ -1030,7 +1030,7 @@ ptbl_alloc(mmu_t mmu, pmap_t pmap, unsigned int pdir_i
 			ptbl_free_pmap_ptbl(pmap, ptbl);
 			for (j = 0; j < i; j++)
 				vm_page_free(mtbl[j]);
-			atomic_subtract_int(&vm_cnt.v_wire_count, i);
+			VM_CNT_ADD(v_wire_count, -i);
 			return (NULL);
 		}
 		VM_WAIT;
@@ -1091,7 +1091,7 @@ ptbl_free(mmu_t mmu, pmap_t pmap, unsigned int pdir_id
 		pa = pte_vatopa(mmu, kernel_pmap, va);
 		m = PHYS_TO_VM_PAGE(pa);
 		vm_page_free_zero(m);
-		atomic_subtract_int(&vm_cnt.v_wire_count, 1);
+		VM_CNT_ADD(v_wire_count, -1);
 		mmu_booke_kremove(mmu, va);
 	}

Modified: user/jeff/numa/sys/powerpc/powerpc/uma_machdep.c
==============================================================================
--- user/jeff/numa/sys/powerpc/powerpc/uma_machdep.c	Thu Feb  8 05:18:30 2018	(r329013)
+++ user/jeff/numa/sys/powerpc/powerpc/uma_machdep.c	Thu Feb  8 07:52:30 2018	(r329014)
@@ -95,8 +95,7 @@ uma_small_free(void *mem, vm_size_t size, u_int8_t fla
 	    (vm_offset_t)mem + PAGE_SIZE);
 	m = PHYS_TO_VM_PAGE((vm_offset_t)mem);
-	m->wire_count--;
+	vm_page_unwire(m, PQ_NONE);
 	vm_page_free(m);
-	atomic_subtract_int(&vm_cnt.v_wire_count, 1);
 	atomic_subtract_int(&hw_uma_mdpages, 1);
 }
Modified: user/jeff/numa/sys/riscv/riscv/pmap.c
==============================================================================
--- user/jeff/numa/sys/riscv/riscv/pmap.c	Thu Feb  8 05:18:30 2018	(r329013)
+++ user/jeff/numa/sys/riscv/riscv/pmap.c	Thu Feb  8 07:52:30 2018	(r329014)
@@ -1154,11 +1154,12 @@ _pmap_unwire_l3(pmap_t pmap, vm_offset_t va, vm_page_t
 	pmap_invalidate_page(pmap, va);

 	/*
-	 * This is a release store so that the ordinary store unmapping
+	 * This barrier is so that the ordinary store unmapping
 	 * the page table page is globally performed before TLB shoot-
 	 * down is begun.
 	 */
-	atomic_subtract_rel_int(&vm_cnt.v_wire_count, 1);
+	wmb();
+	VM_CNT_ADD(v_wire_count, -1);

 	/*
 	 * Put page on a list so that it is released after
@@ -1302,8 +1303,7 @@ _pmap_alloc_l3(pmap_t pmap, vm_pindex_t ptepindex, str
 			/* recurse for allocating page dir */
 			if (_pmap_alloc_l3(pmap, NUPDE + l1index,
 			    lockp) == NULL) {
-				--m->wire_count;
-				atomic_subtract_int(&vm_cnt.v_wire_count, 1);
+				vm_page_unwire(m, PQ_NONE);
 				vm_page_free_zero(m);
 				return (NULL);
 			}
@@ -1388,8 +1388,7 @@ pmap_release(pmap_t pmap)
 	    pmap->pm_stats.resident_count));

 	m = PHYS_TO_VM_PAGE(DMAP_TO_PHYS((vm_offset_t)pmap->pm_l1));
-	m->wire_count--;
-	atomic_subtract_int(&vm_cnt.v_wire_count, 1);
+	vm_page_unwire(m, PQ_NONE);
 	vm_page_free_zero(m);

 	/* Remove pmap from the allpmaps list */

Modified: user/jeff/numa/sys/sparc64/sparc64/pmap.c
==============================================================================
--- user/jeff/numa/sys/sparc64/sparc64/pmap.c	Thu Feb  8 05:18:30 2018	(r329013)
+++ user/jeff/numa/sys/sparc64/sparc64/pmap.c	Thu Feb  8 07:52:30 2018	(r329014)
@@ -1308,8 +1308,7 @@ pmap_release(pmap_t pm)
 	while (!TAILQ_EMPTY(&obj->memq)) {
 		m = TAILQ_FIRST(&obj->memq);
 		m->md.pmap = NULL;
-		m->wire_count--;
-		atomic_subtract_int(&vm_cnt.v_wire_count, 1);
+		vm_page_unwire(m, PQ_NONE);
 		vm_page_free_zero(m);
 	}
 	VM_OBJECT_WUNLOCK(obj);

Modified: user/jeff/numa/sys/sparc64/sparc64/vm_machdep.c
==============================================================================
--- user/jeff/numa/sys/sparc64/sparc64/vm_machdep.c	Thu Feb  8 05:18:30 2018	(r329013)
+++ user/jeff/numa/sys/sparc64/sparc64/vm_machdep.c	Thu Feb  8 07:52:30 2018	(r329014)
@@ -429,9 +429,8 @@ uma_small_free(void *mem, vm_size_t size, u_int8_t fla

 	PMAP_STATS_INC(uma_nsmall_free);
 	m = PHYS_TO_VM_PAGE(TLB_DIRECT_TO_PHYS((vm_offset_t)mem));
-	m->wire_count--;
+	vm_page_unwire(m, PQ_NONE);
 	vm_page_free(m);
-	atomic_subtract_int(&vm_cnt.v_wire_count, 1);
 }

 void

Modified: user/jeff/numa/sys/sys/pmc.h
==============================================================================
--- user/jeff/numa/sys/sys/pmc.h	Thu Feb  8 05:18:30 2018	(r329013)
+++ user/jeff/numa/sys/sys/pmc.h	Thu Feb  8 07:52:30 2018	(r329014)
@@ -623,12 +623,12 @@ struct pmc_op_getdyneventinfo {

 #include 

-#define	PMC_HASH_SIZE		1024
-#define	PMC_MTXPOOL_SIZE	2048
-#define	PMC_LOG_BUFFER_SIZE	4
-#define	PMC_NLOGBUFFERS		1024
-#define	PMC_NSAMPLES		1024
-#define	PMC_CALLCHAIN_DEPTH	32
+#define	PMC_HASH_SIZE		2048
+#define	PMC_MTXPOOL_SIZE	4096
+#define	PMC_LOG_BUFFER_SIZE	32
+#define	PMC_NLOGBUFFERS		32768
+#define	PMC_NSAMPLES		4096
+#define	PMC_CALLCHAIN_DEPTH	64

 #define PMC_SYSCTL_NAME_PREFIX "kern." PMC_MODULE_NAME "."
Modified: user/jeff/numa/sys/sys/vmmeter.h
==============================================================================
--- user/jeff/numa/sys/sys/vmmeter.h	Thu Feb  8 05:18:30 2018	(r329013)
+++ user/jeff/numa/sys/sys/vmmeter.h	Thu Feb  8 07:52:30 2018	(r329014)
@@ -125,6 +125,7 @@ struct vmmeter {
 	counter_u64_t v_vforkpages;	/* (p) pages affected by vfork() */
 	counter_u64_t v_rforkpages;	/* (p) pages affected by rfork() */
 	counter_u64_t v_kthreadpages;	/* (p) ... and by kernel fork() */
+	counter_u64_t v_wire_count;	/* (p) pages wired down */
 #define	VM_METER_NCOUNTERS	\
 	(offsetof(struct vmmeter, v_page_size) / sizeof(counter_u64_t))
 	/*
@@ -139,7 +140,6 @@ struct vmmeter {
 	u_int v_pageout_free_min;   /* (c) min pages reserved for kernel */
 	u_int v_interrupt_free_min; /* (c) reserved pages for int code */
 	u_int v_free_severe;	/* (c) severe page depletion point */
-	u_int v_wire_count VMMETER_ALIGNED; /* (a) pages wired down */
 };
 #endif /* _KERNEL || _WANT_VMMETER */

@@ -156,6 +156,12 @@ extern domainset_t vm_severe_domains;
 #define	VM_CNT_FETCH(var)	counter_u64_fetch(vm_cnt.var)

 u_int vm_free_count(void);
+static inline u_int
+vm_wire_count(void)
+{
+
+	return VM_CNT_FETCH(v_wire_count);
+}

 /*
  * Return TRUE if we are under our severe low-free-pages threshold
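The placement of the new field matters: VM_METER_NCOUNTERS, visible in
the context above, derives the number of counter_u64_t members from the
offset of the first non-counter field, so COUNTER_ARRAY_ALLOC() in
vm_meter.c below picks up v_wire_count automatically. A reduced model
of that trick (struct and field names here are stand-ins):

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef uint64_t *counter_u64_t;

    struct vmmeter_model {
            counter_u64_t v_kthreadpages;   /* last pre-existing counter */
            counter_u64_t v_wire_count;     /* new member, counted for free */
            unsigned int v_page_size;       /* first non-counter field */
    };

    #define NCOUNTERS \
            (offsetof(struct vmmeter_model, v_page_size) / sizeof(counter_u64_t))

    int
    main(void)
    {
            printf("%zu counters\n", NCOUNTERS);    /* prints: 2 counters */
            return (0);
    }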
Modified: user/jeff/numa/sys/vm/swap_pager.c
==============================================================================
--- user/jeff/numa/sys/vm/swap_pager.c	Thu Feb  8 05:18:30 2018	(r329013)
+++ user/jeff/numa/sys/vm/swap_pager.c	Thu Feb  8 07:52:30 2018	(r329014)
@@ -209,7 +209,8 @@ swap_reserve_by_cred(vm_ooffset_t incr, struct ucred *
 	mtx_lock(&sw_dev_mtx);
 	r = swap_reserved + incr;
 	if (overcommit & SWAP_RESERVE_ALLOW_NONWIRED) {
-		s = vm_cnt.v_page_count - vm_cnt.v_free_reserved - vm_cnt.v_wire_count;
+		s = vm_cnt.v_page_count - vm_cnt.v_free_reserved -
+		    vm_wire_count();
 		s *= PAGE_SIZE;
 	} else
 		s = 0;

Modified: user/jeff/numa/sys/vm/vm_glue.c
==============================================================================
--- user/jeff/numa/sys/vm/vm_glue.c	Thu Feb  8 05:18:30 2018	(r329013)
+++ user/jeff/numa/sys/vm/vm_glue.c	Thu Feb  8 07:52:30 2018	(r329014)
@@ -191,7 +191,7 @@ vslock(void *addr, size_t len)
 	 * Also, the sysctl code, which is the only present user
 	 * of vslock(), does a hard loop on EAGAIN.
 	 */
-	if (npages + vm_cnt.v_wire_count > vm_page_max_wired)
+	if (npages + vm_wire_count() > vm_page_max_wired)
 		return (EAGAIN);
 #endif
 	error = vm_map_wire(&curproc->p_vmspace->vm_map, start, end,

Modified: user/jeff/numa/sys/vm/vm_meter.c
==============================================================================
--- user/jeff/numa/sys/vm/vm_meter.c	Thu Feb  8 05:18:30 2018	(r329013)
+++ user/jeff/numa/sys/vm/vm_meter.c	Thu Feb  8 07:52:30 2018	(r329014)
@@ -96,6 +96,7 @@ struct vmmeter __exclusive_cache_line vm_cnt = {
 	.v_vforkpages = EARLY_COUNTER,
 	.v_rforkpages = EARLY_COUNTER,
 	.v_kthreadpages = EARLY_COUNTER,
+	.v_wire_count = EARLY_COUNTER,
 };

 static void
@@ -105,7 +106,7 @@ vmcounter_startup(void)

 	COUNTER_ARRAY_ALLOC(cnt, VM_METER_NCOUNTERS, M_WAITOK);
 }
-SYSINIT(counter, SI_SUB_CPU, SI_ORDER_FOURTH + 1, vmcounter_startup, NULL);
+SYSINIT(counter, SI_SUB_KMEM, SI_ORDER_FIRST, vmcounter_startup, NULL);

 SYSCTL_UINT(_vm, VM_V_FREE_MIN, v_free_min, CTLFLAG_RW,
     &vm_cnt.v_free_min, 0, "Minimum low-free-pages threshold");
@@ -403,7 +404,7 @@ VM_STATS_UINT(v_free_reserved, "Pages reserved for dea
 VM_STATS_UINT(v_free_target, "Pages desired free");
 VM_STATS_UINT(v_free_min, "Minimum low-free-pages threshold");
 VM_STATS_PROC(v_free_count, "Free pages", vm_free_count);
-VM_STATS_UINT(v_wire_count, "Wired pages");
+VM_STATS_PROC(v_wire_count, "Wired pages", vm_wire_count);
 VM_STATS_PROC(v_active_count, "Active pages", vm_active_count);
 VM_STATS_UINT(v_inactive_target, "Desired inactive pages");
 VM_STATS_PROC(v_inactive_count, "Inactive pages", vm_inactive_count);

Modified: user/jeff/numa/sys/vm/vm_mmap.c
==============================================================================
--- user/jeff/numa/sys/vm/vm_mmap.c	Thu Feb  8 05:18:30 2018	(r329013)
+++ user/jeff/numa/sys/vm/vm_mmap.c	Thu Feb  8 07:52:30 2018	(r329014)
@@ -1002,7 +1002,7 @@ kern_mlock(struct proc *proc, struct ucred *cred, uint
 		return (ENOMEM);
 	}
 	PROC_UNLOCK(proc);
-	if (npages + vm_cnt.v_wire_count > vm_page_max_wired)
+	if (npages + vm_wire_count() > vm_page_max_wired)
 		return (EAGAIN);
 #ifdef RACCT
 	if (racct_enable) {
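The VM_STATS_PROC conversion above preserves the existing sysctl; only
its backing store changes, with reads now summing the per-CPU slots
through vm_wire_count(). A minimal userland check, assuming the
standard OID name vm.stats.vm.v_wire_count (which this diff does not
rename):

    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <stdio.h>

    int
    main(void)
    {
            u_int wired;
            size_t len = sizeof(wired);

            /* Query the kernel's wired-page count by OID name. */
            if (sysctlbyname("vm.stats.vm.v_wire_count", &wired, &len,
                NULL, 0) == -1) {
                    perror("sysctlbyname");
                    return (1);
            }
            printf("wired pages: %u\n", wired);
            return (0);
    }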
Modified: user/jeff/numa/sys/vm/vm_page.c
==============================================================================
--- user/jeff/numa/sys/vm/vm_page.c	Thu Feb  8 05:18:30 2018	(r329013)
+++ user/jeff/numa/sys/vm/vm_page.c	Thu Feb  8 07:52:30 2018	(r329014)
@@ -1848,7 +1848,7 @@ found:
 		 * The page lock is not required for wiring a page until that
 		 * page is inserted into the object.
 		 */
-		atomic_add_int(&vm_cnt.v_wire_count, 1);
+		VM_CNT_ADD(v_wire_count, 1);
 		m->wire_count = 1;
 	}
 	m->act_count = 0;
@@ -1857,7 +1857,7 @@ found:
 		if (vm_page_insert_after(m, object, pindex, mpred)) {
 			pagedaemon_wakeup(domain);
 			if (req & VM_ALLOC_WIRED) {
-				atomic_subtract_int(&vm_cnt.v_wire_count, 1);
+				VM_CNT_ADD(v_wire_count, -1);
 				m->wire_count = 0;
 			}
 			KASSERT(m->object == NULL, ("page %p has object", m));
@@ -2041,7 +2041,7 @@ found:
 	if ((req & VM_ALLOC_SBUSY) != 0)
 		busy_lock = VPB_SHARERS_WORD(1);
 	if ((req & VM_ALLOC_WIRED) != 0)
-		atomic_add_int(&vm_cnt.v_wire_count, npages);
+		VM_CNT_ADD(v_wire_count, npages);
 	if (object != NULL) {
 		if (object->memattr != VM_MEMATTR_DEFAULT &&
 		    memattr == VM_MEMATTR_DEFAULT)
@@ -2059,8 +2059,7 @@ found:
 		if (vm_page_insert_after(m, object, pindex, mpred)) {
 			pagedaemon_wakeup(domain);
 			if ((req & VM_ALLOC_WIRED) != 0)
-				atomic_subtract_int(
-				    &vm_cnt.v_wire_count, npages);
+				VM_CNT_ADD(v_wire_count, -npages);
 			KASSERT(m->object == NULL, ("page %p has object", m));
 			mpred = m;
@@ -2186,7 +2185,7 @@ again:
 		 * The page lock is not required for wiring a page that does
 		 * not belong to an object.
 		 */
-		atomic_add_int(&vm_cnt.v_wire_count, 1);
+		VM_CNT_ADD(v_wire_count, 1);
 		m->wire_count = 1;
 	}
 	/* Unmanaged pages don't use "act_count". */
@@ -3405,7 +3404,7 @@ vm_page_wire(vm_page_t m)
 		    m->queue == PQ_NONE,
 		    ("vm_page_wire: unmanaged page %p is queued", m));
 		vm_page_remque(m);
-		atomic_add_int(&vm_cnt.v_wire_count, 1);
+		VM_CNT_ADD(v_wire_count, 1);
 	}
 	m->wire_count++;
 	KASSERT(m->wire_count != 0, ("vm_page_wire: wire_count overflow m=%p", m));
@@ -3444,7 +3443,7 @@ vm_page_unwire(vm_page_t m, uint8_t queue)
 	if (m->wire_count > 0) {
 		m->wire_count--;
 		if (m->wire_count == 0) {
-			atomic_subtract_int(&vm_cnt.v_wire_count, 1);
+			VM_CNT_ADD(v_wire_count, -1);
 			if ((m->oflags & VPO_UNMANAGED) == 0 &&
 			    m->object != NULL && queue != PQ_NONE)
 				vm_page_enqueue(queue, m);
@@ -4285,7 +4284,7 @@ DB_SHOW_COMMAND(page, vm_page_print_page_info)
 	db_printf("vm_cnt.v_inactive_count: %d\n", vm_inactive_count());
 	db_printf("vm_cnt.v_active_count: %d\n", vm_active_count());
 	db_printf("vm_cnt.v_laundry_count: %d\n", vm_laundry_count());
-	db_printf("vm_cnt.v_wire_count: %d\n", vm_cnt.v_wire_count);
+	db_printf("vm_cnt.v_wire_count: %d\n", vm_wire_count());
 	db_printf("vm_cnt.v_free_reserved: %d\n", vm_cnt.v_free_reserved);
 	db_printf("vm_cnt.v_free_min: %d\n", vm_cnt.v_free_min);
 	db_printf("vm_cnt.v_free_target: %d\n", vm_cnt.v_free_target);