From owner-svn-src-stable-10@FreeBSD.ORG Fri Jan 2 17:45:55 2015
Return-Path:
Delivered-To: svn-src-stable-10@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by hub.freebsd.org (Postfix) with ESMTPS id 108ABA67;
 Fri, 2 Jan 2015 17:45:55 +0000 (UTC)
Received: from svn.freebsd.org (svn.freebsd.org [IPv6:2001:1900:2254:2068::e6a:0])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client did not present a certificate)
 by mx1.freebsd.org (Postfix) with ESMTPS id EF1DC664D6;
 Fri, 2 Jan 2015 17:45:54 +0000 (UTC)
Received: from svn.freebsd.org ([127.0.1.70])
 by svn.freebsd.org (8.14.9/8.14.9) with ESMTP id t02Hjsm1034453;
 Fri, 2 Jan 2015 17:45:54 GMT (envelope-from alc@FreeBSD.org)
Received: (from alc@localhost)
 by svn.freebsd.org (8.14.9/8.14.9/Submit) id t02Hjrv6034434;
 Fri, 2 Jan 2015 17:45:53 GMT (envelope-from alc@FreeBSD.org)
Message-Id: <201501021745.t02Hjrv6034434@svn.freebsd.org>
X-Authentication-Warning: svn.freebsd.org: alc set sender to alc@FreeBSD.org using -f
From: Alan Cox <alc@FreeBSD.org>
Date: Fri, 2 Jan 2015 17:45:53 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org,
 svn-src-stable@freebsd.org, svn-src-stable-10@freebsd.org
Subject: svn commit: r276546 - in stable/10/sys: amd64/amd64 arm/arm i386/i386
 i386/include vm
X-SVN-Group: stable-10
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: svn-src-stable-10@freebsd.org
X-Mailman-Version: 2.1.18-1
Precedence: list
List-Id: SVN commit messages for only the 10-stable src tree
List-Unsubscribe:
List-Archive:
List-Post:
List-Help:
List-Subscribe:
X-List-Received-Date: Fri, 02 Jan 2015 17:45:55 -0000

Author: alc
Date: Fri Jan 2 17:45:52 2015
New Revision: 276546
URL: https://svnweb.freebsd.org/changeset/base/276546

Log:
  MFC r273701, r274556

    By the time that pmap_init() runs, vm_phys_segs[] has been initialized.
    Obtaining the end of memory address from vm_phys_segs[] is a little
    easier than obtaining it from phys_avail[].

    Enable the use of VM_PHYSSEG_SPARSE on amd64 and i386, making it the
    default on i386 PAE.  (The use of VM_PHYSSEG_SPARSE on i386 PAE saves
    us some precious kernel virtual address space that would have been
    wasted on unused vm_page structures.)

Modified:
  stable/10/sys/amd64/amd64/pmap.c
  stable/10/sys/arm/arm/pmap-v6.c
  stable/10/sys/i386/i386/pmap.c
  stable/10/sys/i386/include/vmparam.h
  stable/10/sys/vm/vm_page.c
  stable/10/sys/vm/vm_phys.c
  stable/10/sys/vm/vm_phys.h
Directory Properties:
  stable/10/   (props changed)
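[Editorial illustration, not part of the commit.]  As context for the
pmap_init() changes in the diff below, the following stand-alone sketch
contrasts the two ways of finding the end of managed physical memory.  It is
a hypothetical user-space mock-up: the struct, the arrays, and the addresses
only imitate the kernel's phys_avail[] and vm_phys_segs[] bookkeeping and are
made up for the example, not taken from the commit.

/*
 * Hypothetical, simplified user-space mock-up of the kernel's phys_avail[]
 * and vm_phys_segs[] layout; the addresses below are invented.
 */
#include <stdio.h>

typedef unsigned long long vm_paddr_t;

struct vm_phys_seg {
        vm_paddr_t start;
        vm_paddr_t end;
};

#define NBPDR           (2ULL << 20)            /* amd64 superpage size (2MB) */
#define howmany(x, y)   (((x) + ((y) - 1)) / (y))

/* phys_avail[] is a zero-terminated list of {start, end} pairs. */
static vm_paddr_t phys_avail[] = {
        0x001000000ULL, 0x0bfe00000ULL,
        0x100000000ULL, 0x140000000ULL,
        0, 0
};

/* vm_phys_segs[] holds one entry per segment; vm_phys_nsegs counts them. */
static struct vm_phys_seg vm_phys_segs[] = {
        { 0x001000000ULL, 0x0bfe00000ULL },
        { 0x100000000ULL, 0x140000000ULL }
};
static int vm_phys_nsegs = 2;

int
main(void)
{
        vm_paddr_t end_old, end_new;
        int i;

        /* Old pmap_init(): walk phys_avail[] to its terminating zero pair. */
        for (i = 0; phys_avail[i + 1]; i += 2)
                ;
        end_old = phys_avail[(i - 2) + 1];

        /* New pmap_init(): the last vm_phys_segs[] entry already has it. */
        end_new = vm_phys_segs[vm_phys_nsegs - 1].end;

        /* round_2mpage(end) / NBPDR in the kernel == howmany(end, NBPDR). */
        printf("pv_npg via phys_avail[]:   %llu\n", howmany(end_old, NBPDR));
        printf("pv_npg via vm_phys_segs[]: %llu\n", howmany(end_new, NBPDR));
        return (0);
}

Both lookups print 2560 for this example layout; the vm_phys_segs[] form
simply avoids re-walking the array, which is the "a little easier" point made
in the log message above.
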
Modified: stable/10/sys/amd64/amd64/pmap.c
==============================================================================
--- stable/10/sys/amd64/amd64/pmap.c    Fri Jan 2 17:36:07 2015    (r276545)
+++ stable/10/sys/amd64/amd64/pmap.c    Fri Jan 2 17:45:52 2015    (r276546)
@@ -130,6 +130,7 @@ __FBSDID("$FreeBSD$");
 #include
 #include
 #include
+#include <vm/vm_phys.h>
 #include
 #include
 #include
@@ -836,6 +837,15 @@ pmap_bootstrap(vm_paddr_t *firstaddr)
         */
        create_pagetables(firstaddr);
 
+       /*
+        * Add a physical memory segment (vm_phys_seg) corresponding to the
+        * preallocated kernel page table pages so that vm_page structures
+        * representing these pages will be created.  The vm_page structures
+        * are required for promotion of the corresponding kernel virtual
+        * addresses to superpage mappings.
+        */
+       vm_phys_add_seg(KPTphys, KPTphys + ptoa(nkpt));
+
        virtual_avail = (vm_offset_t) KERNBASE + *firstaddr;
        virtual_avail = pmap_kmem_choose(virtual_avail);
 
@@ -1060,8 +1070,7 @@ pmap_init(void)
        /*
         * Calculate the size of the pv head table for superpages.
         */
-       for (i = 0; phys_avail[i + 1]; i += 2);
-       pv_npg = round_2mpage(phys_avail[(i - 2) + 1]) / NBPDR;
+       pv_npg = howmany(vm_phys_segs[vm_phys_nsegs - 1].end, NBPDR);
 
        /*
         * Allocate memory for the pv head table for superpages.

Modified: stable/10/sys/arm/arm/pmap-v6.c
==============================================================================
--- stable/10/sys/arm/arm/pmap-v6.c    Fri Jan 2 17:36:07 2015    (r276545)
+++ stable/10/sys/arm/arm/pmap-v6.c    Fri Jan 2 17:45:52 2015    (r276546)
@@ -172,6 +172,7 @@ __FBSDID("$FreeBSD$");
 #include
 #include
 #include
+#include <vm/vm_phys.h>
 #include
 #include
 
@@ -1342,9 +1343,10 @@ pmap_init(void)
 
        /*
         * Calculate the size of the pv head table for superpages.
+        * Handle the possibility that "vm_phys_segs[...].end" is zero.
         */
-       for (i = 0; phys_avail[i + 1]; i += 2);
-       pv_npg = round_1mpage(phys_avail[(i - 2) + 1]) / NBPDR;
+       pv_npg = trunc_1mpage(vm_phys_segs[vm_phys_nsegs - 1].end -
+           PAGE_SIZE) / NBPDR + 1;
 
        /*
         * Allocate memory for the pv head table for superpages.

Modified: stable/10/sys/i386/i386/pmap.c
==============================================================================
--- stable/10/sys/i386/i386/pmap.c    Fri Jan 2 17:36:07 2015    (r276545)
+++ stable/10/sys/i386/i386/pmap.c    Fri Jan 2 17:45:52 2015    (r276546)
@@ -133,6 +133,7 @@ __FBSDID("$FreeBSD$");
 #include
 #include
 #include
+#include <vm/vm_phys.h>
 #include
 #include
 #include
@@ -374,6 +375,15 @@ pmap_bootstrap(vm_paddr_t firstaddr)
        int i;
 
        /*
+        * Add a physical memory segment (vm_phys_seg) corresponding to the
+        * preallocated kernel page table pages so that vm_page structures
+        * representing these pages will be created.  The vm_page structures
+        * are required for promotion of the corresponding kernel virtual
+        * addresses to superpage mappings.
+        */
+       vm_phys_add_seg(KPTphys, KPTphys + ptoa(nkpt));
+
+       /*
         * Initialize the first available kernel virtual address.  However,
         * using "firstaddr" may waste a few pages of the kernel virtual
         * address space, because locore may not have mapped every physical
@@ -778,9 +788,10 @@ pmap_init(void)
 
        /*
         * Calculate the size of the pv head table for superpages.
+        * Handle the possibility that "vm_phys_segs[...].end" is zero.
         */
-       for (i = 0; phys_avail[i + 1]; i += 2);
-       pv_npg = round_4mpage(phys_avail[(i - 2) + 1]) / NBPDR;
+       pv_npg = trunc_4mpage(vm_phys_segs[vm_phys_nsegs - 1].end -
+           PAGE_SIZE) / NBPDR + 1;
 
        /*
         * Allocate memory for the pv head table for superpages.

Modified: stable/10/sys/i386/include/vmparam.h
==============================================================================
--- stable/10/sys/i386/include/vmparam.h    Fri Jan 2 17:36:07 2015    (r276545)
+++ stable/10/sys/i386/include/vmparam.h    Fri Jan 2 17:45:52 2015    (r276546)
@@ -64,9 +64,15 @@
 #endif
 
 /*
- * The physical address space is densely populated.
+ * Choose between DENSE and SPARSE based on whether lower execution time or
+ * lower kernel address space consumption is desired.  Under PAE, kernel
+ * address space is often in short supply.
  */
+#ifdef PAE
+#define VM_PHYSSEG_SPARSE
+#else
 #define VM_PHYSSEG_DENSE
+#endif
 
 /*
  * The number of PHYSSEG entries must be one greater than the number

Modified: stable/10/sys/vm/vm_page.c
==============================================================================
--- stable/10/sys/vm/vm_page.c    Fri Jan 2 17:36:07 2015    (r276545)
+++ stable/10/sys/vm/vm_page.c    Fri Jan 2 17:45:52 2015    (r276546)
@@ -307,9 +307,23 @@ vm_page_startup(vm_offset_t vaddr)
                phys_avail[i + 1] = trunc_page(phys_avail[i + 1]);
        }
 
+#ifdef XEN
+       /*
+        * There is no obvious reason why i386 PV Xen needs vm_page structs
+        * created for these pseudo-physical addresses.  XXX
+        */
+       vm_phys_add_seg(0, phys_avail[0]);
+#endif
+
        low_water = phys_avail[0];
        high_water = phys_avail[1];
 
+       for (i = 0; i < vm_phys_nsegs; i++) {
+               if (vm_phys_segs[i].start < low_water)
+                       low_water = vm_phys_segs[i].start;
+               if (vm_phys_segs[i].end > high_water)
+                       high_water = vm_phys_segs[i].end;
+       }
        for (i = 0; phys_avail[i + 1]; i += 2) {
                vm_paddr_t size = phys_avail[i + 1] - phys_avail[i];
 
@@ -323,10 +337,6 @@ vm_page_startup(vm_offset_t vaddr)
                        high_water = phys_avail[i + 1];
        }
 
-#ifdef XEN
-       low_water = 0;
-#endif
-
        end = phys_avail[biggestone+1];
 
        /*
@@ -394,6 +404,10 @@ vm_page_startup(vm_offset_t vaddr)
        first_page = low_water / PAGE_SIZE;
 #ifdef VM_PHYSSEG_SPARSE
        page_range = 0;
+       for (i = 0; i < vm_phys_nsegs; i++) {
+               page_range += atop(vm_phys_segs[i].end -
+                   vm_phys_segs[i].start);
+       }
        for (i = 0; phys_avail[i + 1] != 0; i += 2)
                page_range += atop(phys_avail[i + 1] - phys_avail[i]);
 #elif defined(VM_PHYSSEG_DENSE)
@@ -436,6 +450,13 @@ vm_page_startup(vm_offset_t vaddr)
        phys_avail[biggestone + 1] = new_end;
 
        /*
+        * Add physical memory segments corresponding to the available
+        * physical pages.
+        */
+       for (i = 0; phys_avail[i + 1] != 0; i += 2)
+               vm_phys_add_seg(phys_avail[i], phys_avail[i + 1]);
+
+       /*
         * Clear all of the page structures
         */
        bzero((caddr_t) vm_page_array, page_range * sizeof(struct vm_page));

Modified: stable/10/sys/vm/vm_phys.c
==============================================================================
--- stable/10/sys/vm/vm_phys.c    Fri Jan 2 17:36:07 2015    (r276545)
+++ stable/10/sys/vm/vm_phys.c    Fri Jan 2 17:45:52 2015    (r276546)
@@ -246,29 +246,19 @@ static void
 _vm_phys_create_seg(vm_paddr_t start, vm_paddr_t end, int flind, int domain)
 {
        struct vm_phys_seg *seg;
-#ifdef VM_PHYSSEG_SPARSE
-       long pages;
-       int segind;
 
-       pages = 0;
-       for (segind = 0; segind < vm_phys_nsegs; segind++) {
-               seg = &vm_phys_segs[segind];
-               pages += atop(seg->end - seg->start);
-       }
-#endif
        KASSERT(vm_phys_nsegs < VM_PHYSSEG_MAX,
            ("vm_phys_create_seg: increase VM_PHYSSEG_MAX"));
        KASSERT(domain < vm_ndomains,
            ("vm_phys_create_seg: invalid domain provided"));
        seg = &vm_phys_segs[vm_phys_nsegs++];
+       while (seg > vm_phys_segs && (seg - 1)->start >= end) {
+               *seg = *(seg - 1);
+               seg--;
+       }
        seg->start = start;
        seg->end = end;
        seg->domain = domain;
-#ifdef VM_PHYSSEG_SPARSE
-       seg->first_page = &vm_page_array[pages];
-#else
-       seg->first_page = PHYS_TO_VM_PAGE(start);
-#endif
        seg->free_queues = &vm_phys_free_queues[domain][flind];
 }
 
@@ -302,47 +292,68 @@ vm_phys_create_seg(vm_paddr_t start, vm_
 }
 
 /*
- * Initialize the physical memory allocator.
+ * Add a physical memory segment.
  */
 void
-vm_phys_init(void)
+vm_phys_add_seg(vm_paddr_t start, vm_paddr_t end)
 {
-       struct vm_freelist *fl;
-       int dom, flind, i, oind, pind;
 
-       for (i = 0; phys_avail[i + 1] != 0; i += 2) {
+       KASSERT((start & PAGE_MASK) == 0,
+           ("vm_phys_define_seg: start is not page aligned"));
+       KASSERT((end & PAGE_MASK) == 0,
+           ("vm_phys_define_seg: end is not page aligned"));
 #ifdef VM_FREELIST_ISADMA
-               if (phys_avail[i] < 16777216) {
-                       if (phys_avail[i + 1] > 16777216) {
-                               vm_phys_create_seg(phys_avail[i], 16777216,
-                                   VM_FREELIST_ISADMA);
-                               vm_phys_create_seg(16777216, phys_avail[i + 1],
-                                   VM_FREELIST_DEFAULT);
-                       } else {
-                               vm_phys_create_seg(phys_avail[i],
-                                   phys_avail[i + 1], VM_FREELIST_ISADMA);
-                       }
-                       if (VM_FREELIST_ISADMA >= vm_nfreelists)
-                               vm_nfreelists = VM_FREELIST_ISADMA + 1;
+       if (start < 16777216) {
+               if (end > 16777216) {
+                       vm_phys_create_seg(start, 16777216,
+                           VM_FREELIST_ISADMA);
+                       vm_phys_create_seg(16777216, end, VM_FREELIST_DEFAULT);
                } else
+                       vm_phys_create_seg(start, end, VM_FREELIST_ISADMA);
+               if (VM_FREELIST_ISADMA >= vm_nfreelists)
+                       vm_nfreelists = VM_FREELIST_ISADMA + 1;
+       } else
 #endif
 #ifdef VM_FREELIST_HIGHMEM
-               if (phys_avail[i + 1] > VM_HIGHMEM_ADDRESS) {
-                       if (phys_avail[i] < VM_HIGHMEM_ADDRESS) {
-                               vm_phys_create_seg(phys_avail[i],
-                                   VM_HIGHMEM_ADDRESS, VM_FREELIST_DEFAULT);
-                               vm_phys_create_seg(VM_HIGHMEM_ADDRESS,
-                                   phys_avail[i + 1], VM_FREELIST_HIGHMEM);
-                       } else {
-                               vm_phys_create_seg(phys_avail[i],
-                                   phys_avail[i + 1], VM_FREELIST_HIGHMEM);
-                       }
-                       if (VM_FREELIST_HIGHMEM >= vm_nfreelists)
-                               vm_nfreelists = VM_FREELIST_HIGHMEM + 1;
+       if (end > VM_HIGHMEM_ADDRESS) {
+               if (start < VM_HIGHMEM_ADDRESS) {
+                       vm_phys_create_seg(start, VM_HIGHMEM_ADDRESS,
+                           VM_FREELIST_DEFAULT);
+                       vm_phys_create_seg(VM_HIGHMEM_ADDRESS, end,
+                           VM_FREELIST_HIGHMEM);
                } else
+                       vm_phys_create_seg(start, end, VM_FREELIST_HIGHMEM);
+               if (VM_FREELIST_HIGHMEM >= vm_nfreelists)
+                       vm_nfreelists = VM_FREELIST_HIGHMEM + 1;
+       } else
+#endif
+       vm_phys_create_seg(start, end, VM_FREELIST_DEFAULT);
+}
+
+/*
+ * Initialize the physical memory allocator.
+ */
+void
+vm_phys_init(void)
+{
+       struct vm_freelist *fl;
+       struct vm_phys_seg *seg;
+#ifdef VM_PHYSSEG_SPARSE
+       long pages;
+#endif
+       int dom, flind, oind, pind, segind;
+
+#ifdef VM_PHYSSEG_SPARSE
+       pages = 0;
+#endif
+       for (segind = 0; segind < vm_phys_nsegs; segind++) {
+               seg = &vm_phys_segs[segind];
+#ifdef VM_PHYSSEG_SPARSE
+               seg->first_page = &vm_page_array[pages];
+               pages += atop(seg->end - seg->start);
+#else
+               seg->first_page = PHYS_TO_VM_PAGE(seg->start);
 #endif
-               vm_phys_create_seg(phys_avail[i], phys_avail[i + 1],
-                   VM_FREELIST_DEFAULT);
        }
        for (dom = 0; dom < vm_ndomains; dom++) {
                for (flind = 0; flind < vm_nfreelists; flind++) {

Modified: stable/10/sys/vm/vm_phys.h
==============================================================================
--- stable/10/sys/vm/vm_phys.h    Fri Jan 2 17:36:07 2015    (r276545)
+++ stable/10/sys/vm/vm_phys.h    Fri Jan 2 17:45:52 2015    (r276546)
@@ -69,6 +69,7 @@ extern int vm_phys_nsegs;
  * The following functions are only to be used by the virtual memory system.
  */
 void vm_phys_add_page(vm_paddr_t pa);
+void vm_phys_add_seg(vm_paddr_t start, vm_paddr_t end);
 vm_page_t vm_phys_alloc_contig(u_long npages, vm_paddr_t low, vm_paddr_t high,
     u_long alignment, vm_paddr_t boundary);
 vm_page_t vm_phys_alloc_freelist_pages(int flind, int pool, int order);
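
[Editorial illustration, not part of the commit.]  The i386 and arm pmap_init()
hunks above add the comment "Handle the possibility that vm_phys_segs[...].end
is zero" and switch to "trunc_?mpage(end - PAGE_SIZE) / NBPDR + 1".  The sketch
below is a hypothetical stand-alone demonstration of the case that wording
points at; it assumes a 32-bit vm_paddr_t and the non-PAE i386 constants
(4KB pages, 4MB superpages), which are assumptions of the example rather than
facts stated in the diff.  The arm case is analogous with its 1MB constants.

/*
 * Hypothetical check of the wraparound case: with a 32-bit vm_paddr_t, a
 * segment that ends exactly at 4GB stores end == 0.
 */
#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE       4096u
#define NBPDR           (4u * 1024 * 1024)      /* 4MB superpage */
#define trunc_4mpage(x) ((x) & ~(NBPDR - 1))
#define howmany(x, y)   (((x) + ((y) - 1)) / (y))

int
main(void)
{
        uint32_t end = 0;       /* 0x100000000 truncated to 32 bits */

        /* Naive form: counts zero superpages when end has wrapped to 0. */
        printf("howmany(end, NBPDR)                       = %u\n",
            howmany(end, NBPDR));

        /* Form used above: superpage index of the last page, plus one. */
        printf("trunc_4mpage(end - PAGE_SIZE) / NBPDR + 1 = %u\n",
            trunc_4mpage(end - PAGE_SIZE) / NBPDR + 1);
        return (0);
}

Run in user space, the first line prints 0 and the second prints 1024, the
number of 4MB superpages below 4GB, so the reworked expression stays correct
even when the segment's end address wraps to zero.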