From: Marcel Moolenaar <marcel@FreeBSD.org>
Date: Fri, 24 Apr 2009 02:53:38 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: svn commit: r191445 - head/sys/powerpc/booke
Message-Id: <200904240253.n3O2rcbo006026@svn.freebsd.org>

Author: marcel
Date: Fri Apr 24 02:53:38 2009
New Revision: 191445
URL: http://svn.freebsd.org/changeset/base/191445

Log:
  Remove PTE_ISFAKE. While here remove code between "#if 0" and "#endif".
Modified: head/sys/powerpc/booke/pmap.c
==============================================================================
--- head/sys/powerpc/booke/pmap.c	Thu Apr 23 22:08:44 2009	(r191444)
+++ head/sys/powerpc/booke/pmap.c	Fri Apr 24 02:53:38 2009	(r191445)

@@ -757,27 +757,21 @@ pte_remove(mmu_t mmu, pmap_t pmap, vm_of
 	if (pte == NULL || !PTE_ISVALID(pte))
 		return (0);
 
-	/* Get vm_page_t for mapped pte. */
-	m = PHYS_TO_VM_PAGE(PTE_PA(pte));
-
 	if (PTE_ISWIRED(pte))
 		pmap->pm_stats.wired_count--;
 
-	if (!PTE_ISFAKE(pte)) {
-		/* Handle managed entry. */
-		if (PTE_ISMANAGED(pte)) {
+	/* Handle managed entry. */
+	if (PTE_ISMANAGED(pte)) {
+		/* Get vm_page_t for mapped pte. */
+		m = PHYS_TO_VM_PAGE(PTE_PA(pte));
 
-			/* Handle modified pages. */
-			if (PTE_ISMODIFIED(pte))
-				vm_page_dirty(m);
+		if (PTE_ISMODIFIED(pte))
+			vm_page_dirty(m);
 
-			/* Referenced pages. */
-			if (PTE_ISREFERENCED(pte))
-				vm_page_flag_set(m, PG_REFERENCED);
+		if (PTE_ISREFERENCED(pte))
+			vm_page_flag_set(m, PG_REFERENCED);
 
-			/* Remove pv_entry from pv_list. */
-			pv_remove(pmap, va, m);
-		}
+		pv_remove(pmap, va, m);
 	}
 
 	mtx_lock_spin(&tlbivax_mutex);

@@ -847,8 +841,6 @@ pte_enter(mmu_t mmu, pmap_t pmap, vm_pag
 			/* Create and insert pv entry. */
 			pv_insert(pmap, va, m);
 		}
-	} else {
-		flags |= PTE_FAKE;
 	}
 
 	pmap->pm_stats.resident_count++;

@@ -1297,23 +1289,7 @@ mmu_booke_kenter(mmu_t mmu, vm_offset_t
 	KASSERT(((va >= VM_MIN_KERNEL_ADDRESS) &&
 	    (va <= VM_MAX_KERNEL_ADDRESS)), ("mmu_booke_kenter: invalid va"));
 
-#if 0
-	/* assume IO mapping, set I, G bits */
-	flags = (PTE_G | PTE_I | PTE_FAKE);
-
-	/* if mapping is within system memory, do not set I, G bits */
-	for (i = 0; i < totalmem_regions_sz; i++) {
-		if ((pa >= totalmem_regions[i].mr_start) &&
-		    (pa < (totalmem_regions[i].mr_start +
-		    totalmem_regions[i].mr_size))) {
-			flags &= ~(PTE_I | PTE_G | PTE_FAKE);
-			break;
-		}
-	}
-#else
 	flags = 0;
-#endif
-
 	flags |= (PTE_SR | PTE_SW | PTE_SX | PTE_WIRED | PTE_VALID);
 	flags |= PTE_M;

@@ -1431,14 +1407,6 @@ mmu_booke_release(mmu_t mmu, pmap_t pmap
 	PMAP_LOCK_DESTROY(pmap);
 }
 
-#if 0
-/* Not needed, kernel page tables are statically allocated. */
-void
-mmu_booke_growkernel(vm_offset_t maxkvaddr)
-{
-}
-#endif
-
 /*
  * Insert the given physical page at the specified virtual address in the
  * target physical map with the protection requested. If specified the page

@@ -2031,18 +1999,6 @@ mmu_booke_copy_page(mmu_t mmu, vm_page_t
 	mtx_unlock(&copy_page_mutex);
 }
 
-#if 0
-/*
- * Remove all pages from specified address space, this aids process exit
- * speeds. This is much faster than mmu_booke_remove in the case of running
- * down an entire address space. Only works for the current pmap.
- */
-void
-mmu_booke_remove_pages(pmap_t pmap)
-{
-}
-#endif
-
 /*
  * mmu_booke_zero_page_idle zeros the specified hardware page by mapping it
  * into virtual memory and using bzero to clear its contents. This is intended