From: Svatopluk Kraus
Date: Thu, 2 Nov 2017 14:08:38 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: svn commit: r325321 - head/sys/arm/arm

Author: skra
Date: Thu Nov 2 14:08:38 2017
New Revision: 325321
URL: https://svnweb.freebsd.org/changeset/base/325321

Log:
  Take race conditions into account when emulating the accessed or modified
  bits in the fast path of the common data/prefetch abort routine. Process
  these bits only if the related page table entries are consistent with the
  provided abort info. In case of inconsistency, do nothing and let the
  processor signal a new abort if one is still needed.

  The mapping related to an abort may change concurrently. The situation is
  more evident on multicore machines: a mapping may be removed on one core
  while still being used on another before the TLB flush has happened, the
  memory swapping process being one example. Or, two or more aborts may be
  signaled for the same page on several cores concurrently. While an abort
  on one core may cause a promotion of the related mapping, an abort on
  another core may then be inconsistent because the related mapping was
  already promoted. How real the issue is on a single-core machine is an
  open question, but it is better to play it safe even on such machines.

  This change may solve some of the rarely reported "PT2MAP abort" panics.
  The revision of pmap_fault() was initiated thanks to a stack backtrace
  provided by Bob Prohaska (fbsd at www.zefox.net).

  While here, the INVARIANTS block was changed. The previous check was of
  questionable value, as only one of the many entries in an L2 page table
  was checked.
  Reviewed by:	mmel
  MFC after:	3 weeks

Modified:
  head/sys/arm/arm/pmap-v6.c

Modified: head/sys/arm/arm/pmap-v6.c
==============================================================================
--- head/sys/arm/arm/pmap-v6.c	Thu Nov  2 13:49:08 2017	(r325320)
+++ head/sys/arm/arm/pmap-v6.c	Thu Nov  2 14:08:38 2017	(r325321)
@@ -6404,6 +6404,22 @@ pmap_fault(pmap_t pmap, vm_offset_t far, uint32_t fsr,
 	 */
 	PMAP_LOCK(pmap);
 
+#ifdef INVARIANTS
+	pte1 = pte1_load(pmap_pte1(pmap, far));
+	if (pte1_is_link(pte1)) {
+		/*
+		 * Check in advance that associated L2 page table is mapped into
+		 * PT2MAP space. Note that faulty access to not mapped L2 page
+		 * table is caught in more general check above where "far" is
+		 * checked that it does not lay in PT2MAP space. Note also that
+		 * L1 page table and PT2TAB always exist and are mapped.
+		 */
+		pte2 = pt2tab_load(pmap_pt2tab_entry(pmap, far));
+		if (!pte2_is_valid(pte2))
+			panic("%s: missing L2 page table (%p, %#x)",
+			    __func__, pmap, far);
+	}
+#endif
 #ifdef SMP
 	/*
 	 * Special treatment is due to break-before-make approach done when
@@ -6424,10 +6440,23 @@
 	 * for aborts from user mode.
 	 */
 	if (idx == FAULT_ACCESS_L2) {
-		pte2p = pt2map_entry(far);
-		pte2 = pte2_load(pte2p);
-		if (pte2_is_valid(pte2)) {
-			pte2_store(pte2p, pte2 | PTE2_A);
+		pte1 = pte1_load(pmap_pte1(pmap, far));
+		if (pte1_is_link(pte1)) {
+			/* L2 page table should exist and be mapped. */
+			pte2p = pt2map_entry(far);
+			pte2 = pte2_load(pte2p);
+			if (pte2_is_valid(pte2)) {
+				pte2_store(pte2p, pte2 | PTE2_A);
+				PMAP_UNLOCK(pmap);
+				return (KERN_SUCCESS);
+			}
+		} else {
+			/*
+			 * We got L2 access fault but PTE1 is not a link.
+			 * Probably some race happened, do nothing.
+			 */
+			CTR3(KTR_PMAP, "%s: FAULT_ACCESS_L2 - pmap %#x far %#x",
+			    __func__, pmap, far);
 			PMAP_UNLOCK(pmap);
 			return (KERN_SUCCESS);
 		}
@@ -6439,6 +6468,15 @@
 			pte1_store(pte1p, pte1 | PTE1_A);
 			PMAP_UNLOCK(pmap);
 			return (KERN_SUCCESS);
+		} else {
+			/*
+			 * We got L1 access fault but PTE1 is not section
+			 * mapping. Probably some race happened, do nothing.
+			 */
+			CTR3(KTR_PMAP, "%s: FAULT_ACCESS_L1 - pmap %#x far %#x",
+			    __func__, pmap, far);
+			PMAP_UNLOCK(pmap);
+			return (KERN_SUCCESS);
 		}
 	}
 
@@ -6451,12 +6489,25 @@
 	 * for aborts from user mode.
 	 */
 	if ((fsr & FSR_WNR) && (idx == FAULT_PERM_L2)) {
-		pte2p = pt2map_entry(far);
-		pte2 = pte2_load(pte2p);
-		if (pte2_is_valid(pte2) && !(pte2 & PTE2_RO) &&
-		    (pte2 & PTE2_NM)) {
-			pte2_store(pte2p, pte2 & ~PTE2_NM);
-			tlb_flush(trunc_page(far));
+		pte1 = pte1_load(pmap_pte1(pmap, far));
+		if (pte1_is_link(pte1)) {
+			/* L2 page table should exist and be mapped. */
+			pte2p = pt2map_entry(far);
+			pte2 = pte2_load(pte2p);
+			if (pte2_is_valid(pte2) && !(pte2 & PTE2_RO) &&
+			    (pte2 & PTE2_NM)) {
+				pte2_store(pte2p, pte2 & ~PTE2_NM);
+				tlb_flush(trunc_page(far));
+				PMAP_UNLOCK(pmap);
+				return (KERN_SUCCESS);
+			}
+		} else {
+			/*
+			 * We got L2 permission fault but PTE1 is not a link.
+			 * Probably some race happened, do nothing.
+			 */
+			CTR3(KTR_PMAP, "%s: FAULT_PERM_L2 - pmap %#x far %#x",
+			    __func__, pmap, far);
 			PMAP_UNLOCK(pmap);
 			return (KERN_SUCCESS);
 		}
@@ -6464,10 +6515,20 @@
 	if ((fsr & FSR_WNR) && (idx == FAULT_PERM_L1)) {
 		pte1p = pmap_pte1(pmap, far);
 		pte1 = pte1_load(pte1p);
-		if (pte1_is_section(pte1) && !(pte1 & PTE1_RO) &&
-		    (pte1 & PTE1_NM)) {
-			pte1_store(pte1p, pte1 & ~PTE1_NM);
-			tlb_flush(pte1_trunc(far));
+		if (pte1_is_section(pte1)) {
+			if (!(pte1 & PTE1_RO) && (pte1 & PTE1_NM)) {
+				pte1_store(pte1p, pte1 & ~PTE1_NM);
+				tlb_flush(pte1_trunc(far));
+				PMAP_UNLOCK(pmap);
+				return (KERN_SUCCESS);
+			}
+		} else {
+			/*
+			 * We got L1 permission fault but PTE1 is not section
+			 * mapping. Probably some race happened, do nothing.
+			 */
+			CTR3(KTR_PMAP, "%s: FAULT_PERM_L1 - pmap %#x far %#x",
+			    __func__, pmap, far);
 			PMAP_UNLOCK(pmap);
 			return (KERN_SUCCESS);
 		}
@@ -6478,33 +6539,6 @@
 	 * modify bits aborts, could be moved to ASM. Now we are
 	 * starting to deal with not fast aborts.
 	 */
-
-#ifdef INVARIANTS
-	/*
-	 * Read an entry in PT2TAB associated with both pmap and far.
-	 * It's safe because PT2TAB is always mapped.
-	 */
-	pte2 = pt2tab_load(pmap_pt2tab_entry(pmap, far));
-	if (pte2_is_valid(pte2)) {
-		/*
-		 * Now, when we know that L2 page table is allocated,
-		 * we can use PT2MAP to get L2 page table entry.
-		 */
-		pte2 = pte2_load(pt2map_entry(far));
-		if (pte2_is_valid(pte2)) {
-			/*
-			 * If L2 page table entry is valid, make sure that
-			 * L1 page table entry is valid too. Note that we
-			 * leave L2 page entries untouched when promoted.
-			 */
-			pte1 = pte1_load(pmap_pte1(pmap, far));
-			if (!pte1_is_valid(pte1)) {
-				panic("%s: missing L1 page entry (%p, %#x)",
-				    __func__, pmap, far);
-			}
-		}
-	}
-#endif
 	PMAP_UNLOCK(pmap);
 	return (KERN_FAILURE);
 }
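
As a standalone illustration of the pattern this change applies in pmap_fault()
(re-validate the level-1 descriptor before emulating the accessed bit in a
level-2 entry, and quietly back off when a race is detected), here is a minimal
userspace model. The two-level table layout, the PTE_* flags and the
handle_access_fault() helper below are invented for this sketch and are not the
pmap-v6.c API; in the kernel the retry happens naturally because the hardware
raises the abort again if the access is still not allowed.

/*
 * Toy model (not kernel code): a two-level "page table" where an L1 slot
 * either links to an L2 table or is invalid.  The fault handler emulates
 * the accessed bit only when the L1 descriptor is still consistent with
 * the fault it is asked to handle; otherwise it does nothing and lets the
 * caller (standing in for the CPU) retry and fault again if needed.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define	L2_NENTRIES	256
#define	PTE_VALID	0x1u
#define	PTE_ACCESSED	0x2u

struct l2_table {
	uint32_t pte[L2_NENTRIES];
};

struct l1_entry {
	struct l2_table *link;		/* NULL means "not a link" */
};

/*
 * Emulate the accessed bit for a fault on (l1e, l2_index).  Returns true
 * when the fault was handled, false when the tables changed under us and
 * the fault should simply be retried by the caller.
 */
static bool
handle_access_fault(struct l1_entry *l1e, unsigned l2_index)
{
	struct l2_table *l2;
	uint32_t pte;

	/* Re-check the L1 descriptor: it may have been unlinked meanwhile. */
	l2 = l1e->link;
	if (l2 == NULL)
		return (false);		/* race: do nothing, fault will recur */

	pte = l2->pte[l2_index];
	if ((pte & PTE_VALID) == 0)
		return (false);		/* race: entry vanished meanwhile */

	l2->pte[l2_index] = pte | PTE_ACCESSED;
	return (true);
}

int
main(void)
{
	struct l2_table *l2 = calloc(1, sizeof(*l2));
	struct l1_entry l1e = { .link = l2 };

	l2->pte[5] = PTE_VALID;
	printf("handled: %d\n", handle_access_fault(&l1e, 5));	/* prints 1 */

	/* Simulate a concurrent unmap between the abort and its handling. */
	l1e.link = NULL;
	printf("handled: %d\n", handle_access_fault(&l1e, 5));	/* prints 0 */

	free(l2);
	return (0);
}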