Date:      Wed, 30 Jul 2014 16:17:29 -0500
From:      Alan Cox <alc@rice.edu>
To:        Andreas Tobler <andreast@FreeBSD.org>, Alan Cox <alc@FreeBSD.org>, src-committers@FreeBSD.org, svn-src-all@FreeBSD.org, svn-src-head@FreeBSD.org
Subject:   Re: svn commit: r269134 - head/sys/vm
Message-ID:  <53D960E9.50905@rice.edu>
In-Reply-To: <53D95265.9010406@FreeBSD.org>
References:  <201407261810.s6QIAIIj049439@svn.freebsd.org> <53D9406A.3060101@FreeBSD.org> <53D94BAF.8050102@rice.edu> <53D94D73.3050808@rice.edu> <53D95265.9010406@FreeBSD.org>

On 07/30/2014 15:15, Andreas Tobler wrote:
> On 30.07.14 21:54, Alan Cox wrote:
>> On 07/30/2014 14:46, Alan Cox wrote:
>>> On 07/30/2014 13:58, Andreas Tobler wrote:
>>>> Hi Alan,
>>>>
>>>> On 26.07.14 20:10, Alan Cox wrote:
>>>>> Author: alc
>>>>> Date: Sat Jul 26 18:10:18 2014
>>>>> New Revision: 269134
>>>>> URL: http://svnweb.freebsd.org/changeset/base/269134
>>>>>
>>>>> Log:
>>>>>     When unwiring a region of an address space, do not assume that the
>>>>>     underlying physical pages are mapped by the pmap.  If, for example,
>>>>>     the application has performed an mprotect(..., PROT_NONE) on any
>>>>>     part of the wired region, then those pages will no longer be mapped
>>>>>     by the pmap.  So, using the pmap to look up the wired pages in
>>>>>     order to unwire them doesn't always work, and when it doesn't work
>>>>>     wired pages are leaked.
>>>>>
>>>>>     To avoid the leak, introduce and use a new function
>>>>>     vm_object_unwire() that locates the wired pages by traversing the
>>>>>     object and its backing objects.
>>>>>
>>>>>     At the same time, switch from using pmap_change_wiring() to the
>>>>>     recently introduced function pmap_unwire() for unwiring the
>>>>>     region's mappings.  pmap_unwire() is faster, because it operates
>>>>>     on a range of virtual addresses rather than a single virtual page
>>>>>     at a time.  Moreover, by operating on a range, it is superpage
>>>>>     friendly.  It doesn't waste time performing unnecessary demotions.
>>>>>
>>>>>     Reported by:    markj
>>>>>     Reviewed by:    kib
>>>>>     Tested by:      pho, jmg (arm)
>>>>>     Sponsored by:   EMC / Isilon Storage Division
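
For concreteness, a minimal userland sketch of the sequence the log
describes -- a hypothetical repro written here for illustration, not a
test from the tree, and mlock() may require a raised RLIMIT_MEMLOCK:

    #include <sys/mman.h>
    #include <err.h>
    #include <unistd.h>

    int
    main(void)
    {
            long psz = sysconf(_SC_PAGESIZE);
            char *p = mmap(NULL, 4 * psz, PROT_READ | PROT_WRITE,
                MAP_ANON | MAP_PRIVATE, -1, 0);

            if (p == MAP_FAILED)
                    err(1, "mmap");
            /* Wire the whole region. */
            if (mlock(p, 4 * psz) != 0)
                    err(1, "mlock");
            /* Drop one page's pmap mapping inside the wired region. */
            if (mprotect(p + psz, psz, PROT_NONE) != 0)
                    err(1, "mprotect");
            /*
             * Unwire.  Before this change, the unwire path consulted the
             * pmap, never found the PROT_NONE page, and leaked its
             * wiring; vm_object_unwire() instead walks the object and
             * its backing objects to find the pages.
             */
            if (munlock(p, 4 * psz) != 0)
                    err(1, "munlock");
            return (0);
    }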
>>>> This commit makes my 32- and 64-bit PowerMacs panic.
>>>> Unfortunately, I'm not able to give you a backtrace in the form of
>>>> a textdump or a core dump.
>>>>
>>>> The only thing I have is this picture:
>>>>
>>>> http://people.freebsd.org/~andreast/r269134_panic.jpg
>>>>
>>>> Exactly this revision introduces the panic, and it also breaks the
>>>> textdump/coredump facility.
>>>>
>>>> How can I help debugging?
>>>>
>>> It appears to me that moea64_pvo_enter() had a pre-existing bug
>>> that got tickled by this change.  Specifically, moea64_pvo_enter()
>>> doesn't set the PVO_WIRED flag when an unwired mapping already
>>> exists.  It just returns with the mapping still in an unwired
>>> state.  Consequently, when pmap_unwire() finally runs, it doesn't
>>> find a wired mapping.
>>>
>>> Try this:
>>>
>>> Index: powerpc/aim/mmu_oea64.c
>>> ===================================================================
>>> --- powerpc/aim/mmu_oea64.c     (revision 269127)
>>> +++ powerpc/aim/mmu_oea64.c     (working copy)
>>> @@ -2274,7 +2274,8 @@ moea64_pvo_enter(mmu_t mmu, pmap_t pm, uma_zone_t
>>>                  if (pvo->pvo_pmap == pm && PVO_VADDR(pvo) == va) {
>>>                          if ((pvo->pvo_pte.lpte.pte_lo & LPTE_RPGN) == pa &&
>>>                              (pvo->pvo_pte.lpte.pte_lo & (LPTE_NOEXEC | LPTE_PP))
>>> -                           == (pte_lo & (LPTE_NOEXEC | LPTE_PP))) {
>>> +                           == (pte_lo & (LPTE_NOEXEC | LPTE_PP)) &&
>>> +                           ((pvo->pvo_vaddr ^ flags) & PVO_WIRED)) {
>>>                                  if (!(pvo->pvo_pte.lpte.pte_hi & LPTE_VALID)) {
>>>                                          /* Re-insert if spilled */
>>>                                          i = MOEA64_PTE_INSERT(mmu, ptegidx,
>>
>> The new conditional test needs to be inverted.  Try this instead:
>>
>> Index: powerpc/aim/mmu_oea64.c
>> ===================================================================
>> --- powerpc/aim/mmu_oea64.c     (revision 269127)
>> +++ powerpc/aim/mmu_oea64.c     (working copy)
>> @@ -2274,7 +2274,8 @@ moea64_pvo_enter(mmu_t mmu, pmap_t pm, uma_zone_t
>>                  if (pvo->pvo_pmap == pm && PVO_VADDR(pvo) == va) {
>>                          if ((pvo->pvo_pte.lpte.pte_lo & LPTE_RPGN) == pa &&
>>                              (pvo->pvo_pte.lpte.pte_lo & (LPTE_NOEXEC | LPTE_PP))
>> -                           == (pte_lo & (LPTE_NOEXEC | LPTE_PP))) {
>> +                           == (pte_lo & (LPTE_NOEXEC | LPTE_PP)) &&
>> +                           ((pvo->pvo_vaddr ^ flags) & PVO_WIRED) == 0) {
>>                                  if (!(pvo->pvo_pte.lpte.pte_hi & LPTE_VALID)) {
>>                                          /* Re-insert if spilled */
>>                                          i = MOEA64_PTE_INSERT(mmu, ptegidx,
>>
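
The corrected test relies on the XOR-and-mask idiom: the XOR of the
existing mapping's flag word and the caller's flags has PVO_WIRED set
exactly when the two disagree about wiring, so requiring it to be zero
accepts the early-match path only when the wiring already agrees.  A
standalone illustration (the flag value is copied here purely for the
example, not authoritative):

    #include <assert.h>

    #define PVO_WIRED 0x010 /* illustrative value only */

    int
    main(void)
    {
            int existing = PVO_WIRED;   /* mapping is currently wired */
            int requested = 0;          /* caller asks for unwired */

            /* Nonzero iff the two flag words disagree in PVO_WIRED... */
            assert(((existing ^ requested) & PVO_WIRED) != 0);
            /* ...so the patched condition no longer treats a wired
               request as matching an existing unwired mapping. */
            return (0);
    }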
>
>
> The panic persists, but the message is different:
>
> panic: moea64_pvo_to_pte: pvo 0x10147ea0 has invalid pte 0xb341180 in
> moea64_pteg_table but valid in pvo.
>

My attempted fix is doing something else wrong.  Do you have a stack trace?




