Date:      Mon, 18 Jan 2010 15:47:33 -1000 (HST)
From:      Jeff Roberson <jroberson@jroberson.net>
To:        Kohji Okuno <okuno.kohji@jp.panasonic.com>
Cc:        attilio@freebsd.org, freebsd-current@freebsd.org
Subject:   Re: Bug about sched_4bsd?
Message-ID:  <alpine.BSF.2.00.1001181544130.1027@desktop>
In-Reply-To: <20100119.103858.29593248145858473.okuno.kohji@jp.panasonic.com>
References:  <3bbf2fe11001171858o4568fe38l9b2db54ec9856b50@mail.gmail.com> <20100118.155352.59640143160034670.okuno.kohji@jp.panasonic.com> <3bbf2fe11001172306m69ff6544i3aaf05e2540136e1@mail.gmail.com> <20100119.103858.29593248145858473.okuno.kohji@jp.panasonic.com>

On Tue, 19 Jan 2010, Kohji Okuno wrote:

> Hello, Attilio,
>
> I think setpriority() can set the priority of sleeping threads.
> Is it really safe?

I agree; in this code path maybe_resched() is not properly locking
curthread.  curthread's td_lock will be &sched_lock, while the sleeping
thread's td_lock will be a sleepq lock.  I believe that since
&sched_lock is ordered after the container locks, it would be
sufficient to compare the two td_lock pointers and acquire sched_lock
if they are not equal.  Someone should look at the other
maybe_resched() callers, or add an assert that curthread->td_lock is
always &sched_lock in this function.
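
Something like the sketch below is what I have in mind.  It is
completely untested and only illustrates the idea; it assumes, as
noted above, that under 4BSD the running thread's td_lock is always
&sched_lock:

static void
maybe_resched(struct thread *td)
{

        THREAD_LOCK_ASSERT(td, MA_OWNED);
        if (td->td_priority < curthread->td_priority) {
                /*
                 * If td is on a sleepq the lock we hold is a sleepq
                 * chain lock, not &sched_lock, so nothing protects
                 * curthread's flags.  &sched_lock is ordered after
                 * the container locks, so taking it here is safe.
                 */
                if (td->td_lock != curthread->td_lock) {
                        mtx_lock_spin(&sched_lock);
                        curthread->td_flags |= TDF_NEEDRESCHED;
                        mtx_unlock_spin(&sched_lock);
                } else
                        curthread->td_flags |= TDF_NEEDRESCHED;
        }
}

Alternatively, a KASSERT() at the top of the function that
curthread->td_lock == &sched_lock would tell us whether any caller
actually reaches the unlocked case.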

Thanks,
Jeff

>
> Thank you,
> Kohji Okuno
>
>> 2010/1/18 Kohji Okuno <okuno.kohji@jp.panasonic.com>:
>>> Hello,
>>>
>>> Thank you, Attilio.
>>> I checked your patch, and I think it is the better fix.
>>> I tested it quickly, and it looks OK.
>>> # This problem does not occur easily :-<
>>>
>>>
>>> What do you think about maybe_resched()?
>>> I have never run into a problem with maybe_resched() myself, but I
>>> think that a race condition may occur.
>>>
>>> <<Back Trace>>
>>> sched_4bsd.c:     maybe_resched()
>>> sched_4bsd.c:     resetpriority_thread()
>>> sched_4bsd.c:     sched_nice()                  get thread_lock(td)
>>> kern_resource.c:  donice()
>>> kern_resource.c:  setpriority()                 get PROC_LOCK()
>>>
>>> static void
>>> maybe_resched(struct thread *td)
>>> {
>>>        THREAD_LOCK_ASSERT(td, MA_OWNED);
>>>        if (td->td_priority < curthread->td_priority)
>>>                curthread->td_flags |= TDF_NEEDRESCHED;
>>> }
>>>
>>> I think that when td->td_lock is not &sched_lock, curthread->td_lock
>>> is not held in maybe_resched().
>>
>> I didn't look closely at the maybe_resched() callers, but I think it
>> is ok.  The thread_lock() function works in a way that callers don't
>> need to know which container lock is in place at any particular
>> moment; there is always a guarantee that contenders will spin if the
>> lock on the struct can't be acquired.
>> In the case you outlined, something very particular was happening:
>> we acquired &sched_lock, but sched_lock was not the lock present on
>> td_lock.  That means all the other paths wanting to access td_lock
>> for that thread (via thread_lock()) were allowed to do so even
>> though we wanted to keep the critical path closed.
>>
>> Attilio
>>
>>
>> --
>> Peace can only be achieved by understanding - A. Einstein
>