Date:      Wed, 21 Sep 2016 09:11:24 +0200
From:      Julien Charbon <jch@freebsd.org>
To:        Slawa Olhovchenkov <slw@zxy.spb.ru>
Cc:        Konstantin Belousov <kostikbel@gmail.com>, freebsd-stable@FreeBSD.org, hiren panchasara <hiren@strugglingcoder.info>
Subject:   Re: 11.0 stuck on high network load
Message-ID:  <f644cd52-4377-aa90-123a-3a2887972bbc@freebsd.org>
In-Reply-To: <20160920202633.GQ2840@zxy.spb.ru>
References:  <20160905014612.GA42393@strugglingcoder.info> <20160914213503.GJ2840@zxy.spb.ru> <20160915085938.GN38409@kib.kiev.ua> <20160915090633.GS2840@zxy.spb.ru> <20160916181839.GC2960@zxy.spb.ru> <20160916183053.GL9397@strugglingcoder.info> <20160916190330.GG2840@zxy.spb.ru> <78cbcdc9-f565-1046-c157-2ddd8fcccc62@freebsd.org> <20160919204328.GN2840@zxy.spb.ru> <8ba75d6e-4f01-895e-0aed-53c6c6692cb9@freebsd.org> <20160920202633.GQ2840@zxy.spb.ru>


 Hi Slawa,

On 9/20/16 10:26 PM, Slawa Olhovchenkov wrote:
> On Tue, Sep 20, 2016 at 10:00:25PM +0200, Julien Charbon wrote:
>> On 9/19/16 10:43 PM, Slawa Olhovchenkov wrote:
>>> On Mon, Sep 19, 2016 at 10:32:13PM +0200, Julien Charbon wrote:
>>>>
>>>>> @ CPU_CLK_UNHALTED_CORE [4653445 samples]
>>>>>
>>>>> 51.86%  [2413083]  lock_delay @ /boot/kernel.VSTREAM/kernel
>>>>>  100.0%  [2413083]   __rw_wlock_hard
>>>>>   100.0%  [2413083]    tcp_tw_2msl_scan
>>>>>    99.99%  [2412958]     pfslowtimo
>>>>>     100.0%  [2412958]      softclock_call_cc
>>>>>      100.0%  [2412958]       softclock
>>>>>       100.0%  [2412958]        intr_event_execute_handlers
>>>>>        100.0%  [2412958]         ithread_loop
>>>>>         100.0%  [2412958]          fork_exit
>>>>>    00.01%  [125]         tcp_twstart
>>>>>     100.0%  [125]          tcp_do_segment
>>>>>      100.0%  [125]           tcp_input
>>>>>       100.0%  [125]            ip_input
>>>>>        100.0%  [125]             swi_net
>>>>>         100.0%  [125]              intr_event_execute_handlers
>>>>>          100.0%  [125]               ithread_loop
>>>>>           100.0%  [125]                fork_exit
>>>>
>>>>  The only write lock tcp_tw_2msl_scan() tries to get is an
>>>> INP_WLOCK(inp).  Thus here, tcp_tw_2msl_scan() seems to be stuck
>>>> spinning on INP_WLOCK (or pfslowtimo() is going crazy and calling
>>>> tcp_tw_2msl_scan() at a high rate, but that would be quite unexpected).
>>>>
>>>>  Thus my hypothesis is that something is holding the INP_WLOCK and not
>>>> releasing it, and tcp_tw_2msl_scan() is spinning on it.
>>>>
>>>>  If you can, could you compile the kernel with below options:
>>>>
>>>> options        DDB                     # Support DDB.
>>>> options        DEADLKRES               # Enable the deadlock resolver
>>>> options        INVARIANTS              # Enable calls of extra sanity
>>>> checking
>>>> options        INVARIANT_SUPPORT       # Extra sanity checks of internal
>>>> structures, required by INVARIANTS
>>>> options        WITNESS                 # Enable checks to detect
>>>> deadlocks and cycles
>>>> options        WITNESS_SKIPSPIN        # Don't run witness on spinlocks
>>>> for speed
>>>
>>> Currently this host runs with 100% CPU load (on all cores), i.e.
>>> enabling WITNESS will significantly drop performance.
>>> Can I use only a subset of these options?
>>>
>>> Also, I may have trouble entering DDB in this case.
>>> Maybe kgdb will succeed (not tried yet)?
>>
>>  While these kernel options will certainly slow down your kernel, they
>> might also find the root cause of your issue before you reach the point
>> where you have 100% CPU load on all cores (thanks to INVARIANTS).  I
>> would suggest:
> 
> Hmmm, maybe I was not clear.
> This host runs with 100% CPU load at peak hours as normal operation;
> it serves 2x10G of traffic.  This CPU load is not the result of a lock
> issue; that is not our case.  And this is why I am afraid to enable
> WITNESS -- I fear the performance drop.
> 
> This lock issue happens irregularly and may be caused by another issue
> (nginx crashed).  In this case about 1/3 of the cores have 100% CPU
> load, perhaps because of this lock -- I can trace only one core and
> need more than an hour for that (there may be a different trace on the
> other cores, I can't guarantee anything).

 I see; if you are running in production, WITNESS might indeed be
impractical for you.  In that case, to get more information before
resorting to WITNESS, I would suggest:

 #0: Do a lock profiling:

https://www.freebsd.org/cgi/man.cgi?query=LOCK_PROFILING

options LOCK_PROFILING

 Example of usage:

# Run
$ sudo sysctl debug.lock.prof.enable=1
$ sleep 10
$ sudo sysctl debug.lock.prof.enable=0

# Get results
$ sysctl debug.lock.prof.stats | head -2; \
  sysctl debug.lock.prof.stats | sort -n -k 4 -r
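 If it helps, you can also reset the profiling counters right before the
capture window, so samples accumulated since boot do not pollute the
result (the debug.lock.prof.reset knob is documented in LOCK_PROFILING(9);
a sketch to adapt, not a definitive recipe):

```shell
# Clear previously accumulated lock profiling samples, then capture a
# fresh 10-second window around the problematic period.
sudo sysctl debug.lock.prof.reset=1
sudo sysctl debug.lock.prof.enable=1
sleep 10
sudo sysctl debug.lock.prof.enable=0
```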

 You can also use DTrace and lockstat (especially with the lockstat -s
option):

https://wiki.freebsd.org/DTrace/One-Liners#Kernel_Locks
https://www.freebsd.org/cgi/man.cgi?query=lockstat&manpath=FreeBSD+11.0-RELEASE

 But I am less familiar with the DTrace/lockstat tools.
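 For example (assuming the lockstat flags described in the lockstat(1)
man page; adjust durations and stack depth to taste), a 10-second capture
with caller stacks might look like:

```shell
# Record lock events system-wide for 10 seconds, keeping 8-frame stack
# traces so you can see who is sitting on the contended lock.
sudo lockstat -s 8 sleep 10

# Restrict the output to contention events only, which is usually the
# interesting subset when hunting a stuck INP_WLOCK.
sudo lockstat -C -s 8 sleep 10
```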

>>  #1. Try the above kernel options at least once, and see what you can get.
> 
> OK, I will try this after some time.
> 
>>  #2. If #1 is a total failure, try the patch below:  It won't solve
>> anything; it just makes tcp_tw_2msl_scan() less greedy when there is
>> contention on the INP write lock.  If it makes the debugging more
>> feasible, continue to #3.
> 
> OK, thanks.
> What is the purpose of not skipping a locked tcptw in this loop?

 If I understand your question correctly:  According to your pmcstat
result, tcp_tw_2msl_scan() currently struggles with a write lock
(__rw_wlock_hard), and the only write lock used by tcp_tw_2msl_scan() is
INP_WLOCK.  There is no sign of contention on TW_RLOCK(V_tw_lock) currently.

51.86%  [2413083]  lock_delay @ /boot/kernel.VSTREAM/kernel
 100.0%  [2413083]   __rw_wlock_hard
  100.0%  [2413083]    tcp_tw_2msl_scan
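
 For what it's worth, the "less greedy" idea in #2 boils down to something
like the C-style pseudocode below.  INP_TRY_WLOCK() is a real macro from
netinet/in_pcb.h, but the control flow here is only an illustrative sketch
of the approach, not the actual patch:

```c
/*
 * Sketch only: walk the 2MSL timewait queue, but back off instead of
 * blocking when an inpcb write lock is contended.  The callout simply
 * retries the remaining entries on its next tick, so a long-held
 * INP_WLOCK no longer makes the scan spin in __rw_wlock_hard.
 */
for each tw on V_twq_2msl (under TW_RLOCK):
        if (INP_TRY_WLOCK(tw->tw_inpcb) == 0)
                break;  /* lock held elsewhere: stop scanning, retry later */
        /* ... expire the timewait state as usual, then INP_WUNLOCK ... */
```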

--
Julien


