Date:      Wed, 12 Apr 2017 17:21:02 -0700
From:      Adrian Chadd <adrian.chadd@gmail.com>
To:        Slawa Olhovchenkov <slw@zxy.spb.ru>
Cc:        "stable@freebsd.org" <stable@freebsd.org>
Subject:   Re: Lock contention in AIO
Message-ID:  <CAJ-VmonmHDACnnOvVSwwQffnoUYO5q=aa-gferfaHXVUe7-1aQ@mail.gmail.com>
In-Reply-To: <20170321164227.GE86500@zxy.spb.ru>
References:  <20170321164227.GE86500@zxy.spb.ru>

It's the same pages, right? Is it just the refcounting locking that's
causing it?

I think the biggest win here is to figure out how to give pages a
lifecycle where the refcount can be incremented/decremented via atomics,
without grabbing a lock (obviously only while the count is > 1, i.e. not
in a state where a decrement could reach 0). That would make this
particular use case much faster.

(DragonFly BSD does this.)
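
[For illustration, a minimal userland sketch of that idea -- this is not
FreeBSD's actual vm_page code; the struct, field names, and the pthread
fallback lock are placeholders. References are taken with a CAS loop that
only succeeds while the count is already positive, so the hot path never
touches a lock; only the final 1 -> 0 transition drops into a locked slow
path.]

    /*
     * Sketch: lock-free refcount acquire/release with a locked slow path
     * for the 0 <-> nonzero lifecycle transitions.
     */
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <pthread.h>

    struct page {
        atomic_uint     refcount;       /* 0 means "owned by the free path" */
        pthread_mutex_t lifecycle_lock; /* slow path: free/reuse transitions */
    };

    /* Try to take a reference without locking; fail if the page may be dying. */
    static bool
    page_ref_acquire(struct page *p)
    {
        unsigned old = atomic_load_explicit(&p->refcount, memory_order_relaxed);

        while (old > 0) {
            if (atomic_compare_exchange_weak_explicit(&p->refcount, &old,
                old + 1, memory_order_acquire, memory_order_relaxed))
                return (true);  /* count stayed > 0: reference taken */
        }
        return (false);         /* count hit 0: caller must use the locked path */
    }

    /* Drop a reference; only the 1 -> 0 transition takes the lock. */
    static void
    page_ref_release(struct page *p)
    {
        if (atomic_fetch_sub_explicit(&p->refcount, 1,
            memory_order_release) == 1) {
            pthread_mutex_lock(&p->lifecycle_lock);
            /* ... page is now unreferenced; enqueue/free it here ... */
            pthread_mutex_unlock(&p->lifecycle_lock);
        }
    }

[Usage in this sketch: if page_ref_acquire() fails, the caller falls back
to taking the lifecycle lock and re-checking, since the page may be on its
way to the free queue.]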


-a


On 21 March 2017 at 09:42, Slawa Olhovchenkov <slw@zxy.spb.ru> wrote:
> I am seeing lock contention caused by aio reads (the same file segment
> read from multiple processes simultaneously):
>
> 07.74%  [26756]    lock_delay @ /boot/kernel/kernel
>  92.21%  [24671]     __mtx_lock_sleep
>   52.14%  [12864]      vm_page_enqueue
>    100.0%  [12864]       vm_fault_hold
>     87.71%  [11283]        vm_fault_quick_hold_pages
>      100.0%  [11283]         vn_io_fault1
>       100.0%  [11283]          vn_io_fault
>        99.88%  [11270]           aio_process_rw
>         100.0%  [11270]            aio_daemon
>          100.0%  [11270]             fork_exit
>        00.12%  [13]              dofileread
>         100.0%  [13]               kern_readv
>
> Is this a known problem?
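
[For context, a rough userland sketch of the kind of workload described
above: several processes repeatedly issuing POSIX aio_read() calls against
the same range of one file. The path, process count, iteration count, and
buffer size are arbitrary placeholders, not taken from the original report.]

    /* Sketch: N processes all aio_read() the same segment of one file. */
    #include <aio.h>
    #include <fcntl.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    #define NPROC 8                     /* number of reader processes */
    #define BUFSZ (128 * 1024)          /* size of the shared segment */

    int
    main(void)
    {
        for (int i = 0; i < NPROC; i++) {
            if (fork() == 0) {
                /* Each child hammers the same offset of the same file. */
                int fd = open("/tmp/testfile", O_RDONLY);   /* placeholder path */
                char *buf = malloc(BUFSZ);
                struct aiocb cb;
                const struct aiocb *const list[] = { &cb };

                if (fd < 0 || buf == NULL)
                    _exit(1);
                for (int n = 0; n < 100000; n++) {
                    memset(&cb, 0, sizeof(cb));
                    cb.aio_fildes = fd;
                    cb.aio_buf = buf;
                    cb.aio_nbytes = BUFSZ;
                    cb.aio_offset = 0;          /* same segment every time */
                    aio_read(&cb);
                    aio_suspend(list, 1, NULL); /* wait for completion */
                    aio_return(&cb);
                }
                _exit(0);
            }
        }
        while (wait(NULL) > 0)
            ;
        return (0);
    }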


