Date:      Tue, 18 Dec 2007 06:17:42 -0800
From:      David G Lawrence <dg@dglawrence.com>
To:        Bruce Evans <brde@optusnet.com.au>
Cc:        freebsd-net@FreeBSD.ORG, Scott Long <scottl@samsco.org>, freebsd-stable@FreeBSD.ORG
Subject:   Re: Packet loss every 30.999 seconds
Message-ID:  <20071218141742.GS25053@tnn.dglawrence.com>
In-Reply-To: <20071218233644.U756@besplex.bde.org>
References:  <D50B5BA8-5A80-4370-8F20-6B3A531C2E9B@eng.oar.net> <20071217103936.GR25053@tnn.dglawrence.com> <20071218170133.X32807@delplex.bde.org> <47676E96.4030708@samsco.org> <20071218233644.U756@besplex.bde.org>

> >Right, it's a non-optimal loop when N is very large, and that's a fairly
> >well understood problem.  I think what DG was getting at, though, is
> >that this massive flush happens every time the syncer runs, which
> >doesn't seem correct.  Sure, maybe you just rsynced 100,000 files 20
> >seconds ago, so the upcoming flush is going to be expensive.  But the
> >next flush 30 seconds after that shouldn't be just as expensive, yet it
> >appears to be so.
> 
> I'm sure it doesn't cause many bogus flushes.  iostat shows zero writes
> caused by calling this incessantly using "while :; do sync; done".

   I didn't say it caused any bogus disk I/O. My original problem
(after a day or two of uptime) was an occasional large scheduling
delay for a process that needed to process VoIP frames in real time.
It was happening every 31 seconds, and the added latency pushed voice
frames outside of the jitter window, so they were dropped. I wrote a
program that measures the scheduling delay by sleeping for one tick
and then comparing the time-of-day offset against what was expected.
This revealed that every 31 seconds, the process was seeing a 17ms
delay in scheduling.
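   The probe is simple enough to reproduce. This isn't the exact
program I used, but a minimal sketch of the idea (assuming HZ=1000
and POSIX clock_gettime()) looks like this:

	#include <stdio.h>
	#include <time.h>

	/*
	 * Sleep for ~one tick and report how much later than expected
	 * we actually woke up.  Delays well above a tick mean the
	 * process was kept off the CPU.
	 */
	int
	main(void)
	{
		struct timespec tick = { 0, 1000000 };	/* 1ms at HZ=1000 */
		struct timespec before, after;
		long delay_us;

		for (;;) {
			clock_gettime(CLOCK_MONOTONIC, &before);
			nanosleep(&tick, NULL);
			clock_gettime(CLOCK_MONOTONIC, &after);
			delay_us = (after.tv_sec - before.tv_sec) * 1000000 +
			    (after.tv_nsec - before.tv_nsec) / 1000 - 1000;
			if (delay_us > 5000)	/* report delays over 5ms */
				printf("scheduling delay: %ldms\n",
				    delay_us / 1000);
		}
	}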
   Further investigation found that 1) the syncer was the process
that was running every 31 seconds and causing the delay (and it was
the only one in the system with that timing interval), and that 2)
lowering kern.maxvnodes to something lowish (5000) would mostly
mitigate the problem. The patch to limit the number of vnodes
processed in the loop before sleeping was then developed, and it
completely resolved the problem. Since the wait that I added is at
the bottom of the loop and the limit is 500 vnodes, this tells me
that every 31 seconds a whole lot of vnodes are being "synced" when
there shouldn't have been any to sync (this fact wasn't apparent to
me at the time, and when I later realized it, I had no time to
investigate further).
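   The essence of the change is just bounding the amount of work done
per pass. In userland terms (an illustration of the technique only,
with made-up names; this is not the actual kernel diff), it amounts
to:

	#include <stddef.h>
	#include <time.h>

	#define YIELD_EVERY	500	/* items to process before pausing */

	/* Stand-in for a vnode; "dirty" marks it as needing a flush. */
	struct item {
		int	dirty;
	};

	/*
	 * Bound the latency of one pass by sleeping after every
	 * YIELD_EVERY items so that other processes can run.
	 */
	static void
	sync_pass(struct item *list, size_t nitems)
	{
		struct timespec pause = { 0, 1000000 };	/* ~one tick */
		size_t i;

		for (i = 0; i < nitems; i++) {
			if (list[i].dirty)
				list[i].dirty = 0;	/* "flush" it */
			if ((i + 1) % YIELD_EVERY == 0)
				nanosleep(&pause, NULL);
		}
	}

	int
	main(void)
	{
		static struct item items[100000];	/* pretend 100K vnodes */

		sync_pass(items, sizeof(items) / sizeof(items[0]));
		return (0);
	}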
   My tests and analysis have all been on an otherwise quiet system
(no disk I/O), so the bottom of the ffs_sync vnode loop should not
have been reached at all, let alone tens of thousands of times every
31 seconds. All machines were uniprocessor, running FreeBSD 6+. I
don't know if this problem is present in 5.2; I didn't see
ffs_syncvnode in your call graph, so it probably is not.
   Anyway, someone needs to instrument the vnode loop in ffs_sync and
figure out what is going on. As you've pointed out, it is necessary
to first read a lot of files (I use tar to /dev/null and make sure it
reads at least 100K files) in order to get the vnodes allocated. As
I mentioned previously, I suspect that either ip->i_flag is not
getting completely cleared in ffs_syncvnode or its children, or the
v_bufobj.bo_dirty.bv_cnt accounting is broken.
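   For what it's worth, the instrumentation could start as a pair of
counters around that check. This is only a fragment to show the idea
(it won't compile as-is, and the macro and field names only
approximate the 6.x sources):

	u_long seen = 0, flushed = 0;

	MNT_VNODE_FOREACH(vp, mp, mvp) {	/* the ffs_sync vnode loop */
		seen++;
		ip = VTOI(vp);
		if ((ip->i_flag & (IN_ACCESS | IN_CHANGE | IN_MODIFIED |
		    IN_UPDATE)) == 0 &&
		    vp->v_bufobj.bo_dirty.bv_cnt == 0)
			continue;	/* clean vnode: the expected case */
		flushed++;		/* should stay near 0 on an idle box */
		/* ... existing flush via ffs_syncvnode() ... */
	}
	printf("ffs_sync: visited %lu vnodes, flushed %lu\n", seen, flushed);

If "flushed" climbs into the tens of thousands on an idle system,
that points straight at the i_flag/bv_cnt suspicion above.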

-DG

David G. Lawrence
President
Download Technologies, Inc. - http://www.downloadtech.com - (866) 399 8500
The FreeBSD Project - http://www.freebsd.org
Pave the road of life with opportunities.


