Date:      Tue, 18 Dec 2007 08:57:32 -0800
From:      David G Lawrence <dg@dglawrence.com>
To:        Bruce Evans <brde@optusnet.com.au>
Cc:        freebsd-net@FreeBSD.org, freebsd-stable@FreeBSD.org
Subject:   Re: Packet loss every 30.999 seconds
Message-ID:  <20071218165732.GV25053@tnn.dglawrence.com>
In-Reply-To: <20071219022102.I34422@delplex.bde.org>
References:  <D50B5BA8-5A80-4370-8F20-6B3A531C2E9B@eng.oar.net> <20071217103936.GR25053@tnn.dglawrence.com> <20071218170133.X32807@delplex.bde.org> <47676E96.4030708@samsco.org> <20071218233644.U756@besplex.bde.org> <20071218141742.GS25053@tnn.dglawrence.com> <20071219022102.I34422@delplex.bde.org>

> I got an almost identical delay (with 64000 vnodes).
> 
> Now, 17ms isn't much.

   Says you. For a pseudo-real-time application running on an otherwise
quiescent modern system, 17ms is just short of an eternity. I agree that the
syncer should be preemptable (which is what my bandaid patch attempts to do),
but that probably wouldn't have helped my specific problem, since my
application was a user process, not a kernel thread.
   All of my systems have options PREEMPTION - that is the default in
6+. It doesn't affect this problem.
   On the other hand, the syncer shouldn't be consuming this much CPU in
the first place. There is obviously a bug here. Of course, looking through
all of the vnodes in the system for something dirty is stupid in the
first place; there should be a separate list for that. ...but a simple
fix is what is needed right now.
   I'm going to have to bow out of this discussion now. I just don't have
the time for it.

-DG

David G. Lawrence
President
Download Technologies, Inc. - http://www.downloadtech.com - (866) 399 8500
The FreeBSD Project - http://www.freebsd.org
Pave the road of life with opportunities.


