Date: Thu, 20 Dec 2007 05:09:23 +1100 (EST)
From: Bruce Evans <brde@optusnet.com.au>
To: Bruce Evans
Cc: freebsd-net@freebsd.org, freebsd-stable@freebsd.org
Subject: Re: Packet loss every 30.999 seconds
Message-ID: <20071220044515.K4939@besplex.bde.org>
In-Reply-To: <20071220032223.V38101@delplex.bde.org>
List-Id: Production branch of FreeBSD source code

On Thu, 20 Dec 2007, Bruce Evans wrote:

> On Wed, 19 Dec 2007, David G. Lawrence wrote:
>> Considering that the CPU clock cycle time is on the order of 300ps, I
>> would say 125ns to do a few checks is pathetic.
>
> As I said, 125 nsec is a short time in this context.  It is
> approximately the time for a single L2 cache miss on a machine with
> slow memory like freefall (Xeon 2.8 GHz with L2 cache latency of
> 155.5 ns).

As I said, perfmon counts for the cache misses during sync(1):

==> /tmp/kg1/z0 <==
vfs.numvnodes: 630
# s/kx-dc-accesses
484516
# s/kx-dc-misses
20852
misses = 4%

==> /tmp/kg1/z1 <==
vfs.numvnodes: 9246
# s/kx-dc-accesses
884361
# s/kx-dc-misses
89833
misses = 10%

==> /tmp/kg1/z2 <==
vfs.numvnodes: 20312
# s/kx-dc-accesses
1389959
# s/kx-dc-misses
178207
misses = 13%

==> /tmp/kg1/z3 <==
vfs.numvnodes: 80802
# s/kx-dc-accesses
4122411
# s/kx-dc-misses
658740
misses = 16%

==> /tmp/kg1/z4 <==
vfs.numvnodes: 138557
# s/kx-dc-accesses
7150726
# s/kx-dc-misses
1129997
misses = 16%

===

I forgot to count only active vnodes in the above.  vfs.freevnodes was
small (< 5%).  I set kern.maxvnodes to 200000, but vfs.numvnodes
saturated at 138557 (probably all that fits in kvm or main memory on
i386 with 1GB RAM).

With 138557 vnodes, a null sync(2) takes 39673 us according to
"kdump -R".  That is 35.1 ns per miss (39673 us / 1129997 misses).
This is consistent with lmbench2's estimate of 42.5 ns for main memory
latency.

Watching vfs.*vnodes confirmed that vnode caching still works as you
said:

o "find /home/ncvs/ports -type f" only gives a vnode for each directory
o a repeated "find /home/ncvs/ports -type f" is fast because everything
  remains cached by VMIO.  FreeBSD performed very badly on this
  benchmark before VMIO existed and was used for directories
o "tar cf /dev/zero /home/ncvs/ports" gives a vnode for files too

Bruce
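[Editorial note, not part of the original mail: the arithmetic above can be
sanity-checked with a short sketch.  The (accesses, misses) pairs and the
39673 us sync(2) time are taken verbatim from the message; everything else
is illustrative.]

```python
# Sanity-check the perfmon numbers quoted above (illustrative sketch only).
# (accesses, misses) pairs from the kx-dc counter dumps, keyed by the
# vfs.numvnodes value reported for each run.
counts = {
    630:    (484516,  20852),
    9246:   (884361,  89833),
    20312:  (1389959, 178207),
    80802:  (4122411, 658740),
    138557: (7150726, 1129997),
}

for nvnodes, (accesses, misses) in counts.items():
    # Miss rate as a percentage of data-cache accesses.
    print(f"{nvnodes:7d} vnodes: {100.0 * misses / accesses:4.1f}% misses")

# A null sync(2) with 138557 vnodes took 39673 us; attributing all of
# that time to the 1129997 cache misses gives the cost per miss, which
# should land near lmbench2's 42.5 ns main-memory latency estimate.
ns_per_miss = 39673 * 1000 / 1129997
print(f"{ns_per_miss:.1f} ns per miss")
```

The miss rates come out between roughly 4% and 16%, matching the
annotations in the counter dumps, and the per-miss cost matches the
35.1 ns figure in the text.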