Date: Sat, 3 Jun 2006 11:10:23 -0400
From: Kris Kennaway <kris@obsecurity.org>
To: Sven Petai <hadara@bsd.ee>
Cc: freebsd-current@freebsd.org, Robert Watson <rwatson@freebsd.org>, Kris Kennaway <kris@obsecurity.org>
Subject: Re: Fine-grained locking for POSIX local sockets (UNIX domain sockets)
Message-ID: <20060603151023.GA341@xor.obsecurity.org>
In-Reply-To: <200605142221.46093.hadara@bsd.ee>
References: <20060506150622.C17611@fledge.watson.org> <20060507230430.GA6872@xor.obsecurity.org> <20060508065207.GA20386@xor.obsecurity.org> <200605142221.46093.hadara@bsd.ee>
On Sun, May 14, 2006 at 10:21:45PM +0300, Sven Petai wrote:
> > int
> > chgsbsize(uip, hiwat, to, max)
> > 	struct uidinfo *uip;
> > 	u_int *hiwat;
> > 	u_int to;
> > 	rlim_t max;
> > {
> > 	rlim_t new;
> >
> > 	UIDINFO_LOCK(uip);
> >
> > So the next question is how can that be optimized?
> >
> > Kris
>
> hi
>
> on the 8 core machine this lock was the top contended one with rwatson's patch,
> with over 8 million failed acquire attempts.
> Originally the unp lock had only ~3 million of those, so this explains the
> sharp drop with larger numbers of threads, I suppose.
>
> I feel like I'm missing some very obvious reason, but wouldn't the simplest
> workaround be just to return 1 right away if the limit is set to infinity, which
> is almost always the case since it's the default, and document in the
> login.conf manpage that you might take a performance hit with this type of
> workload when you set sbsize limits?

I tried removing the locking here but did not see a performance
change, so I concluded that it's not actually a bottleneck.

FYI, I have been working on the lock profiling tools quite a bit
lately, and have also started profiling on a 32-thread sun4v system.
I hope to have the patches ready to send out soon (they fix a serious
design error in mutex profiling that made some of the profiling stats
meaningless, substantially improve performance (20%-25% cost at the
moment instead of >80%), and I also have an implementation of spinlock
profiling using KTR that seems to be extremely cheap).
All of my other large MP systems are offline though, so the only
machines I have for profiling right now are a dual P4 Xeon and Kip
Macy's 32-way T1 :-)

> I wonder if I should set up an automatic & periodic performance testing
> system, that would run all the tests for example once a week, with
> latest current and stable, so that it would be easier for developers
> to see how changes affect different workloads.
>
> If you guys think it would be worthwhile, what would be the benchmarks
> you would like to see in addition to mysql+super-smack?

This kind of thing might be a bit tricky to set up, but it would be
well worth it!

Kris