Date:      Tue, 5 Jan 1999 22:42:04 +0000 (GMT)
From:      Terry Lambert <tlambert@primenet.com>
To:        nate@mt.sri.com (Nate Williams)
Cc:        tlambert@primenet.com, wes@softweyr.com, bright@hotjobs.com, hackers@FreeBSD.ORG
Subject:   Re: question about re-entrancy.
Message-ID:  <199901052242.PAA24752@usr02.primenet.com>
In-Reply-To: <199901051946.MAA09199@mt.sri.com> from "Nate Williams" at Jan 5, 99 12:46:08 pm

> I looked up UITron on the WWW, and only got a hit from Sun, and it was
> lost in the noise.  I have a hard time seeing this as 'Industry
> Standard', but I'm sure you'll feel free to explain to me how it's a
> standard.

Dunno what search engine you are using, but you should consider
switching...

	http://www.cygnus.com/ecos/faq.html#14
	http://www.cygnus.com/ecos/micro.html
	http://tron.um.u-tokyo.ac.jp/TRON/ITRON/eng-spec.html
	http://tron.um.u-tokyo.ac.jp/TRON/ITRON/home-e.html
	http://tron.um.u-tokyo.ac.jp/TRON/ITRON/spec-e.html#ITRON3


> > 3)	Object locks are the wrong way to address the reentrancy
> > 	issue.
> 
> Sometime I wonder if you just like to listen to yourself speak.  Object
> locks *ARE* the correct way to address reentrancy issues.  Having done
> *real* multithreaded/multi-object design for some time now, I've found
> that Object locks are a *much* cleaner way of designing for re-entrant
> code.  (If you don't want something locked, you don't create an object
> for it...)
> 
> How much *REAL* (stuff that is used by someone outside yourself)
> experience do you have designing multi'threaded/whatever' software to
> speak with such authority?

Novell/USG employed me for four years to work on, among other things,
threading systems, FS's which had to live in SMP kernels, interprocess
context sharing mechanisms, and high efficiency process architectures.

Let's see.  I wrote an attributed FS that ran on SVR4.x ES/MP while
working for USL.  It's still shipped with the NetWare for UNIX 4.x
source code.  If you have $150,000, Novell will sell you a license.
It had to run on two SMP-capable systems (SVR4 [UnixWare] and Solaris)
as the reference porting platforms, and was ported to others by vendors.
The intention mode lock manager is capable of 200 times as many
transactions per second as Bell Labs' "Tuxedo" transaction system.
That's over 2 orders of magnitude, if you are counting.

I was on the code review team for the NUC (NetWare UNIX Client) FS
for Novell/USG (the former USL), which operated in a large number
of SMP kernels, and contributed, I believe significantly, to the
design modifications that were arrived at.

I also did the PNW process architecture using a modified (by me) version
of DEC's MTS (MultiThreading Services), which is an AST-based call
conversion threads scheduler written in Bliss.  I made it work during
a port of Mentat Streams to VMS during a project to implement the
"PathWorks for VMS (NetWare)" server.  You can talk to Robert Withrow
about that one, since he was a contractor on the DEC side of the project.

I did the file descriptor sharing code for UnixWare, Solaris, and
AIX that allowed the NetWare 4.x work-to-do server processes to
share open file contexts.

I also came up with the streams-MUX based "hot engine scheduling".
Gee, that code running on UNIX outperforms native NetWare 4.x on
the exact same hardware; must have done something right there...

I saved Jack Vogel's original SMP code, and partially brought it up
to date -- perhaps you've heard of FreeBSD SMP, based on Peter and
Steve and others hacking up Jack's code with my modifications?

Along with Jeremy Allison, I brought FreeBSD pthreads into
compliance with the Draft 4.0 standard.

I supplied the threading model patches and other related patches
to the UMICH LDAP server code that were adopted as the core code
for the OpenLDAP project.

I modified the "Moscow Center for SPARC Computing"'s STL to operate
within a Draft 4 pthreads environment.


So, Nate, what have you done that compares with actually working
on code that runs in commercial SMP kernels, such that you can
claim with such authority that "a *much* cleaner way of designing
for re-entrant code" (which I never disputed) results in *better*
*actual* *performance*?

Just because object locks are an *easy* design problem doesn't mean
that they will yield the best performance.  They won't.  We can
prove this to ourselves by comparing how many processors Dynix can
reasonably scale to (32) vs. how many SVR4.2 can (4) before object
lock contention costs more than the additional processors gain.

What we want is "better performance", not "easiest design".  The
latter is the reason we have The Big Giant Lock(tm) today.


> >       The problem with object locks is that it puts
> > 	objects that don't really need to be in a contention
> > 	domain into one in order to satisfy contention in what
> > 	are usually very small critical sections having to do
> > 	with list manipulation of pointers to the object.
> 
> So you're claiming that the 'Big Giant Lock' is the better way?  You
> can't have it both ways.

No.  I'm claiming critical section locks tend to be held for a
shorter duration than object locks (this should be intuitively
obvious -- critical sections are less persistent than the objects
which are operated upon by both critical (redcode) and noncritical
sections).

Having critical sections that can be locked WITH THEIR OWN INDIVIDUAL
LOCKS is a far cry from suggesting that critical sections should be
protected with The Big Giant Lock(tm).
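
To make the distinction concrete, here's a minimal userland sketch
(pthreads standing in for kernel locking primitives; the job/job_list
names are made up) of a list with its own lock, held only across the
pointer manipulation:

#include <pthread.h>
#include <stddef.h>

struct job {
	struct job	*next;
	/* ... payload, operated on outside any lock ... */
};

struct job_list {
	pthread_mutex_t	 lock;	/* protects only the list linkage */
	struct job	*head;
};

struct job *
dequeue_job(struct job_list *l)
{
	struct job *j;

	pthread_mutex_lock(&l->lock);	/* enter the critical section */
	j = l->head;
	if (j != NULL)
		l->head = j->next;
	pthread_mutex_unlock(&l->lock);	/* and leave it immediately */

	/*
	 * Any (potentially long-running) work on "j" now proceeds
	 * with no lock held; only the pointer manipulation above
	 * was serialized.
	 */
	return (j);
}

The lock is held for a handful of instructions, so a large number of
processors hammering the same list see almost no contention; an object
lock held across the whole operation would serialize the work itself.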

For lists, the cost of associating the lock with the list explicitly
vs. implicitly is equivalent.  Architecturally, it makes sense to use
an explicit association, since we need to have humans maintain the
code.
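
For instance (hypothetical names; the hashed lock table is just one
common way of doing the implicit association):

#include <pthread.h>
#include <stdint.h>

/*
 * Explicit: the lock is a member of the structure it guards, so a
 * human reading the code can see exactly what it protects.
 */
struct conn_list {
	pthread_mutex_t	 lock;	/* guards 'head' and nothing else */
	struct conn	*head;
};

/*
 * Implicit: a global table of locks indexed by hashing the object's
 * address; the association is invisible at the point of use.  (The
 * table entries are assumed to be pthread_mutex_init()'ed at startup.)
 */
#define	NLOCKS	64
static pthread_mutex_t lock_table[NLOCKS];

static pthread_mutex_t *
lock_for(void *obj)
{
	return (&lock_table[((uintptr_t)obj >> 4) % NLOCKS]);
}

The lookup costs about the same either way; only the explicit form
tells the maintainer what the lock actually protects.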


Please read the referenced Dynix memory allocator paper.

You might also want to read "The Magic Garden Explained", "UNIX for
Modern Architectures", and "UNIX Internals: The New Frontiers" for
an understanding of the use of object locks in SVR4 derivatives,
and the failings of the OS's that incorporate them.

The "UNIX Internals" book is particularly good in the discussion of
the Dynix allocator vs. the Solaris SLAB allocator (although, as I told
Uresh Vahalia as I was reviewing the manuscript for Prentice Hall, I
disagree with his conclusions about which is better, for the reasons
I've put forward in this thread -- I prefer a hybrid that allows
per-CPU resource pools, per the Dynix allocator).
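
In a deliberately simplified userland sketch (the names and the
cpu_id() stub are my inventions here; a real kernel knows its CPU
number, disables preemption, and batches the refills):

#include <pthread.h>
#include <stddef.h>

#define	NCPU	4

struct buf { struct buf *next; };

static struct buf      *percpu_free[NCPU];	/* one free list per CPU */
static struct buf      *global_free;		/* shared backing pool */
static pthread_mutex_t	global_lock = PTHREAD_MUTEX_INITIALIZER;

/*
 * Stand-in for a real "which CPU am I on" primitive; in a kernel
 * this is known, and preemption is off while the per-CPU list is
 * being used.
 */
static int
cpu_id(void)
{
	return (0);
}

struct buf *
buf_alloc(void)
{
	int cpu = cpu_id();
	struct buf *b = percpu_free[cpu];

	if (b != NULL) {			/* common case: no lock */
		percpu_free[cpu] = b->next;
		return (b);
	}

	pthread_mutex_lock(&global_lock);	/* rare slow path */
	b = global_free;
	if (b != NULL)
		global_free = b->next;
	pthread_mutex_unlock(&global_lock);
	return (b);
}

The common case touches only CPU-local data and takes no lock at all;
contention is confined to the rare refill from the global pool, which
is why this approach scales where pure object locking doesn't.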



					Terry Lambert
					terry@lambert.org
---
Any opinions in this posting are my own and not those of my present
or previous employers.
