Date:      Thu, 17 Sep 1998 14:28:41 +0200
From:      Eivind Eklund <eivind@yes.no>
To:        Terry Lambert <tlambert@primenet.com>
Cc:        freebsd-hackers@FreeBSD.ORG
Subject:   Re: Unused functions
Message-ID:  <19980917142841.01218@follo.net>
In-Reply-To: <199809162132.OAA27690@usr04.primenet.com>; from Terry Lambert on Wed, Sep 16, 1998 at 09:32:41PM +0000
References:  <19980916100311.62440@follo.net> <199809162132.OAA27690@usr04.primenet.com>

[I'm keeping a lot of context to make this easy to read; I also note
that I didn't move this to -chat when I thought I did, but we're
getting into some locking-related discussions that are again relevant
to -hackers, so I'm keeping it here for now]

On Wed, Sep 16, 1998 at 09:32:41PM +0000, Terry Lambert wrote:
> > Both algorithm 1 and 2 are o(N) and O(N) in the number of edges
> > connecting the set of used symbols.  The only case you can avoid
> > examining all of these edges is if all symbols are included in the
> > final used set; then you could do a cutoff.  I don't think it would be
> > worthwhile, though.
> 
> Well, I think it would, at least if the theory were that you didn't
> link against things you planned to use, and you were only eliminating
> dead code in objects from an archive where you knew you were using
> at least one or more symbols (which is the case under consideration).

Well, it's two extra lines of code to check whether you're all out of
symbols (i.e., the list containing symbols of "unknown" status is empty).
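
For reference, roughly the kind of pass I have in mind (a minimal
sketch in C; the structures and names are invented for illustration,
not what our linker actually uses):

/*
 * Minimal sketch of the linear-time pass I have in mind.  The
 * structures and names are invented for illustration only.
 */
#include <stddef.h>

struct symbol {
	struct symbol  **refs;		/* symbols this one references (the edges) */
	size_t		 nrefs;
	int		 used;		/* already known to be needed? */
	struct symbol	*next;		/* worklist link */
};

/*
 * Mark everything reachable from the root set (the symbols we know we
 * link against).  Each edge is looked at at most once, so this is
 * O(symbols + edges).  The "nlive < nsyms" test is the two-line
 * cutoff: once every symbol is known to be used, there is nothing
 * left to decide.
 */
static void
mark_live(struct symbol **roots, size_t nroots, size_t nsyms)
{
	struct symbol *worklist = NULL;
	size_t i, nlive = 0;

	for (i = 0; i < nroots; i++) {
		roots[i]->used = 1;
		roots[i]->next = worklist;
		worklist = roots[i];
		nlive++;
	}
	while (worklist != NULL && nlive < nsyms) {
		struct symbol *s = worklist;

		worklist = s->next;
		for (i = 0; i < s->nrefs; i++) {
			struct symbol *t = s->refs[i];

			if (!t->used) {
				t->used = 1;
				t->next = worklist;
				worklist = t;
				nlive++;
			}
		}
	}
	/* Anything still !used never gets pulled in. */
}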

> > Why is this easier/faster than the trivial algorithms I present above?
> 
> Because it's O(2) for the test, and only walks the portion of the
> tree related to the symbol being tested.  8-).

In this case: 2 of what?  And how is walking "the portion of the tree
related to the symbol being tested" compatible with O(2)?

Finding a Hamiltonian cycle is a classic NP-complete problem, i.e.,
there is no known polynomial-time algorithm for it.  I'm very curious
how this improves on my simple linear-time approach.  (And
unfortunately I still don't get it any more than I did initially :-(

[On what I'm missing wrt using Hamiltonian cycles for this]
> You should look at the section of the Sedgewick C++ book on
> Hamiltonian cycles.

I'll go out and buy it on Monday, probably; I don't think I'll have
time before then.  (That it's missing really is a gaping hole in my
bookcase...)

> This is really useful for runtime optimizations for things like
> a hierarchical SMP lock manager.  You attach all nodes into the
> graph for the objects that are intended to be used, and calculate
> a Warshalls.  Then, as you add leaves, you update using standard
> Floyd-Steinberg, so the Warshall's is always accurate.
> 
> This allows you to insert leaf nodes in O(1) along the vector to
> the root of the graph.
> 
> For a list of locks you want to test for deadlock, the list itself
> describes a vector.
> 
> Using this vector as an arc through the existing Warshall-complete
> graph, you look for a Hamiltonian cycle.  If the cycle exists, then
> you have a deadlock condition.

Don't you have a deadlock condition if _any_ cycle exists?

I.e., you grab a bunch of locks, blocking on an attempted lock of X.
Whatever is holding X ends up blocking on one of your locks.  You've
got a cycle, and thus you've got a deadlock.

Why do you need Hamiltonian cycles?  They're a pain to find, and I
can't see what looking for them gets you; I can't see that their
presence or absence makes a difference in this case.
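
To make what I mean concrete, here's a sketch of plain cycle detection
over a wait-for graph (structures invented for illustration only):

/*
 * Sketch of plain cycle detection in a wait-for graph: an edge goes
 * from a process to the holder of the lock it is waiting for.  A DFS
 * back edge is a cycle, and any cycle here is a deadlock.
 */
enum color { WHITE, GREY, BLACK };

struct waiter {
	struct waiter **waits_for;	/* holders of the locks we wait on */
	int		nwaits;
	enum color	color;		/* WHITE = unvisited */
};

static int
has_cycle(struct waiter *p)
{
	int i;

	p->color = GREY;			/* on the current DFS path */
	for (i = 0; i < p->nwaits; i++) {
		struct waiter *q = p->waits_for[i];

		if (q->color == GREY)		/* back edge: cycle, i.e. deadlock */
			return (1);
		if (q->color == WHITE && has_cycle(q))
			return (1);
	}
	p->color = BLACK;			/* explored, no cycle through here */
	return (0);
}

That touches each wait-for edge at most once, so it stays linear in
the size of the graph.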

> This allows deadlock detection without having to do stack rewind
> and/or resource precommit (presuming that the tentative lock vector's
> elements are inserted using intention modes to prevent another
> tentative lock from coming back "safe").
> 
> This algorithm was actually the basis of the lock manager in the
> NetWare for UNIX server; it was capable of 20,000 transactions a
> second -- basically 2 orders of magnitude better than Tuxedo.  8-).

Nice!  :-)

> The computation of deadlock (transitive closure) is done, as in the
> symbol table example, by inferring Hamiltonian cycles along the two
> edges you know about (the real one and the pretend one made up of
> the resources you will require for a given idempotent operation).
> 
> This is terrifically fun stuff to watch when it works.  It's much
> better than Dijkstra's solution (the banker's algorithm), in
> that you wouldn't have to rebuild FreeBSD from the ground up to
> make it work...

That way of looking for deadlocks looks right for FreeBSD.  However, I
still don't get why it needs Hamiltonian (as opposed to plain) cycles
:-(
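
Just to have something concrete to point at, this is how I read the
closure bookkeeping (a sketch only; fixed-size matrix and invented
names purely for illustration).  Note that the deadlock test at the
end is the plain-cycle version I'm arguing for, not a Hamiltonian one:

/*
 * reach[i][j] != 0 means j is reachable from i; the matrix starts out
 * as the plain edge matrix.  Fixed size only for brevity.
 */
#define NNODES	64

static unsigned char reach[NNODES][NNODES];

/* Warshall's: complete the matrix into the transitive closure, O(N^3). */
static void
warshall_closure(void)
{
	int i, j, k;

	for (k = 0; k < NNODES; k++)
		for (i = 0; i < NNODES; i++)
			if (reach[i][k])
				for (j = 0; j < NNODES; j++)
					if (reach[k][j])
						reach[i][j] = 1;
}

/*
 * Keep the closure accurate as an edge u -> v is added: anything that
 * could already reach u (or is u itself) now also reaches v and
 * everything v already reaches.
 */
static void
add_edge(int u, int v)
{
	int i, j;

	for (i = 0; i < NNODES; i++) {
		if (i != u && !reach[i][u])
			continue;
		reach[i][v] = 1;
		for (j = 0; j < NNODES; j++)
			if (reach[v][j])
				reach[i][j] = 1;
	}
}

/* With the closure maintained, "deadlock?" is "does any node reach itself?". */
static int
deadlocked(void)
{
	int i;

	for (i = 0; i < NNODES; i++)
		if (reach[i][i])
			return (1);
	return (0);
}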

Eivind.



