Date:      Sat, 28 Jun 2003 15:33:25 -0600
From:      "Justin T. Gibbs" <gibbs@scsiguy.com>
To:        Scott Long <scottl@freebsd.org>, John Baldwin <jhb@freebsd.org>
Cc:        freebsd-arch@freebsd.org
Subject:   Re: API change for bus_dma
Message-ID:  <2768600000.1056836005@aslan.scsiguy.com>
In-Reply-To: <3EFDC2EF.1060807@freebsd.org>
References:  <XFMail.20030627112702.jhb@FreeBSD.org> <3EFDC2EF.1060807@freebsd.org>

> Ok, after many semi-private discussions, how about this:

There is only one problem with this strategy.  The original idea
of using a mutex allowed the busdma API to use that same mutex as
the strategy for locking the fields of the tag, dmamap, etc.  In
other words, the agreement would have been that the caller always
has the lock held before calling into bus dma, so that bus dma
only has to grab additional locks to protect data shared with
other clients.  For this to work in the more general scheme, you
would have to register "acquire lock"/"release lock" functions in
the tag since locking within the callback does not allow for the
protection of the tag or dmamap fields in the deferred case (they
would only be protected *during* the callback).
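
Roughly, such a registration might look like the sketch below.  The
names and layout are hypothetical, just to illustrate the idea of
lock/unlock hooks stored in the tag; this is not a concrete proposal:

    /*
     * Hypothetical sketch: the client registers acquire/release
     * functions when creating the tag, and busdma invokes them
     * around any deferred access to tag/dmamap fields.
     */
    typedef void dma_lockfunc_t(void *arg);

    struct dma_tag_lock {               /* hypothetical tag fields */
        dma_lockfunc_t  *lockfunc;      /* acquires the client's lock */
        dma_lockfunc_t  *unlockfunc;    /* releases the client's lock */
        void            *lockarg;       /* e.g. the client's mutex */
    };

    static void
    deferred_completion(struct dma_tag_lock *t)
    {
        /* Deferred path: take the client's lock before touching state. */
        (*t->lockfunc)(t->lockarg);
        /* ... update tag/dmamap fields, invoke the client callback ... */
        (*t->unlockfunc)(t->lockarg);
    }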

Again, what we want to achieve is as few lock acquires and releases
in the common case as possible.  For architectures like x86, the only
data structure that needs to be locked for the common case of no deferral
and no bounce page allocations is the tag (it will soon hold the S/G list
passed to the callback).  Other implementations may need to acquire other
locks, but using the client's lock still removes one lock acquire and
release in each invocation that is not deferred.
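
For example, the non-deferred case under this contract would look
roughly like the following driver-side sketch.  The softc layout,
callback, and function names are illustrative only:

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>
    #include <sys/bus.h>
    #include <machine/bus.h>

    struct foo_softc {
        struct mtx      sc_mtx;         /* the client's own lock */
        bus_dma_tag_t   sc_dmat;
        bus_dmamap_t    sc_map;
    };

    static void
    foo_callback(void *arg, bus_dma_segment_t *segs, int nseg, int error)
    {
        /* In the non-deferred case this runs with sc_mtx already held. */
    }

    static int
    foo_start_io(struct foo_softc *sc, void *buf, bus_size_t len)
    {
        int error;

        mtx_lock(&sc->sc_mtx);          /* caller holds its lock on entry */
        error = bus_dmamap_load(sc->sc_dmat, sc->sc_map, buf, len,
            foo_callback, sc, 0);
        /* No extra acquire/release inside busdma when nothing is deferred. */
        mtx_unlock(&sc->sc_mtx);
        return (error);
    }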

--
Justin


