Date:      Wed, 22 Aug 2001 13:55:14 -0700 (PDT)
From:      wpaul@FreeBSD.ORG (Bill Paul)
To:        gibbs@scsiguy.com (Justin T. Gibbs)
Cc:        mjacob@feral.com, hackers@FreeBSD.ORG, current@FreeBSD.ORG
Subject:   Re: Where to put new bus_dmamap_load_mbuf() code
Message-ID:  <20010822205514.52C0A37B42B@hub.freebsd.org>
In-Reply-To: <200108210623.f7L6NdY90348@aslan.scsiguy.com> from "Justin T. Gibbs" at "Aug 21, 2001 00:23:39 am"

> >My understanding is that you need a dmamap for every buffer that you want
> >to map into bus space.
> 
> You need one dmamap for each independently manageable mapping.  A
> single mapping may result in a long list of segments, regardless
> of whether you have a single KVA buffer or multiple KVA buffers
> that might contribute to the mapping.

Yes yes, I understand that. But that's only if you want to map
a buffer that's larger than PAGE_SIZE bytes, like, say, a 64K
buffer being sent to a disk controller. What I want to make sure
everyone understands here is that I'm not typically dealing with
buffers this large: instead I have lots of small buffers that are
smaller than PAGE_SIZE bytes. A single mbuf alone is only 256
bytes, of which only a fraction is used for data. An mbuf cluster
buffer is usually only 2048 bytes. Transmitted packets are typically
fragmented across 2 or 3 mbufs: the first mbuf contains the header,
and the other two contain data. (Or the first one contains part
of the header, the second one contains additional header data,
and the third contains data -- whatever.) At most I will have 1500
bytes of data to send, which is less than PAGE_SIZE, and that 1500
bytes will be fragmented across a bunch of smaller buffers that
are also smaller than PAGE_SIZE. Therefore I will not have one
dmamap with multiple segments: I will have a bunch of dmamaps
with one segment each.

(I can hear somebody out there saying: "What about jumbo frames?"
Yes, with jumbo frames, I will have 9K buffers to deal with, and
in that case, you could have one dmamap with several segments, and
I am taking this into account with the updated code I've written.)
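
(To put numbers on that, assuming 4K pages: a 9K cluster covers at
least ceil(9216/4096) = 3 pages, or 4 if it doesn't begin on a page
boundary, so a single jumbo-frame mbuf can legitimately need a map
with 3 or 4 discontiguous segments.)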

> >So unless I'm mistaken, for each mbuf in an mbuf list, what we
> >have to do is this:
> >
> >- create a bus_dmamap_t for the data area in the mbuf using
> >  bus_dmamap_create()
> 
> Creating a dmamap, depending on the architecture, could be expensive.
> You really want to create them in advance (or pool them), with at most
> one dmamap per concurrent transaction you support in your driver.

The only problem here is that I can't really predict how many transactions
will be going at one time. I will have at least RX_DMA_RING maps (one for
each mbuf in the RX DMA ring), and some fraction of TX_DMA_RING maps.
I could have the TX DMA ring completely filled with packets waiting
to be DMA'ed and transmitted, or I may have only one entry in the ring
currently in use. So I guess I have to allocate RX_DMA_RING + TX_DMA_RING
dmamaps in order to be safe.
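
To make that concrete, the worst-case pre-allocation at attach time
looks something like this (a rough sketch with a made-up foo(4)
softc; RX_DMA_RING, TX_DMA_RING and the field names are illustrative,
not real driver code):

#include <sys/param.h>
#include <machine/bus.h>

#define RX_DMA_RING     64              /* illustrative ring sizes */
#define TX_DMA_RING     128

struct foo_softc {
        bus_dma_tag_t   foo_tag;
        bus_dmamap_t    foo_rx_maps[RX_DMA_RING];
        bus_dmamap_t    foo_tx_maps[TX_DMA_RING];
};

static int
foo_create_maps(struct foo_softc *sc)
{
        int i, error;

        /* One map per RX ring slot: every slot always holds an mbuf. */
        for (i = 0; i < RX_DMA_RING; i++) {
                error = bus_dmamap_create(sc->foo_tag, 0,
                    &sc->foo_rx_maps[i]);
                if (error != 0)
                        return (error);
        }
        /* One map per TX ring slot: covers a completely full ring. */
        for (i = 0; i < TX_DMA_RING; i++) {
                error = bus_dmamap_create(sc->foo_tag, 0,
                    &sc->foo_tx_maps[i]);
                if (error != 0)
                        return (error);
        }
        return (0);
}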

> >- do the physical to bus mapping with bus_dmamap_load()
> 
> bus_dmamap_load() only understands how to map a single buffer.
> You will have to pull pieces of bus_dmamap_load into a new
> function (or create inlines for common bits) to do this
> correctly.  The algorithm goes something like this:
> 
> 	foreach mbuf in the mbuf chain to load
> 		/*
> 		 * Parse this contiguous piece of KVA into
> 		 * its bus space regions.
> 		 */
> 		foreach "bus space" discontiguous region
> 			if (too_many_segs)
> 				return (error);
> 			Add new S/G element
> 
> With the added complications of deferring the mapping if we're
> out of space, issuing the callback, etc.

Why can't I just call bus_dmamap_load() multiple times, once for
each mbuf in the mbuf list?

(Note: for the record, an mbuf list usually contains one packet
fragmented across multiple mbufs. An mbuf chain contains several
mbuf lists, linked together via the m_nextpkt pointer in the
header of the first mbuf in each list. By the time we get to
the device driver, we always have mbuf lists only.)
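
In other words, I'd do something like the sketch below (just to
illustrate the idea, not the actual code at the URL further down;
FOO_MAXSEGS, struct foo_seg_state and the pre-allocated map array are
made-up names, not part of the busdma API):

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/mbuf.h>
#include <machine/bus.h>

#define FOO_MAXSEGS     8       /* illustrative limit */

struct foo_seg_state {
        bus_dma_segment_t segs[FOO_MAXSEGS];
        bus_dmamap_t    maps[FOO_MAXSEGS];      /* pre-allocated maps */
        int             cnt;                    /* segments gathered */
        int             error;
};

static void
foo_collect_seg(void *arg, bus_dma_segment_t *segs, int nseg, int error)
{
        struct foo_seg_state *st = arg;
        int i;

        if (error != 0 || st->cnt + nseg > FOO_MAXSEGS) {
                st->error = error != 0 ? error : EFBIG;
                return;
        }
        /*
         * Note: even a sub-page mbuf can cross a page boundary, so
         * one load may hand back more than one segment.
         */
        for (i = 0; i < nseg; i++)
                st->segs[st->cnt++] = segs[i];
}

static int
foo_load_mbuf_list(bus_dma_tag_t tag, struct mbuf *m_head,
    struct foo_seg_state *st)
{
        struct mbuf *m;
        int error, i = 0;

        st->cnt = 0;
        st->error = 0;
        for (m = m_head; m != NULL; m = m->m_next) {
                if (m->m_len == 0)
                        continue;
                /* One load, and one map, per mbuf in the list. */
                error = bus_dmamap_load(tag, st->maps[i++],
                    mtod(m, void *), m->m_len, foo_collect_seg, st,
                    BUS_DMA_NOWAIT);
                if (error != 0 || st->error != 0)
                        return (error != 0 ? error : st->error);
        }
        return (0);
}

(Error recovery -- unloading the maps already loaded when a later one
fails -- is elided here, and I'm assuming BUS_DMA_NOWAIT keeps the
callback synchronous.)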

> Chances are you are going to use the map again soon, so destroying
> it on every transaction is a waste.

Ok, I spent some more time on this. I updated the code at:

http://www.freebsd.org/~wpaul/busdma

The changes are:

- Tried to account for the case where an mbuf data region is larger
  than a page, i.e. when we have an mbuf with a 9K external buffer
  attached for use as a jumbo Ethernet frame.
- Added routines to allocate a chunk of maps in a singly linked list,
  from which the other routines can grab them as needed. The driver
  attach routine calls bus_dmamap_list_init() with the max number of
  dmamaps that it will need, then the detach routine calls
  bus_dmamap_list_destroy() to nuke them when the driver is unloaded.
  The bus_dmamap_load_mbuf() routine uses the pre-allocated dmamaps
  from the list, and bus_dmamap_unload_mbuf() returns them to the
  list when the transaction is completed (sketched below).
- Updated the modified if_sf driver to use the new code.
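
The list handling amounts to something like this (a shape sketch
only; the real code is at the URL above, and the entry struct and
names here are illustrative):

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/malloc.h>
#include <sys/queue.h>
#include <machine/bus.h>

struct foo_map_entry {
        bus_dmamap_t                    map;
        SLIST_ENTRY(foo_map_entry)      link;
};
SLIST_HEAD(foo_map_list, foo_map_entry);

/* Called from attach: build nmaps free maps up front. */
static int
foo_dmamap_list_init(bus_dma_tag_t tag, struct foo_map_list *head,
    int nmaps)
{
        struct foo_map_entry *e;
        int i, error;

        SLIST_INIT(head);
        for (i = 0; i < nmaps; i++) {
                e = malloc(sizeof(*e), M_DEVBUF, M_NOWAIT);
                if (e == NULL)
                        return (ENOMEM);
                error = bus_dmamap_create(tag, 0, &e->map);
                if (error != 0) {
                        free(e, M_DEVBUF);
                        return (error);
                }
                SLIST_INSERT_HEAD(head, e, link);
        }
        return (0);
}

/* Called from detach: destroy every map still on the free list. */
static void
foo_dmamap_list_destroy(bus_dma_tag_t tag, struct foo_map_list *head)
{
        struct foo_map_entry *e;

        while ((e = SLIST_FIRST(head)) != NULL) {
                SLIST_REMOVE_HEAD(head, link);
                bus_dmamap_destroy(tag, e->map);
                free(e, M_DEVBUF);
        }
}

The point of the free list is that the transmit and receive hot paths
only ever pop and push maps; bus_dmamap_create() and
bus_dmamap_destroy() are confined to attach and detach.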

Again, I've got this code running on the test box in the lab, so it's
correct inasmuch as it compiles and runs, even though it may not be
aesthetically pleasing.

-Bill 

--
=============================================================================
-Bill Paul            (510) 749-2329 | Senior Engineer, Master of Unix-Fu
                 wpaul@windriver.com | Wind River Systems
=============================================================================
"I like zees guys. Zey are fonny guys. Just keel one of zem." -- The 3 Amigos
=============================================================================
