Date:      Wed, 26 Sep 2007 01:31:47 +0200
From:      Hans Petter Selasky <hselasky@c2i.net>
To:        freebsd-scsi@freebsd.org, freebsd-usb@freebsd.org, freebsd-net@freebsd.org
Subject:   Request for feedback on common data backstore in the kernel
Message-ID:  <200709260131.49156.hselasky@c2i.net>

Hi,

Please keep me CC'ed, since I'm not subscribed to all of these lists.

In the kernel we currently have two different data backstores:

struct mbuf

and 

struct buf

These two backstores serve two different device types: "mbufs" are for network 
devices and "bufs" are for disk devices.

Problem:

The current backstores are loaded for DMA using the BUS-DMA framework. 
According to Kip Macy, this appears not to be very fast. See:

http://perforce.freebsd.org/chv.cgi?CH=126455

Some ideas I have:

When a buffer is out of range for a hardware device and a data copy is 
needed, I want to simply copy that data in smaller parts to/from a 
pre-allocated bounce buffer. I want to avoid allocating this buffer 
when "bus_dmamap_load()" is called.

For pre-allocated USB DMA memory I currently have:

struct usbd_page {
        void                    *buffer;   // virtual address
        bus_size_t              physaddr;  // as seen by one of my devices
        bus_dma_tag_t           tag;       // DMA tag backing this page
        bus_dmamap_t            map;       // DMA map for this page
        uint32_t                length;    // usable bytes, normally PAGE_SIZE
};

In most cases only "length == PAGE_SIZE" is allowed. When USB allocates DMA 
memory it always allocates the same size, namely PAGE_SIZE bytes.
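
For reference, such a page would be set up with the standard bus_dma(9) 
calls, roughly as follows. This is only a sketch: the tag parameters and 
lock arguments are illustrative, and all error handling is omitted.

static void
usbd_page_load_cb(void *arg, bus_dma_segment_t *segs, int nseg, int error)
{
        struct usbd_page *pg = arg;

        // record the single segment's physical address
        if (error == 0 && nseg == 1)
                pg->physaddr = segs[0].ds_addr;
}

static int
usbd_page_alloc(bus_dma_tag_t parent, struct usbd_page *pg)
{
        // one page-aligned, page-sized segment
        (void)bus_dma_tag_create(parent, PAGE_SIZE, 0,
            BUS_SPACE_MAXADDR_32BIT, BUS_SPACE_MAXADDR, NULL, NULL,
            PAGE_SIZE, 1, PAGE_SIZE, 0, NULL, NULL, &pg->tag);
        (void)bus_dmamem_alloc(pg->tag, &pg->buffer, BUS_DMA_WAITOK,
            &pg->map);
        (void)bus_dmamap_load(pg->tag, pg->map, pg->buffer, PAGE_SIZE,
            usbd_page_load_cb, pg, BUS_DMA_WAITOK);
        pg->length = PAGE_SIZE;
        return (0);
}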

If two different PCI controllers want to communicate by passing DMA buffers 
directly, technically one would need to translate the physical address as 
seen by device 1 into the physical address as seen by device 2. If this 
translation table is sorted, the lookup will be fairly quick. Another 
approach is to limit the number of translations:

#define N_MAX_PCI_TRANSLATE 4

struct usbd_page {
        void                    *buffer; // virtual address
        bus_size_t              physaddr[N_MAX_PCI_TRANSLATE]; // one per bus
        bus_dma_tag_t           tag;
        bus_dmamap_t            map;
        uint32_t                length;
};

Then PCI device 1 on bus X can use physaddr[0] and PCI device 2 on bus Y can 
use physaddr[1]. If a physaddr[] entry is equal to some magic value, the DMA 
buffer is not reachable from that bus and must be bounced.
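
The per-bus lookup then reduces to an indexed read plus a check for the 
magic value; something like this hypothetical helper (the function name and 
the magic constant are made up):

#define USBD_PHYSADDR_UNREACHABLE ((bus_size_t)-1)      // assumed magic

// Return the DMA address of "pg" as seen by the bus with index
// "bus_index", or the magic value if the page must be bounced.
static inline bus_size_t
usbd_page_get_physaddr(struct usbd_page *pg, uint8_t bus_index)
{
        if (bus_index >= N_MAX_PCI_TRANSLATE)
                return (USBD_PHYSADDR_UNREACHABLE);
        return (pg->physaddr[bus_index]);
}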

Then, when two PCI devices talk to each other, all they need to pass is a 
structure like this:

struct usbd_page_cache {
        struct usbd_page        *page_start;      // first page in the array
        uint32_t                page_offset_buf;  // offset of buffer start
        uint32_t                page_offset_end;  // offset of buffer end
};

The required DMA address can then be looked up in a few nanoseconds.
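
Resolving a byte offset inside such a page cache is just integer arithmetic; 
a hypothetical sketch, assuming every page has length == PAGE_SIZE and that 
the caller checks for the bounce case:

static inline bus_size_t
usbd_page_cache_physaddr(struct usbd_page_cache *pc, uint32_t offset,
    uint8_t bus_index)
{
        uint32_t abs_off = pc->page_offset_buf + offset;
        struct usbd_page *pg = pc->page_start + (abs_off / PAGE_SIZE);

        // O(1): index into the page array, then the translation table;
        // the caller must test for USBD_PHYSADDR_UNREACHABLE first
        return (usbd_page_get_physaddr(pg, bus_index) +
            (abs_off % PAGE_SIZE));
}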

Has anyone been thinking about this topic before?

--HPS


