From owner-freebsd-scsi@FreeBSD.ORG Wed Sep 26 00:31:39 2007
From: Hans Petter Selasky <hselasky@c2i.net>
To: freebsd-scsi@freebsd.org, freebsd-usb@freebsd.org, freebsd-net@freebsd.org
Date: Wed, 26 Sep 2007 01:31:47 +0200
Message-Id: <200709260131.49156.hselasky@c2i.net>
Subject: Request for feedback on common data backstore in the kernel

Hi,

Please keep me CC'ed, since I'm not on all of these lists.

In the kernel we currently have two different data backstores:

	struct mbuf and struct buf

These two backstores serve two different device types: "mbuf" is for
network devices and "buf" is for disk devices.

Problem:

The current backstores are loaded into DMA by using the BUS-DMA framework.
This appears not to be too fast, according to Kip Macy.
See: http://perforce.freebsd.org/chv.cgi?CH=126455

Some ideas I have:

When a buffer is out of range for a hardware device and a data copy is
needed, I want to simply copy that data in smaller parts to/from a
pre-allocated bounce buffer. I want to avoid allocating this buffer when
"bus_dmamap_load()" is called.

For pre-allocated USB DMA memory I currently have "struct usbd_page":

struct usbd_page {
	void		*buffer;	/* virtual address */
	bus_size_t	physaddr;	/* as seen by one of my devices */
	bus_dma_tag_t	tag;
	bus_dmamap_t	map;
	uint32_t	length;
};

Mostly only "length == PAGE_SIZE" is allowed. When USB allocates DMA
memory it always allocates the same size, PAGE_SIZE bytes.

If two different PCI controllers want to communicate directly by passing
DMA buffers, technically one would need to translate the physical address
as seen by device 1 into the physical address as seen by device 2. If this
translation table is sorted, the search will be rather quick.

Another approach is to limit the number of translations:

#define N_MAX_PCI_TRANSLATE 4

struct usbd_page {
	void		*buffer;	/* virtual address */
	bus_size_t	physaddr[N_MAX_PCI_TRANSLATE];
	bus_dma_tag_t	tag;
	bus_dmamap_t	map;
	uint32_t	length;
};

Then PCI device 1 on bus X can use physaddr[0] and PCI device 2 on bus Y
can use physaddr[1]. If a physaddr[] entry is equal to some magic value,
then the DMA buffer is not reachable from that bus and must be bounced.

When two PCI devices talk to each other, all they need to pass is a
structure like this:

struct usbd_page_cache {
	struct usbd_page	*page_start;
	uint32_t		page_offset_buf;
	uint32_t		page_offset_end;
};

And the required DMA address can be looked up in a few nanoseconds.

Has anyone thought about this topic before?

--HPS