From: Max Laier <max@love2party.net>
Organization: FreeBSD
To: freebsd-hackers@freebsd.org
Cc: Mark Tinguely
Date: Sun, 10 Jan 2010 21:12:59 +0100
Subject: Re: bus_dmamap_load_uio() and user data

On Friday 08 January 2010 17:13:29 John Baldwin wrote:
> On Friday 08 January 2010 9:14:36 am Mark Tinguely wrote:
> > > You should use the pmap from the thread in the uio structure.
> > > Similar to this from the x86 bus_dma code:
> > >
> > > 	if (uio->uio_segflg == UIO_USERSPACE) {
> > > 		KASSERT(uio->uio_td != NULL,
> > > 		    ("bus_dmamap_load_uio: USERSPACE but no proc"));
> > > 		pmap = vmspace_pmap(uio->uio_td->td_proc->p_vmspace);
> > > 	} else
> > > 		pmap = NULL;
> > >
> > > Later, when doing VA -> PA conversions, the code does this:
> > >
> > > 	if (pmap)
> > > 		paddr = pmap_extract(pmap, vaddr);
> > > 	else
> > > 		paddr = pmap_kextract(vaddr);
> >
> > We do that, but I notice that all the architectures that implement
> > bounce buffers assume the VA is in the current map.  Most of the
> > addresses are KVA, but bus_dmamap_load_uio() can be given user-space
> > addresses.
> >
> > I was wondering about the sequence:
> >
> > 	bus_dmamap_load_uio()		user space
> > 	    dma_load_buffer()
> > 		add bounce page		save UVA (in the caller's user map)
> >
> > later:
> >
> > 	bus_dma_sync
> > 	    copies the bounce buffer from the saved UVA.  <- here is my
> > 	    concern: the user pmap is not remembered, the current pmap
> > 	    is used.
> >
> > Since the bounce buffer copy routines have been running on other
> > architectures for years without corruption, I was wondering whether we
> > can safely assume that the DMA sync runs in the same thread/address
> > space as the bus_dmamap_load_uio() call.  I was hoping you would say,
> > "don't worry, the scheduler will always reload the same thread to
> > execute the DMA sync code" ...
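For reference, the copy-back that raises this question looks roughly like
the sketch below.  It is a minimal approximation of the bounce-page
handling in the i386/amd64 busdma_machdep.c sync path; the structure
layout and the names bounce_page, datavaddr and sketch_sync_postread()
are illustrative, not the verbatim FreeBSD source.

/*
 * Minimal sketch (not the verbatim FreeBSD source) of the bounce-page
 * bookkeeping and the BUS_DMASYNC_POSTREAD copy-back discussed above.
 */
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/queue.h>
#include <machine/bus.h>

struct bounce_page {
	vm_offset_t	vaddr;		/* KVA of the bounce buffer */
	vm_offset_t	datavaddr;	/* VA of the client buffer; a UVA when
					   the map was loaded from user space */
	bus_size_t	datacount;	/* bytes to copy back */
	STAILQ_ENTRY(bounce_page) links;
};

struct dmamap_sketch {
	STAILQ_HEAD(, bounce_page) bpages;
};

static void
sketch_sync_postread(struct dmamap_sketch *map)
{
	struct bounce_page *bpage;

	STAILQ_FOREACH(bpage, &map->bpages, links) {
		/*
		 * bcopy() dereferences datavaddr directly.  If datavaddr
		 * is a user address, this is valid only while the pmap of
		 * the process that loaded the map is the current one,
		 * which is exactly the assumption being questioned above.
		 */
		bcopy((void *)bpage->vaddr, (void *)bpage->datavaddr,
		    bpage->datacount);
	}
}

The point is simply that datavaddr is dereferenced at sync time with no
record of which pmap it belongs to, so the copy is only correct if the
loading process's address space happens to be the current one.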
>
> Ahh.  I think bus_dmamap_load_uio() doesn't do deferred callbacks (i.e.
> it mandates BUS_DMA_NOWAIT), and probably is always invoked from
> curthread.  Even in the case of aio, the thread's vmspace is the
> effective one at the time bus_dmamap_load_uio() would be invoked, so in
> practice it is safe.

I ran into *this* problem with bus_dmamap_sync() and bounce buffers while
trying to do a BUS_DMASYNC_POSTREAD in interrupt context.  The sync code
was trying to copy from the bounce buffer to the UVA without the proper
context -> SEGFAULT.

I tried to move the sync into the ioctl context that userland uses to
figure out which part of the buffer is "ready" ... this /kind of/ worked,
but led to DMA problems in ata (which I haven't investigated yet) when
trying to write the buffer to disk.

I have meanwhile moved to exporting a kernel buffer instead, using
OBJT_SG - which is a bit more work and eats KVA, but at least on amd64
there is no shortage of that.

-- 
Max
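The OBJT_SG approach described in the last paragraph is not shown in the
thread; below is only a hedged sketch of what such an export typically
looks like: describe a kernel buffer with an sglist, wrap it in an
OBJT_SG VM object, and hand that object out from the driver's
d_mmap_single() routine.  All names (M_RINGSKETCH, ring_buf,
ring_create(), ring_mmap_single()) are made up for illustration, error
handling is omitted, and the vm_pager_allocate() prototype has changed
between FreeBSD versions, so treat it as an outline rather than working
driver code.

/*
 * Rough outline of exporting a kernel buffer to userland via OBJT_SG.
 */
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/conf.h>
#include <sys/errno.h>
#include <sys/malloc.h>
#include <sys/sglist.h>
#include <vm/vm.h>
#include <vm/vm_object.h>
#include <vm/vm_pager.h>

#define	RING_SIZE	(1024 * 1024)

static MALLOC_DEFINE(M_RINGSKETCH, "ringsketch", "OBJT_SG export sketch");

static void		*ring_buf;	/* wired kernel buffer (DMA target) */
static struct sglist	*ring_sg;	/* physical description of ring_buf */
static vm_object_t	 ring_obj;	/* OBJT_SG object handed to userland */

static void
ring_create(void)
{
	/* A real driver would allocate this with bus_dmamem_alloc(). */
	ring_buf = malloc(RING_SIZE, M_RINGSKETCH, M_WAITOK | M_ZERO);

	/* Record the physical pages backing the buffer. */
	ring_sg = sglist_alloc(RING_SIZE / PAGE_SIZE, M_WAITOK);
	sglist_append(ring_sg, ring_buf, RING_SIZE);

	/* Wrap the sglist in a VM object the sg pager knows how to map. */
	ring_obj = vm_pager_allocate(OBJT_SG, ring_sg, RING_SIZE,
	    VM_PROT_READ | VM_PROT_WRITE, 0, NULL);
}

/*
 * d_mmap_single handler: mmap(2) on the device node maps the OBJT_SG
 * object, so userland reads the buffer in place.
 */
static int
ring_mmap_single(struct cdev *cdev, vm_ooffset_t *offset, vm_size_t size,
    struct vm_object **object, int nprot)
{
	if (*offset + size > RING_SIZE)
		return (EINVAL);
	vm_object_reference(ring_obj);
	*object = ring_obj;
	return (0);
}

With something like this in place, userland mmap()s the device node and
reads the DMA buffer directly; the ioctl mentioned above only has to
report how much of the buffer is valid, and no copy into a user VA is
ever needed from interrupt context.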