From: Jason Harmening
To: Konstantin Belousov
Cc: Svatopluk Kraus, John Baldwin, Adrian Chadd, Warner Losh, freebsd-arch
Subject: Re: bus_dmamap_sync() for bounced client buffers from user address space
Date: Wed, 29 Apr 2015 14:17:50 -0500

> The spaces/tabs in your mail are damaged. It does not matter in the
> text, but it makes the patch unapplicable and hardly readable.

Ugh. I'm at work right now and using the gmail web client. It seems
like every day I find a new way in which that thing is incredibly
unfriendly for use with mailing lists. I will re-post the patch from a
sane mail client later.

> I only read the x86/busdma_bounce.c part. It looks fine in the part
> where you add the test for the current pmap being identical to the
> pmap owning the user page mapping.
>
> I do not understand the part of the diff for the bcopy/physcopyout
> lines. I cannot find non-whitespace changes there, and a whitespace
> change would make the lines too long. Did I misread the patch?

You probably misread it, since it is unreadable. There is a section in
bounce_bus_dmamap_sync() where I check for map->pmap being kernel_pmap
or curproc's pmap before doing the bcopy (rough sketch below).

> BTW, why not use physcopyout() unconditionally on x86? To avoid i386
> sfbuf allocation failures?

Yes.
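To make that check concrete, the relevant piece looks roughly like
this. This is a sketch of the idea only, not the literal diff; the
field and function names (bpage->datavaddr, bpage->dataaddr,
physcopyout(), etc.) are the ones already in x86/busdma_bounce.c, but
the exact control flow in the patch may differ:

	/*
	 * PREWRITE: copy the client buffer into the bounce page.  Use
	 * bcopy() through the client VA only when that VA is valid in
	 * the current context (kernel buffer, or a user buffer that
	 * belongs to curproc); otherwise copy by physical address.
	 */
	if ((op & BUS_DMASYNC_PREWRITE) != 0) {
		STAILQ_FOREACH(bpage, &map->bpages, links) {
			if (bpage->datavaddr != 0 &&
			    (map->pmap == kernel_pmap ||
			    map->pmap == vmspace_pmap(curproc->p_vmspace)))
				bcopy((void *)bpage->datavaddr,
				    (void *)bpage->vaddr, bpage->datacount);
			else
				physcopyout(bpage->dataaddr,
				    (void *)bpage->vaddr, bpage->datacount);
		}
	}

Only the PREWRITE direction is shown; POSTREAD is symmetric, copying
out of the bounce page with bcopy() or physcopyin().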
> For non-coherent arches, isn't the issue of CPUs having filled caches
> for the DMA region present regardless of the vm_fault_quick_hold()
> use? DMASYNC_PREREAD/WRITE must ensure that the lines are written
> back and invalidated even now, or always fall back to use bounce
> pages.

Yes, that needs to be done regardless of how the pages are wired. The
particular problem here is that some caches on arm and mips are
virtually-indexed (usually virtually-indexed, physically-tagged, i.e.
VIPT). That means the flush/invalidate instructions take virtual
addresses, so figuring out the correct UVA to use for those operations
could be a challenge. As I understand it, VIPT caches usually do have
some hardware logic for finding all the cache lines that correspond to
a physical address, so they can handle multiple VA mappings of the
same PA. But it is unclear to me how cross-processor cache maintenance
is supposed to work with VIPT caches on SMP systems.

If the caches were physically-indexed, then I don't think there would
be an issue: you'd just pass the PA to the flush/invalidate
instruction, and presumably a sane SMP implementation would propagate
that to the other cores via IPI.
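To illustrate the VA problem, here is a hypothetical sketch of what a
non-coherent sync path would have to do for one client page.
dma_cache_wbinv() is a made-up helper; cpu_dcache_wbinv_range() and
the sf_buf calls are the existing arm/i386-style primitives I have in
mind, though whether SFB_CPUPRIVATE is available everywhere is its own
question:

	/*
	 * Hypothetical: write back and invalidate the cache lines for
	 * one client page on a VIPT cache, which needs a VA that is
	 * valid in the current context.
	 */
	static void
	dma_cache_wbinv(bus_dmamap_t map, vm_offset_t datava,
	    vm_paddr_t pa, vm_size_t len)
	{
		struct sf_buf *sf;

		if (map->pmap == kernel_pmap ||
		    map->pmap == vmspace_pmap(curproc->p_vmspace)) {
			/* The client VA is mapped here; use it. */
			cpu_dcache_wbinv_range(datava, len);
		} else {
			/*
			 * The UVA belongs to another process and means
			 * nothing on this CPU, so borrow a temporary
			 * kernel mapping for the page.  On a PIPT
			 * cache we could skip this and operate on pa.
			 */
			sched_pin();
			sf = sf_buf_alloc(PHYS_TO_VM_PAGE(pa),
			    SFB_CPUPRIVATE);
			cpu_dcache_wbinv_range(sf_buf_kva(sf) +
			    (pa & PAGE_MASK), len);
			sf_buf_free(sf);
			sched_unpin();
		}
	}

Even that only handles the local CPU; the lines another core may hold
under some other VA are exactly the part I don't see a clean answer
for on VIPT SMP.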