Date:      Mon, 27 Aug 2012 13:18:49 -0600
From:      Warner Losh <imp@bsdimp.com>
To:        Ian Lepore <freebsd@damnhippie.dyndns.org>
Cc:        freebsd-arch@freebsd.org, freebsd-arm@freebsd.org, freebsd-mips@freebsd.org, Hans Petter Selasky <hans.petter.selasky@bitfrost.no>
Subject:   Re: Partial cacheline flush problems on ARM and MIPS
Message-ID:  <FBBEF2E7-B6E9-4F2D-A36E-7DE40666E50F@bsdimp.com>
In-Reply-To: <1346094047.1140.264.camel@revolution.hippie.lan>
References:  <1345757300.27688.535.camel@revolution.hippie.lan> <3A08EB08-2BBF-4B0F-97F2-A3264754C4B7@bsdimp.com> <1345763393.27688.578.camel@revolution.hippie.lan> <FD8DC82C-AD3B-4EBC-A625-62A37B9ECBF1@bsdimp.com> <1345765503.27688.602.camel@revolution.hippie.lan> <CAJ-VmonOwgR7TNuYGtTOhAbgz-opti_MRJgc8G+B9xB3NvPFJQ@mail.gmail.com> <1345766109.27688.606.camel@revolution.hippie.lan> <CAJ-VmomFhqV5rTDf-kKQfbSuW7SSiSnqPEjGPtxWjaHFA046kQ@mail.gmail.com> <F8C9E811-8597-4ED0-9F9D-786EB2301D6F@bsdimp.com> <1346002922.1140.56.camel@revolution.hippie.lan> <6D83AF9D-577B-4C83-84B7-C4E3B32695FC@bsdimp.com> <1346083716.1140.212.camel@revolution.hippie.lan> <CAJ-Vmo=Mh1dav4DYW8=yjyBh-qmuAfSKpbFFU9JZSXqYqwLLVg@mail.gmail.com> <1346094047.1140.264.camel@revolution.hippie.lan>


On Aug 27, 2012, at 1:00 PM, Ian Lepore wrote:

> On Mon, 2012-08-27 at 09:53 -0700, Adrian Chadd wrote:
>> On 27 August 2012 09:08, Ian Lepore <freebsd@damnhippie.dyndns.org> wrote:
>>
>>> If two DMAs are going on concurrently in the same buffer, one is going
>>> to finish before the other, leading to a POSTxxxx sync op happening for
>>> one DMA operation while the other is still in progress.  The unit of
>>> granularity for sync operations is the mapped region, so now you're
>>> syncing access to a region which still has active DMA happening within
>>> it.
>>
>> Right. But the enforced idea is "DMA up to this point should be
>> flushed to memory."
>>
>>> While I think it's really an API definition issue, think about it in
>>> terms of a potential implementation... What if the CPU had to access the
>>> memory as part of the sync for the first DMA that completes, while the
>>> second is still running?  Now you've got pretty much exactly the same
>>> situation as when a driver subdivides a buffer without knowing about the
>>> cache alignment; you end up with the CPU and DMA touching data in the
>>> same cacheline and no sequence of flush/invalidate can be guaranteed to
>>> preserve all data correctly.
>>
>> Right. So you realise at that point you can't win and you stick each
>> of those pieces in a different cache line.
>>
>
> Actually, I think that even discussing cache lines in this context is a
> mistake (yeah, I'm the one who did so above, in trying to relate an
> abstract API design concept to a real-world hardware example).
>
> Drivers are not supposed to know about interactions between DMA
> transfers and cache lines or other machine-specific constraints; that
> info is supposed to be encapsulated and hidden within busdma.  I think a
> driver making the assumption that it can do DMA safely on a buffer as
> long as that buffer is cacheline-granular is just as flawed as assuming
> that it can do DMA safely on any arbitrarily sized and aligned buffer.
>
> So the right way to "stick each of those pieces in a different cache
> line" is to allocate two different buffers, one per concurrent DMA
> transfer.  Or, really, to use two separate busdma mappings would be the
> more rigorous way to say it, since the mapping is the operation at which
> constraints come into play.  Thinking of it that way then drives the
> constraints come into play.  Thinking of it that way then drives the
> need to document that if multiple mappings describe the same area of
> physical memory, then concurrent operations on those maps yield
> unpredictable results.
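
Put in driver terms, Ian's "one buffer and one mapping per concurrent
transfer" rule comes down to something like the sketch below.  This is only
an illustration with made-up names, tag parameters, and minimal error
handling, not code lifted from any real driver:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/bus.h>
#include <machine/bus.h>

static bus_dma_tag_t	dtag;
static bus_dmamap_t	map_a, map_b;	/* one map per concurrent transfer */
static void		*buf_a, *buf_b;	/* one buffer per concurrent transfer */

static int
alloc_two_dma_buffers(device_t dev)
{
	int error;

	/* A single tag is fine; the point is separate buffers and maps. */
	error = bus_dma_tag_create(bus_get_dma_tag(dev),
	    1, 0,			/* alignment, boundary */
	    BUS_SPACE_MAXADDR_32BIT, BUS_SPACE_MAXADDR,
	    NULL, NULL,			/* filter, filterarg */
	    PAGE_SIZE, 1, PAGE_SIZE,	/* maxsize, nsegments, maxsegsz */
	    0, NULL, NULL, &dtag);
	if (error != 0)
		return (error);

	/*
	 * Each concurrent transfer gets its own allocation and its own map,
	 * so a sync on one map never touches memory the other device is
	 * still DMAing into.
	 */
	error = bus_dmamem_alloc(dtag, &buf_a,
	    BUS_DMA_WAITOK | BUS_DMA_ZERO, &map_a);
	if (error != 0)
		return (error);
	return (bus_dmamem_alloc(dtag, &buf_b,
	    BUS_DMA_WAITOK | BUS_DMA_ZERO, &map_b));
}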

Despite what I said earlier, I think this is sane.  busdma should only
support one DMA transfer active at a time in a given buffer.  If a driver
wants to run two concurrently, it is on its own.
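
The "one DMA active at a time into a buffer" rule is then just the usual
sync pairing applied per map.  Again only a sketch, reusing the tag, map,
and buffer from the snippet above; the load callback and the
program-the-hardware step are invented for illustration:

/* Load callback: remember the single segment's bus address. */
static void
dma_load_cb(void *arg, bus_dma_segment_t *segs, int nseg, int error)
{
	if (error == 0)
		*(bus_addr_t *)arg = segs[0].ds_addr;
}

static void
run_one_transfer(bus_size_t len)
{
	bus_addr_t paddr = 0;

	bus_dmamap_load(dtag, map_a, buf_a, len, dma_load_cb, &paddr,
	    BUS_DMA_NOWAIT);

	/* CPU is done touching buf_a; hand it to the device. */
	bus_dmamap_sync(dtag, map_a,
	    BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE);

	/*
	 * ... program the device with paddr and wait for its completion
	 * interrupt; nothing else may DMA into buf_a in the meantime ...
	 */

	/* Device is done; only now may the CPU look at buf_a again. */
	bus_dmamap_sync(dtag, map_a,
	    BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE);
	bus_dmamap_unload(dtag, map_a);
}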

Warner


