Date:      Sun, 18 Nov 2018 14:15:56 -0800
From:      Matthew Macy <mat.macy@gmail.com>
To:        Tĳl Coosemans <tijl@freebsd.org>
Cc:        src-committers <src-committers@freebsd.org>, svn-src-all@freebsd.org,  svn-src-head@freebsd.org
Subject:   Re: svn commit: r339618 - head/sys/compat/linuxkpi/common/include/linux
Message-ID:  <CAPrugNpVnrAL=-8ernYc6A31-Gbr7SYS5j=ez4+EUfESYveidA@mail.gmail.com>
In-Reply-To: <20181118220842.4c995b5a@kalimero.tijl.coosemans.org>
References:  <201810222055.w9MKtZPt013627@repo.freebsd.org> <CAPrugNoMVDyg-CnVh5NUgrxdaHcbo3CibC3RsPQ7FVtdJ=FJdQ@mail.gmail.com> <CAPrugNq3HXvPccFzRxFK79o2fK3KzCzpn7BzEim9MyXxGnJ+pw@mail.gmail.com> <20181118114538.546a4fab@kalimero.tijl.coosemans.org> <CAPrugNqfGGbXBx8TsU4KYR=nk5RzQRNAvjTURfvEmRda8j0ghA@mail.gmail.com> <20181118220842.4c995b5a@kalimero.tijl.coosemans.org>

Correct. This is just the generic case. We just need to define the __io
macros as __compiler_membar in x86/io.h.
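
As a rough sketch (hypothetical, not the committed change; the exact
header and set of __io overrides may differ), the x86 side could look
something like this, defined before the generic fallbacks are pulled in:

#if defined(__i386__) || defined(__amd64__)
/*
 * Uncacheable device memory is strongly ordered on x86, so a compiler
 * barrier is sufficient here and the lfence/sfence that the generic
 * fallbacks would pick up from rmb()/wmb() can be avoided.
 */
#define __io_br()       __compiler_membar()
#define __io_ar()       __compiler_membar()
#define __io_bw()       __compiler_membar()
#endif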

Cheers.
-M

On Sun, Nov 18, 2018 at 13:08 Tĳl Coosemans <tijl@freebsd.org> wrote:

> On Sun, 18 Nov 2018 12:10:25 -0800 Matthew Macy <mat.macy@gmail.com>
> wrote:
> >> Note that these functions are normally used on uncacheable memory
> >> which is strongly ordered on x86.  There should be no reordering at
> >> all.  On PowerPC barrier instructions are needed to prevent reordering.
> >
> > Correct. The current lkpi implementation also assumes that device
> > endian == host endian. The Linux generic accessors do use endian
> > macros to byte swap where necessary.
>
> Yes, these functions are used to access little-endian registers so byte
> swapping is needed on big-endian machines.  For PowerPC Linux also
> defines functions to access big-endian registers, but we probably don't
> need those.
>
> > The following change fixes radeon attach issues:
> >
> https://github.com/POWER9BSD/freebsd/commit/be6c98f5c2e2ed9a4935ac5b67c468b75f3b4457
>
> +/* prevent prefetching of coherent DMA data ahead of a dma-complete */
> +#ifndef __io_ar
> +#ifdef rmb
> +#define __io_ar()      rmb()
> +#else
> +#define __io_ar()      __compiler_membar();
> +#endif
> +#endif
> +
> +/* flush writes to coherent DMA data before possibly triggering a DMA read */
> +#ifndef __io_bw
> +#ifdef wmb
> +#define __io_bw()      wmb()
> +#else
> +#define __io_bw()      __compiler_membar();
> +#endif
> +#endif
>
> ...
>
>  static inline uint16_t
>  readw(const volatile void *addr)
>  {
>         uint16_t v;
>
> -       __compiler_membar();
> -       v = *(const volatile uint16_t *)addr;
> -       __compiler_membar();
> +       __io_br();
> +       v = le16toh(__raw_readw(addr));
> +       __io_ar();
>         return (v);
>  }
>
> For x86, rmb and wmb are defined as lfence and sfence instructions,
> which shouldn't be necessary here.
>


