Date:      Wed, 24 Apr 2024 15:39:02 +0000
From:      bugzilla-noreply@freebsd.org
To:        bugs@FreeBSD.org
Subject:   [Bug 278569] DMA bounce pages are not freed when DMA tag is destroyed
Message-ID:  <bug-278569-227@https.bugs.freebsd.org/bugzilla/>

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=278569

            Bug ID: 278569
           Summary: DMA bounce pages are not freed when DMA tag is
                    destroyed
           Product: Base System
           Version: CURRENT
          Hardware: Any
                OS: Any
            Status: New
          Severity: Affects Only Me
          Priority: ---
         Component: kern
          Assignee: bugs@FreeBSD.org
          Reporter: dgorecki@sii.pl

Created attachment 250209
  --> https://bugs.freebsd.org/bugzilla/attachment.cgi?id=250209&action=edit
Kernel module reproducing the issue

The bounce pages allocated for DMA maps are never freed and leak when DMA
tags are destroyed.

The kernel sometimes needs to allocate bounce pages for DMA maps. When a map
that uses bounce pages is unloaded, those pages are returned to the pool
associated with the DMA tag so they can be reused when new maps are loaded.
The kernel should free the pages when the DMA tag is destroyed, but this never
happens.

The issue is easy to reproduce with a kernel module that creates a DMA tag and
then loads a map for memory that does not meet the alignment restrictions set
in the tag. After that, the map is unloaded and the DMA tag destroyed. Doing
this in a loop soon crashes the system.

The code allocating the bounce pages can be seen in [1]. Grepping the source
shows that this is the only place where malloc with M_BOUNCE is called, and
there is no corresponding free anywhere.

I'm attaching the source of a kernel module for easy reproduction of the
issue. The module creates a DMA tag that requires 64-byte alignment, allocates
memory aligned to 64 bytes, immediately moves the pointer by 4 bytes to
misalign it, and loads the DMA map with the misaligned pointer. This forces a
bounce page to be created. Everything, including the DMA tag, is then cleaned
up. This allocation/deallocation loop runs every second. While the module is
loaded, `vmstat -m | grep bounce` shows that M_BOUNCE malloc usage keeps
rising, even though everything should have been freed. Additionally, the
bounce page allocations can be observed with the dtrace one-liner
`dtrace -n 'dtmalloc::bounce: {}'`: dtrace prints the malloc calls, but no
frees are visible.
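
For reference, a minimal sketch of one iteration of the loop the module runs
is below. It is illustrative rather than the attached source verbatim: names
like repro_once/repro_cb are mine, error handling is cut down, and details
such as the NULL parent tag and the power-of-two malloc providing 64-byte
alignment are assumptions.

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/malloc.h>
#include <sys/bus.h>
#include <machine/bus.h>

static void
repro_cb(void *arg, bus_dma_segment_t *segs, int nseg, int error)
{
	/* Nothing to do; only the bounce-page side effects matter. */
}

static void
repro_once(void)
{
	bus_dma_tag_t tag;
	bus_dmamap_t map;
	char *buf;
	int error;

	/* Tag requiring 64-byte alignment. */
	error = bus_dma_tag_create(NULL,	/* parent (assumed OK here) */
	    64, 0,				/* alignment, boundary */
	    BUS_SPACE_MAXADDR,			/* lowaddr */
	    BUS_SPACE_MAXADDR,			/* highaddr */
	    NULL, NULL,				/* filter, filterarg (unused) */
	    PAGE_SIZE, 1, PAGE_SIZE,		/* maxsize, nsegments, maxsegsz */
	    0, NULL, NULL, &tag);		/* flags, lockfunc, lockarg */
	if (error != 0)
		return;
	if (bus_dmamap_create(tag, 0, &map) != 0) {
		bus_dma_tag_destroy(tag);
		return;
	}

	/*
	 * The power-of-two malloc is assumed to give at least 64-byte
	 * alignment; the +4 offset breaks it and forces bouncing.
	 */
	buf = malloc(256, M_DEVBUF, M_WAITOK | M_ZERO);
	error = bus_dmamap_load(tag, map, buf + 4, 64, repro_cb, NULL,
	    BUS_DMA_NOWAIT);
	if (error == 0)
		bus_dmamap_unload(tag, map);

	free(buf, M_DEVBUF);
	bus_dmamap_destroy(tag, map);
	/* Bounce pages left in the tag's pool are not released here. */
	bus_dma_tag_destroy(tag);
}

The attached module drives this from a timer once per second, so the growth
reported by `vmstat -m | grep bounce` is easy to watch.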

I've seen the issue on arm64 and amd64, but it should be present on every
hardware platform.

[1] https://cgit.freebsd.org/src/tree/sys/kern/subr_busdma_bounce.c#n293

--
You are receiving this mail because:
You are the assignee for the bug.
