Date:      Sun, 2 Oct 2011 17:45:45 +0400
From:      Lev Serebryakov <lev@FreeBSD.org>
To:        Davide Italiano <davide.italiano@gmail.com>
Cc:        freebsd-hackers@freebsd.org, lev@freebsd.org
Subject:   Re: Memory allocation in kernel -- what to use in which situation? What is the best for page-sized allocations?
Message-ID:  <1393358703.20111002174545@serebryakov.spb.ru>
In-Reply-To: <CACYV=-FNM-3fcYzFGc9eFajdoBmG1E-rWo6tq-OwBefGPADywA@mail.gmail.com>
References:  <358651269.20111002162109@serebryakov.spb.ru> <CACYV=-FNM-3fcYzFGc9eFajdoBmG1E-rWo6tq-OwBefGPADywA@mail.gmail.com>

Hello, Davide.
You wrote on 2 October 2011, 16:57:48:

>> But what if I need to allocate a lot (say, 16K-32K) of page-sized
>> blocks? Not in one chunk, for sure, but over the lifetime of my kernel
>> module. Which allocator should I use? It seems the best one would be a
>> very low-level, page-sized-only allocator. Is there any such allocator
>> in the kernel?

> My 2 cents:
> Every time you request an amount of memory bigger than 4KB using
> kernel malloc(), it results in a direct call to uma_large_malloc().
> Right now, uma_large_malloc() calls kmem_malloc() (i.e. the memory is
> requested directly from the VM).
> This kind of approach has two main drawbacks:
> 1) it heavily fragments the kernel heap
> 2) when free() is called on these multipage chunks, it in turn calls
> uma_large_free(), which immediately calls the VM system to unmap and
> free the chunk of memory.  The unmapping requires a system-wide TLB
> shootdown, i.e. a global action by every processor in the system.

> I'm currently working, supervised by alc@, on an intermediate layer
> that sits between UMA and the VM, whose goal is to efficiently satisfy
> requests larger than 4KB (so, the one you want, considering you're
> asking for 16KB-32KB), but the work is in an early stage.
  I was not very clear here. I'm talking about page-sized blocks, but
 many of them: 16K-32K is not a size in bytes, but the count of
 page-sized blocks my code needs :)

  BTW, I/O often requires big buffers, up to MAXPHYS (128KiB for now).
 Do you mean that any allocation of such memory carries considerable
 performance penalties, especially on multi-core and multi-CPU systems?

-- 
// Black Lion AKA Lev Serebryakov <lev@FreeBSD.org>



