Date:      Fri, 8 Jun 2018 11:16:42 -0700
From:      Gleb Smirnoff <>
To:        Kubilay Kocak <>
Subject:   Re: svn commit: r334819 - head/sys/vm
Message-ID:  <>
In-Reply-To: <>
References:  <> <>

On Fri, Jun 08, 2018 at 03:07:07PM +1000, Kubilay Kocak wrote:
K> >   UMA memory debugging enabled with INVARIANTS consists of two things:
K> >   trashing freed memory and checking that allocated memory is properly
K> >   trashed, and also of keeping a bitset of freed items. Trashing/checking
K> >   creates a lot of CPU cache poisoning, while keeping debugging bitsets
K> >   consistent creates a lot of contention on UMA zone lock(s). The performance
K> >   difference between INVARIANTS kernel and normal one is mostly attributed
K> >   to UMA debugging, rather than to all KASSERT checks in the kernel.
K> >   
K> >   Add loader tunable vm.debug.divisor that allows either to turn off UMA
K> Is 'sample interval' a standard/common enough term for this kind of
K> mechanism to name the sysctl with it rather than the implementation?
K> Or 'sample frequency'

"Interval" definitely doesn't fit here. "Frequency" is closer, but still not the
right term, IMHO. A native speaker is needed to judge. I am fine with anyone
who is confident changing the wording here.

Gleb Smirnoff
