Date:      Tue, 14 Jul 2015 08:10:29 -0700
From:      Sean Chittenden <seanc@groupon.com>
To:        Adrian Gschwend <ml-ktk@netlabs.org>
Cc:        FreeBSD Filesystems <freebsd-fs@freebsd.org>
Subject:   Re: FreeBSD 10.1 Memory Exhaustion
Message-ID:  <CACfj5vJvAz9StvjTrA1TzfS+Mhi_qSrOc_qBNHr8qXbiAj81xw@mail.gmail.com>
In-Reply-To: <55A4E5AB.8060909@netlabs.org>
References:  <CAB2_NwCngPqFH4q-YZk00RO_aVF9JraeSsVX3xS0z5EV3YGa1Q@mail.gmail.com> <55A3A800.5060904@denninger.net> <55A4D5B7.2030603@freebsd.org> <55A4E5AB.8060909@netlabs.org>

I think the reason this is not seen more often is that people frequently
put limits on the ARC in /boot/loader.conf:

vfs.zfs.arc_min="18G"
vfs.zfs.arc_max="149G"

The ZFS ARC *should* not require those settings, but currently it does for
mixed workloads (e.g. databases) in order to be "stable".  By setting fixed
sizes on the ARC, UMA and the ARC cooperate much better, since each has its
own memory region to manage, so this behavior is not seen as often.
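
For reference, on a 10.x box you can compare the live ARC size against
those limits with the stock sysctls; roughly the following (exact OIDs may
vary a bit between releases):

# current ARC size (bytes) vs. the configured floor and ceiling
sysctl kstat.zfs.misc.arcstats.size
sysctl vfs.zfs.arc_min vfs.zfs.arc_max
# kernel memory ceiling that everything, ARC included, has to fit under
sysctl vm.kmem_size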

To be clear, however, it should not be necessary to set parameters like
these in /boot/loader.conf in order to obtain consistent operational
behavior.  I'd be curious to know whether someone running 10.2 BETA without
patches is able to trigger this behavior.  There was work done between 10.1
and now that reportedly helped with this.  To what extent it helped,
however, I can't say yet.
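
If anyone does try to reproduce this on 10.2 BETA, one crude way to watch
for it is to log the ARC size next to the wired and free page counts while
the workload runs, something like:

# arcstats.size is in bytes; the vm.stats counters are in pages
while :; do
    date
    sysctl -n kstat.zfs.misc.arcstats.size vm.stats.vm.v_wire_count vm.stats.vm.v_free_count
    sleep 10
done

That should make it fairly obvious if wired memory keeps climbing while the
ARC holds on to what it has.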

-sc



On Tue, Jul 14, 2015 at 3:34 AM, Adrian Gschwend <ml-ktk@netlabs.org> wrote:

> On 14.07.15 11:26, Matthew Seaman wrote:
>
>
> > On 07/13/15 12:58, Karl Denninger wrote:
> >> Put this on your box and see if the problem goes away.... :-)
>
> [...]
>
> > I know that you, Karl, and a number of others have been advocating to
> > get this patch set committed.  Having now personally run into the sort
> > of problems that this addresses I can say that I would very much like to
> > see this go in.  Conditional of course on this actually solving the
> > problems I and others have been experiencing without introducing
> > significant regressions elsewhere. It's only had a day's testing from me
> > so far, but it's looking good.  If it survives a week without the system
> > locking up, I'll be convinced.
>
> I was the one who posted the message last year that prompted Karl to
> analyze it, as he was seeing similar issues:
>
> https://lists.freebsd.org/pipermail/freebsd-fs/2014-March/019043.html
>
> https://lists.freebsd.org/pipermail/freebsd-fs/2014-March/019057.html
>
> Since then I have been running with Karl's patch and have never had any
> issues. Note that my boxes were basically unusable without the patch.
>
> So since then I've basically been hoping that the patch will be committed soon.
>
> >    * The memory exhaustion effect or equivalent memory pressures can be
> >      triggered at will
> >    * The test doesn't require unfeasibly large resources to run
> >    * The behaviour provides a good model for real-world deployments
> >
> > Maybe these tests would be too large-scale to run every day in Jenkins,
> > but having them available as part of, say, the release process, seems
> > like a no-brainer to me.
>
> I wouldn't consider my setup "unfeasibly large resources"; in fact I
> triggered it with a bunch of jails running on a single machine that
> provides various Internet services for a small Open Source community. I
> was always surprised that more people didn't run into this issue, as I
> have had it since 8.x.
>
> regards
>
> Adrian
>



-- 
Sean Chittenden


