Date:      Tue, 14 Aug 2018 05:41:53 -0700
From:      John Kennedy <warlock@phouka.net>
To:        bob prohaska <fbsd@www.zefox.net>
Cc:        Mark Millard <marklmi@yahoo.com>, Mark Johnston <markj@freebsd.org>, freebsd-arm <freebsd-arm@freebsd.org>
Subject:   Re: RPI3 swap experiments (grace under pressure)
Message-ID:  <20180814124153.GF81324@phouka1.phouka.net>
In-Reply-To: <20180814014226.GA50013@www.zefox.net>
References:  <20180809175802.GA32974@www.zefox.net> <20180812173248.GA81324@phouka1.phouka.net> <20180812224021.GA46372@www.zefox.net> <B81E53A9-459E-4489-883B-24175B87D049@yahoo.com> <20180813021226.GA46750@www.zefox.net> <0D8B9A29-DD95-4FA3-8F7D-4B85A3BB54D7@yahoo.com> <FC0798A1-C805-4096-9EB1-15E3F854F729@yahoo.com> <20180813185350.GA47132@www.zefox.net> <FA3B8541-73E0-4796-B2AB-D55CE40B9654@yahoo.com> <20180814014226.GA50013@www.zefox.net>

On Mon, Aug 13, 2018 at 06:42:26PM -0700, bob prohaska wrote:
> I understand that the RPi isn't a primary platform for FreeBSD.
> But, decent performance under overload seems like a universal
> problem that's always worth solving, ...

I don't think anything we're talking about explicitly rules out the RPI as
a platform, one way or the other, except in its ability to soak up abuse.
I think what you're pitching is basically a scheduling change (in which tasks
get run, with a not-insignificant trickle-down to how they swap).

I think the "general case" is "plan your load so everything runs in RAM", but
knowing that there's generally an 80/20 rule (don't get hung up on the specific
numbers -- there is a line, and I'm sure it moves around) of memory that doesn't
need to stay resident to memory that does.  Oversubscription.  And as far as
the scheduler goes, it just ends up with a mess of dynamic needs.

Personally, I consider swap a kludge and a kind of overdraft protection.
I'm writing a bunch of checks and hoping they won't all get cached
(sorry, pun) at the same time, but sometimes that is beyond my control.

> There's at least some degree of conflict between all of them, 
> made worse when the workload grows beyond the design assumptions.
> The RPI makes the issue more visible, but it's always lurking.

I think slow peripherals and lack of memory are the real targets.  I'd never
stick my swap onto something like a USB card if I didn't have to.

> OOMA seems to sacrifice getting work done, potentially entirely,
> in support of keeping the system responsive and under control.

I'm not a fan of OOM-death, but I think I understand the logic.  It would be
awesome if there were a scheduling algorithm that could balance everything
happily, but I think this tradeoff basically boils down to responsiveness
(is process A getting CPU time?) versus oversubscription of resources.  "The
beatings will continue until morale improves", except we're talking about
processes being taken out back and shot in the head.

Killing them seems extreme and somewhat arbitrary, but I'd quickly degenerate
into a list of what I'd prefer were killed, and the code mess it might take to
implement that.  I can easily imagine scenarios that amount to a swap deadlock
(no way to get a process into RAM to run in order to be "responsive"), where
you have to make a decision: kill this thing that is basically hung, or let it
stick around indefinitely?  And how many times do you do that before ALL swap
is exhausted?  We're not talking about checking your malloc() return code;
we're talking about not even being able to grow a stack to make that call.

So I can see keeping OOM-killing as a last-ditch defense against total
system failure (knowing that any OOM-killing might leave a not-failed but
useless system if a vital service is sacrificed).  We're just talking about
a knob to control the threshold where it becomes palatable.

I don't think there is enough info to make the kind of informed decision we'd
like the scheduler to make.


