Date:      Wed, 8 Nov 2000 18:17:14 +0000 (GMT)
From:      Terry Lambert <tlambert@primenet.com>
To:        dcs@newsguy.com (Daniel C. Sobral)
Cc:        tlambert@primenet.com (Terry Lambert), arch@FreeBSD.ORG
Subject:   Re: softdep panic due to blocked malloc (with traceback)
Message-ID:  <200011081817.LAA21138@usr08.primenet.com>
In-Reply-To: <3A09346F.7543C1DD@newsguy.com> from "Daniel C. Sobral" at Nov 08, 2000 08:09:35 PM

> > I haven't seen an occurrence of one in nature (well, AIX) in at
> > least 5 years.
> 
> I did... :-( 
> And wished the damned application knew about the signal and stopped
> hogging memory.

???

It's my experience that if you don't trap the thing, you
terminate.  Did your application ignore the signal when you
didn't want it to, or did it terminate when you didn't want
it to?

If the former, I'd have to say that was a very badly
behaved program indeed: if it trapped and ignored this
signal, it's probably trapping and ignoring other
signals (all of them?) that it's too stupid to handle
properly.

If the latter, well, that's a consequence of running within
a memory overcommit architecture, without enough swap, and
no emergency measures, like the ones which are being put
forward by Poul and DES in this thread.

I think that, under no circumstances, should a process be
permitted to wedge the system, no matter how loaded it gets.
Wedging individual processes is acceptable, IMO, until such
time as the necessary resources become available.  This may
include such things as preallocation of trap/fault handling
resources, and blocking programs in the trap/fault handler
for an indefinite period of time.  For example, consider an
overcommitted data page that takes a copy-on-write fault
when there are no pages available to copy to.  If sleeping
such processes carries no extra cost, then sleeping them is
not a big deal.
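
A toy model of that blocking behavior (the page pool and its size
are invented for illustration; in-kernel, the failure path would
sleep in the fault handler rather than return):

```c
/* Hypothetical page pool standing in for the VM free-page list.
 * A copy-on-write fault against an exhausted pool blocks only the
 * faulting process, never the whole system. */
#define POOL_SIZE 4

static int pages_free = POOL_SIZE;

/* Returns 1 if the fault was serviced, 0 if the process must sleep
 * until another process releases a page. */
static int cow_fault_try(void)
{
    if (pages_free > 0) {
        pages_free--;        /* take a fresh page for the private copy */
        return 1;
    }
    return 0;                /* no page: block this process only */
}

static void page_release(void)
{
    pages_free++;            /* a sleeper would be woken here */
}
```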

I think a partial fix has to be the idea that some code ought
to be statically linked, have the sticky bit set, and have
the kernel honor the traditional sticky bit semantics.  I put
"init", "inetd", "/bin/login", "/bin/sh", "/bin/kill", and
similar programs needed to recover a wedged system into this
camp.


The whole idea of "sacrosanct" processes really ignores the
idea that one of those processes could be a victim of a DOS
attack.  I really oppose something like "SIGDANGER" because
of this.  I think user space processes can be adequately
managed through administrative limits on resource consumption.
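
Such administrative limits already exist in the form of
setrlimit(2); a sketch capping a process's own data segment:

```c
#include <sys/resource.h>

/* Cap this process's data segment.  With the soft limit in place
 * the kernel fails the process's own allocations past the cap,
 * instead of letting one process starve everyone else. */
static int cap_data_segment(rlim_t bytes)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_DATA, &rl) != 0)
        return -1;
    if (bytes > rl.rlim_max)
        bytes = rl.rlim_max; /* cannot raise above the hard limit */
    rl.rlim_cur = bytes;     /* soft limit: allocations past this fail */
    return setrlimit(RLIMIT_DATA, &rl);
}
```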

I think the real problem will be kernel space resource
utilization, where administrative limits aren't enforced,
either out of fear of deadlock, or for other less defensible
reasons.  The biggest things I would put in this category
are things that run in kernel threads, and so need access
to scheduler quantum, in order to run.

It seems to me that one approach would be to run these to
completion in the resource allocation handlers, rather than
in the background (the same way the Dynix allocator handles
freeing resources to the system, and doing page coalesces).
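
A toy model of that run-to-completion idea, with a deferred free
list reclaimed inline by the allocator itself (the slot pool is
invented for illustration):

```c
/* Freed slots are only marked dirty; the "background" reclamation
 * work runs to completion inside the allocator when a request would
 * otherwise fail, so no kernel thread needs scheduler quantum for
 * the system to make progress. */
#define SLOTS 8

static int slot_used[SLOTS];
static int slot_dirty[SLOTS];    /* freed, but not yet reclaimed */

static void coalesce(void)       /* the deferred work, done inline */
{
    for (int i = 0; i < SLOTS; i++)
        if (slot_dirty[i]) {
            slot_dirty[i] = 0;
            slot_used[i] = 0;
        }
}

/* Returns a slot index, or -1 if none remain even after reclaiming. */
static int alloc_slot(void)
{
    for (int pass = 0; pass < 2; pass++) {
        for (int i = 0; i < SLOTS; i++)
            if (!slot_used[i]) {
                slot_used[i] = 1;
                return i;
            }
        coalesce();              /* run reclamation to completion, retry */
    }
    return -1;
}

static void free_slot(int i)
{
    slot_dirty[i] = 1;           /* deferred: normally a background
                                    thread would pick this up later */
}
```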


For things that absolutely have to run in the background, and
so are a kernel thread or a kernel process or whatever you
want to call it, I think that the answer might be to change
the scheduler to support multiple scheduling classes, and
then put these things into a fixed scheduling class.  This
would guarantee that they were allotted quanta, up to a certain
fixed percentage of available quanta, if they needed it.  It
would let them run things like I/O to completion.  UnixWare
has successfully used a similar approach, since UnixWare 2.0
(1994), with moderately good results.
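
The closest portable analog today is POSIX SCHED_FIFO via
sched_setscheduler(2), though it lacks the percentage cap described
above, and applying it requires privilege:

```c
#include <sched.h>
#include <sys/types.h>

/* Move a process into the fixed-priority class SCHED_FIFO, the
 * POSIX analog of the UnixWare fixed scheduling class: members are
 * guaranteed quantum ahead of timesharing processes, so they can
 * run their I/O to completion.  Needs privilege; callers must
 * handle failure. */
static int make_fixed_class(pid_t pid)
{
    struct sched_param sp;

    sp.sched_priority = sched_get_priority_min(SCHED_FIFO);
    return sched_setscheduler(pid, SCHED_FIFO, &sp);
}
```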


I also think that working set quotas for any resource are
probably a good idea, since it puts the offender into its own
competition domain, where it can only thrash itself, and no
one else notices it.  A good example of this would be "TCP/IP
connection requests from w.x.y.z", where you limit the allowed
number of outstanding requests based on the requester, rather
than trying to do it globally.  This still leaves you open to
attacks from multiple source addresses simultaneously, but
that needs to be handled anyway, and probably can't be handled
at a global resource utilization level, and still permit
legitimate traffic.
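
A sketch of such a per-source quota table (bucket count and quota
are invented illustration values):

```c
#include <stdint.h>

/* Per-source competition domain for connection requests: each
 * source address gets its own quota of outstanding requests, so a
 * flood from one host exhausts only that host's slots. */
#define SRC_BUCKETS 256
#define PER_SRC_MAX 8            /* outstanding requests allowed per source */

static int outstanding[SRC_BUCKETS];

static unsigned src_bucket(uint32_t src_ip)
{
    return src_ip % SRC_BUCKETS; /* toy hash; colliding sources share */
}

/* Returns 1 if the request is admitted, 0 if this source is over quota. */
static int admit_request(uint32_t src_ip)
{
    unsigned b = src_bucket(src_ip);

    if (outstanding[b] >= PER_SRC_MAX)
        return 0;                /* thrash yourself, not the system */
    outstanding[b]++;
    return 1;
}

static void request_done(uint32_t src_ip)
{
    outstanding[src_bucket(src_ip)]--;
}
```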


					Terry Lambert
					terry@lambert.org
---
Any opinions in this posting are my own and not those of my present
or previous employers.





