Date:      Thu, 15 Apr 1999 14:44:24 -0700
From:      "David Schwartz" <davids@webmaster.com>
To:        "Daniel C. Sobral" <dcs@newsguy.com>
Cc:        <chat@FreeBSD.ORG>
Subject:   RE: swap-related problems
Message-ID:  <000101be8789$210927b0$021d85d1@whenever.youwant.to>
In-Reply-To: <37164F6C.52BEB501@newsguy.com>

> No, it doesn't make sense. OK, you pre-allocate for some processes.
> The memory gets filled. Then one of the processes using allocation on
> demand needs more memory. But there is no free memory, so a process
> gets killed.

	Huh? What memory?

> You are back where you started. Pre-allocation only works if you do
> it for all processes. And if you do that... Well, run the experiment
> I mentioned above.

	What the hell are you talking about?

	If, for the vital process, you always reserve enough swap space to allow it
to dirty all its pages, then that one process will never get killed because
of overcommitting.
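
	To make the idea concrete, here is a rough user-level sketch (not anything the kernel does for you automatically): assume the vital process knows its worst-case working set, allocate that much at startup, and write one byte per page so every page is dirtied while backing store is still available, rather than lazily at some riskier later moment. The 64 MB figure is invented purely for illustration.

/*
 * Sketch only: a "vital" process dirties its whole worst-case working
 * set up front.  WORKING_SET_BYTES is a made-up figure.
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define WORKING_SET_BYTES (64UL * 1024 * 1024)

int
main(void)
{
	size_t pagesize = (size_t)getpagesize();
	size_t off;
	char *pool = malloc(WORKING_SET_BYTES);

	if (pool == NULL) {
		fprintf(stderr, "cannot reserve working set\n");
		return 1;
	}

	/* Touch one byte per page so every page is dirtied (and given
	 * physical backing) now, instead of on first use later. */
	for (off = 0; off < WORKING_SET_BYTES; off += pagesize)
		pool[off] = 1;

	/* ... the vital work then lives inside pool ... */
	return 0;
}

	This is only an approximation of a kernel-side swap reservation; mlock(2) on the region would pin it in RAM as well. The point is just to show what "dirty all its pages" means in practice.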

> >         The problem is, there is no guarantee that I'll be able to use up
> > to my memory limit. So I don't see how this helps. An overcommit can
> > still result in a vital process being summarily terminated.
>
> Eh? What are you talking about? No guarantee?
>
> See, you have this process that wants to eat as much memory as
> possible. If you are not pre-allocating for all processes, you will
> have to know which process that is. Knowing that, you set a memory
> limit that leaves you with enough memory to run the rest of the
> system.

	I see, so to make X work, you have to fix everything else? That doesn't
make a whole lot of sense. Besides, in a multi-user environment, resource
limits don't work that way. To ensure that a vital process doesn't get
killed, you'd have to make everyone else's memory limits way too low to be
usable.
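
	For reference, the per-process cap Daniel is describing roughly amounts to a setrlimit(2) call before exec'ing the untrusted program, along these lines (the 8 MB number is invented; picking a value that is both safe for the vital process and still usable for everyone else is precisely the hard part):

#include <sys/types.h>
#include <sys/time.h>
#include <sys/resource.h>
#include <stdio.h>

int
main(void)
{
	struct rlimit rl;

	/* Hypothetical cap, for illustration only. */
	rl.rlim_cur = 8UL * 1024 * 1024;
	rl.rlim_max = 8UL * 1024 * 1024;

	if (setrlimit(RLIMIT_DATA, &rl) == -1) {
		perror("setrlimit");
		return 1;
	}

	/* ... exec the memory-hungry program here ... */
	return 0;
}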

> Now, this is a limit you *must* find for any solution you try,
> *except* pre-allocating everything. If you are not pre-allocating
> everything, you'll have to know how much memory you can use before
> processes get killed.

	No. You reserve swap for the vital process.

> For example, suppose one uses the "solution" you mention above.
> Memory gets filled, and then some other process might need more
> memory, and something gets killed. There is no way around this. If
> you think there is a way around it, think again. If you still think
> so, try to explain, in detail. It is unavoidable.

	I already did. You reserve enough swap to allow the vital process to dirty
all its pages. It really is that simple.

> Now, we know processes will get killed. So, next, you want to be
> able to tell which processes must not get killed. Well, things will
> get killed as long as memory is needed. So, you need to be running
> processes for the express purpose of them getting killed, freeing
> memory so the system can run without anything getting killed.
>
> Surprise, what you did is figure out how much free memory is needed
> for the system to run without anything getting killed. This is
> equivalent to figuring out how much memory to limit a process to.
>
> If you won't/can't find how much memory each process/user can safely
> use, your only solution is to pre-allocate everything.

	I'm sorry, I don't understand at all. Please explain why reserving enough
swap to allow the vital process to dirty all its pages is not a sufficient
solution.

	DS


