Date: 02 Jan 2003 14:23:14 -0500
From: Lowell Gilbert <freebsd-questions-local@be-well.no-ip.com>
To: freebsd-questions@freebsd.org
Subject: Re: kern.maxfiles guidelines
Message-ID: <44lm23mk4d.fsf@be-well.ilk.org>
In-Reply-To: <003501c2b27c$975034a0$3c01010a@mwimpee>
References: <003501c2b27c$975034a0$3c01010a@mwimpee>
"Michael Wimpee" <mwimpee@nbusa.com> writes: > errors into the syslog. Newsgroup posts all seem to prescribe 'sysctl -w > kern.maxfiles=[big number]', but I haven't seen any guidelines for the > value of 'big'. Assume I get excited and do 'sysctl -w > kern.maxfiles=9999999999'. What will happen as I open more and more > files? Is there a formula for calculating good values of 'big' (eg, MB > RAM * SQL_MAX_CONNECTIONS * Pi)? Or do I just keep increasing it until > it's 'big enough'? Unless you have an a priori method of determining the most file handles that should ever be needed simultaneously, empirical methods are the best choice available -- and will do fine. > Increasing the value (which I've done) indeed fixes the problem, but > I've yet to see a rationale for the stated values people are using and > there *must* be a reason for the defaults (anybody know what it is?). It's a compromise between running out of file handles and wasting memory on the file table. To Unsubscribe: send mail to majordomo@FreeBSD.org with "unsubscribe freebsd-questions" in the body of the message