From: Alfred Perlstein <bright@mu.org>
Subject: Re: svn commit: r242910 - in user/andre/tcp_workqueue/sys: kern sys
Date: Mon, 12 Nov 2012 09:01:37 -0800
To: Andre Oppermann
Cc: src-committers@freebsd.org, svn-src-user@freebsd.org
Message-Id: <3D373186-09E2-48BC-8451-E4439F99B29D@mu.org>
List-Id: SVN commit messages for the experimental "user" src tree

I will take care of it then. Thank you.

Sent from my iPhone

On Nov 12, 2012, at 8:14 AM, Andre Oppermann wrote:

> On 12.11.2012 16:52, Alfred Perlstein wrote:
>> If maxusers is set (loader.conf/config(8)) can you please revert to
>> maxusers-based limits?
>
> No. That's way too complicated.
>
> --
> Andre
>
>> Sent from my iPhone
>>
>> On Nov 12, 2012, at 2:49 AM, Andre Oppermann wrote:
>>
>>> On 12.11.2012 09:47, Andre Oppermann wrote:
>>>> Author: andre
>>>> Date: Mon Nov 12 08:47:13 2012
>>>> New Revision: 242910
>>>> URL: http://svnweb.freebsd.org/changeset/base/242910
>>>>
>>>> Log:
>>>>   Base the mbuf related limits on the available physical memory or
>>>>   kernel memory, whichever is lower.
>>>
>>> The commit message is a bit terse, so I'm going to explain in more
>>> detail:
>>>
>>> The overall mbuf-related memory limit must be set so that mbufs
>>> (and clusters of various sizes) can't exhaust physical RAM or KVM.
>>>
>>> I've chosen a limit of half the physical RAM or KVM (whichever is
>>> lower) as the baseline. In any normal scenario we want to leave
>>> at least half of the physmem/KVM for other kernel functions and
>>> for userspace, to prevent the machine from swapping heavily. Via a
>>> tunable the limit can be raised to at most 3/4 of physmem/KVM.
>>>
>>> Out of the overall mbuf memory limit, 2K clusters and 4K (page
>>> size) clusters get 1/4 each, because these are the most heavily
>>> used mbuf sizes. 2K clusters are used for MTU 1500 ethernet
>>> inbound packets. 4K clusters are used whenever possible for sends
>>> on sockets, and thus for outbound packets.
>>>
>>> The larger cluster sizes of 9K and 16K are each limited to 1/6 of
>>> the overall mbuf memory limit. Again, when jumbo MTUs are used,
>>> these large clusters end up only on the inbound path. They are not
>>> used outbound; there it's still 4K. Yes, that will stay that way,
>>> because otherwise we run into lots of complications in the stack.
>>> And it really isn't a problem, so don't make a scene.
>>>
>>> Previously the normal mbufs (256B) weren't limited at all. That
>>> was wrong, as there are certain places in the kernel that, on
>>> allocation failure for clusters, try to piece together their
>>> packet from smaller mbufs.
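The split described above (half of the smaller of physical RAM and KVM as the baseline, then 1/4 each for 2K and 4K clusters and 1/6 each for 9K and 16K) can be sketched as a small C function. This is a hypothetical illustration, not the actual sys/kern/kern_mbuf.c code: the struct and function names are made up, and the cluster-size macros are defined locally with their stock FreeBSD values rather than taken from sys/param.h.

```c
#include <stdint.h>

/* Cluster sizes, defined locally for self-containment; these match
 * the stock FreeBSD values (MJUMPAGESIZE assumes 4K pages). */
#define MCLBYTES     2048          /* 2K cluster */
#define MJUMPAGESIZE 4096          /* page-size (4K) cluster */
#define MJUM9BYTES   (9 * 1024)    /* 9K jumbo cluster */
#define MJUM16BYTES  (16 * 1024)   /* 16K jumbo cluster */

struct mbuf_limits {
	uint64_t nmbclusters;      /* 2K clusters */
	uint64_t nmbjumbop;        /* 4K (page size) clusters */
	uint64_t nmbjumbo9;        /* 9K clusters */
	uint64_t nmbjumbo16;       /* 16K clusters */
};

/*
 * Hypothetical sketch of the sizing policy: baseline is half of the
 * lower of physical RAM and KVM (a tunable can raise it to 3/4).
 */
static struct mbuf_limits
mbuf_limits(uint64_t physmem, uint64_t kva)
{
	uint64_t mem = (physmem < kva ? physmem : kva) / 2;
	struct mbuf_limits lim;

	lim.nmbclusters = mem / 4 / MCLBYTES;      /* 1/4 for 2K */
	lim.nmbjumbop   = mem / 4 / MJUMPAGESIZE;  /* 1/4 for 4K */
	lim.nmbjumbo9   = mem / 6 / MJUM9BYTES;    /* 1/6 for 9K */
	lim.nmbjumbo16  = mem / 6 / MJUM16BYTES;   /* 1/6 for 16K */
	return (lim);
}
```

Feeding in 16GB of physical RAM (with larger KVM) reproduces the 16GB example worked out later in the mail: 1,048,576 2K clusters, 524,288 4K, 155,344 9K, and 87,381 16K.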
>>> The mbuf limit is the number of all the other mbuf sizes together,
>>> plus some more to allow for standalone mbufs (ACKs, for example)
>>> and for sending off a copy of a cluster. FYI: every cluster
>>> eventually also has an mbuf associated with it.
>>>
>>> Unfortunately there isn't a way to set an overall limit for all
>>> mbuf memory together, as UMA doesn't support that kind of limiting.
>>>
>>> Let's work out a few examples on sizing:
>>>
>>> 1GB KVM:
>>>      512MB limit for mbufs
>>>    419,430 mbufs
>>>     65,536 2K mbuf clusters
>>>     32,768 4K mbuf clusters
>>>      9,709 9K mbuf clusters
>>>      5,461 16K mbuf clusters
>>>
>>> 16GB RAM:
>>>        8GB limit for mbufs
>>> 33,554,432 mbufs
>>>  1,048,576 2K mbuf clusters
>>>    524,288 4K mbuf clusters
>>>    155,344 9K mbuf clusters
>>>     87,381 16K mbuf clusters
>>>
>>> These defaults should be sufficient for even the most demanding
>>> network loads. If you do run into these limits, you probably know
>>> exactly what you are doing and are expected to tune the values
>>> for your particular purpose.
>>>
>>> There is a side issue with maxfiles, as it relates to the maximum
>>> number of sockets that can be open at the same time. With today's
>>> web servers and proxy caches there may be some 100K or more
>>> sockets open. Hence I've divorced maxfiles from maxusers as well.
>>> There is a relationship between maxfiles and the callout callwheel,
>>> though, which has to be investigated some more to prevent
>>> ridiculous values from being chosen.
>>>
>>> --
>>> Andre
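As a sanity check on the 1GB KVM example, the cluster counts can be recomputed from the stated fractions and summed. Since every cluster eventually carries an mbuf, the 419,430 mbuf limit has to exceed this total, leaving headroom for standalone mbufs; the helper below is purely illustrative.

```c
#include <stdint.h>

/*
 * Total cluster count for the 1GB-KVM example above, recomputed from
 * the fractions in the mail (1/2 baseline; 1/4, 1/4, 1/6, 1/6 for
 * 2K/4K/9K/16K). Sizes are in bytes; a 9K cluster is 9*1024 = 9216.
 */
static uint64_t
cluster_total_1gb_kvm(void)
{
	uint64_t mem = (1ULL << 30) / 2;  /* 512MB overall mbuf limit */

	return (mem / 4 / 2048        /* 65,536 2K clusters */
	      + mem / 4 / 4096        /* 32,768 4K clusters */
	      + mem / 6 / 9216        /*  9,709 9K clusters */
	      + mem / 6 / 16384);     /*  5,461 16K clusters */
}
```

The sum comes to 113,474 clusters, well under the 419,430-mbuf limit, which matches the statement that the mbuf limit covers all other sizes together "plus some more".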