Date:      Sun, 12 Mar 2000 18:23:52 -0500 (EST)
From:      Howard Leadmon <howardl@account.abs.net>
To:        Alfred Perlstein <bright@wintelcom.net>
Cc:        freebsd-hackers@FreeBSD.ORG
Subject:   Re: Buffer Problems and hangs in 4.0-CURRENT..
Message-ID:  <200003122323.SAA41606@account.abs.net>
In-Reply-To: <20000312132811.N14279@fw.wintelcom.net> from Alfred Perlstein at "Mar 12, 2000 01:28:11 pm"


> > Copyright (c) 1992-2000 The FreeBSD Project.
> ...
> > real memory  = 402587648 (393152K bytes)
> > config> q
> > avail memory = 387334144 (378256K bytes)
> > Programming 24 pins in IOAPIC #0
> > IOAPIC #0 intpin 2 -> irq 0
> > FreeBSD/SMP: Multiprocessor motherboard
> >  cpu0 (BSP): apic id:  0, version: 0x00040011, at 0xfee00000
> >  cpu1 (AP):  apic id:  1, version: 0x00040011, at 0xfee00000
> >  io0 (APIC): apic id:  2, version: 0x00170011, at 0xfec00000
> > 
> > Did I miss anything important you need??
> 
> No, that's fine.  I run several machines with maxusers at 512 and
> NMBCLUSTERS at 32768 (although the RAM is usually 512MB to 1GB);
> let me know if you have any problems with those settings, though, as
> I'd like to know if they are set too high for heavy load.
> 
> I would also suggest using fxp cards (Intel EtherExpress Pro) in
> the future; they are definitely my favorite.
> 
> --
> -Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]


I'll let you know how things pan out, but I just did a cvsup to the
-CURRENT code as of today, and also changed MAXUSERS to 256 and
NMBCLUSTERS to 20480 in the kernel config.
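
For reference, the lines I touched in the config file look more or less
like this (option names from memory, so double-check against your own
config before trusting them):

    maxusers        256
    options         NMBCLUSTERS=20480

Then it's the usual config / make depend / make / make install dance and
a reboot to pick up the new limits.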

I did do a netstat -m on the box and see:

u2.abs.net$ netstat -m   
3798/4800/81920 mbufs in use (current/peak/max):
        2179 mbufs allocated to data
        1619 mbufs allocated to packet headers
784/1152/20480 mbuf clusters in use (current/peak/max)
2904 Kbytes allocated to network (70% in use)
0 requests for memory denied
0 requests for memory delayed
0 calls to protocol drain routines
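
To actually catch the numbers before the box wedges, I may leave
something like this running in the background (just a rough sketch,
and the log path is arbitrary):

    # sample mbuf usage once a minute so the stats survive a hang
    while true; do
        date
        netstat -m
        sleep 60
    done >> /var/log/mbuf-watch.log &

That way, if "requests for memory denied" ever goes non-zero, there
will at least be a timestamp on disk for it.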


The box is not even close to full load yet, though; I will see that
later tonight when everyone gets on to chat.  I don't remember it ever
having 81k mbufs available, so I guess changing MAXUSERS bumped that up.
Anyhow, I'll try to keep an eye on it.  The real problem is that it just
dies and needs a reboot at times, so I never get to see whether I had
any failures.. :(

As for the EEPro, I thought Intel had dropped that card, but maybe I am
wrong.  I used the DEC-based cards since I had seen so many people raving
about them, and at least under Solaris the DEC tulip-based boards are
supposedly the hot ticket.  Do you think the Intel board would really work
that much better under FBSD?  I can always try to find one to test..



---
Howard Leadmon - howardl@abs.net - http://www.abs.net
ABSnet Internet Services - Phone: 410-361-8160 - FAX: 410-361-8162






