Date:      Thu, 15 Jan 2004 06:45:55 +0200
From:      Vlad Galu <dudu@diaspar.rdsnet.ro>
To:        freebsd-net@freebsd.org
Subject:   Re: Handling 100.000 packets/sec or more
Message-ID:  <20040115064555.0efef38a.dudu@diaspar.rdsnet.ro>
In-Reply-To: <20040115054704.F20168@datacenter.office.suceava.rdsnet.ro>
References:  <Pine.WNT.4.58.0401141048001.2804@ady-home> <20040115054704.F20168@datacenter.office.suceava.rdsnet.ro>


Adrian Penisoara <ady@freebsd.ady.ro> writes:

|Hi again,
|
|  Thanks for all your answers.
|
|  A small comment though.
|
|Vlad Galu wrote:
|
|>	Try fxp. It has better polling support, and there's the
|>advantage of
|>the link0 flag. When it's set, the interface won't send interrupts to
|
| The man page says that only some versions of the chipset support this
|(microcode download). Do you (or anyone else) know the exact version(s)
|of the EtherExpress chip that support this (and perhaps you have
|tried it)?
|
| Oh well, looking at the source code it seems you can discern the
|enabled versions from here: sys/dev/fxp/rcvbundl.h (Intel source) and
|sys/dev/fxp/if_fxp.c (to the end of file).
|
| Resumed:
|
|   FXP_REV_82558_A4
|   FXP_REV_82558_B0
|   FXP_REV_82559_A0
|   FXP_REV_82559S_A
|   FXP_REV_82550
|   FXP_REV_82550_C
|
| Or by Intel revision codes:
|
|D101 A-step, D101 B-step, D101M (B-step only), D101S, D102 B-step,
|D102 B-step with TCO workaround and D102 C-step.
|
|  I did not quite understand whether the embedded ICH3/4 network
|interfaces are also "link0" enabled.
|
|>the kernel for each packet it catches from the wire, but instead will
|>wait until its own buffer is full, and generate an interrupt
|>afterwards. It should be a great improvement when associated with
|>device polling. As you surely know, when the kernel receives an
|>interrupt from an interface, it masks all further interrupts and
|>schedules a polling task instead.
|
|[...]
|
|>|  On a side note: what would be an adequate formula to calculate the
|>|NMBCLUSTERS and MBUFS we should set on this server (via boot-time
|>|kern.ipc.nmbclusters and kern.ipc.nmbufs)?
|>|
|>
|>	I'm still thinking about that ...
|>
|
|  Did you come up with anything ?

	The mbuf man page says that a packet can span multiple mbuf
structs. The mbuf memory is divided into mbuf clusters, each of them
MCLBYTES (2048 bytes) in size. Try to allocate as many NMBCLUSTERS as
you can while reserving some memory for userspace. If you want to
reserve, say, 256 MB of KVM for this, you could then have 131072 mbuf
clusters. Scaled 4 times, that gives 524288 - the total number of mbufs
available to the system. The larger this number, the more packets your
system can buffer.

	Hope this helps ...

|
|PS: Keep me in CC:. Thanks.
|
|-- 
|Adrian Penisoara
|Ady (@freebsd.ady.ro)
|
|On Wed, 14 Jan 2004, Adrian Penisoara wrote:
|
|> Hi,
|>
|>   At one site that I administer we have a gateway server which
|> services a large SOHO LAN (more than 300 stations) and I'm facing a
|> serious issue: very often we see strong spoofed floods (variable
|> source IP and port, variable destination IP, destination port 80)
|> which can go as far as 100,000 packets/sec!
|>
|>   Of course, the server (FreeBSD 5.2-REL, PIII 733MHz, 256MB RAM,
|> 3Com 3C905B-TX aka xl0 with checksum offloading support) has a hard
|> time swallowing this kind of traffic. The main issue is the IRQ
|> interrupts: over 15000 interrupts/sec, which consume more than 90% of
|> the CPU time. We have ingress filtering, so the packets go no further
|> than the firewall (which, BTW, is not the issue; even with it
|> disabled we have the same problem). The system is still responsive,
|> but the load average goes as high as 10 and the interface is losing
|> packets (input errors), which dramatically affects legitimate
|> traffic, besides mbuf(9) starvation. We are taking down the culprit
|> clients, but this takes time and we need the other clients not to be
|> affected by it.
|>
|>   What can I do to make the system better handle this kind of
|> traffic? Could device polling(8) or just increasing the kernel clock
|> frequency to 1000Hz or more improve the situation?
|>   What kind of network cards could handle this burden a lot better?
|> Are there any other solutions?
|>
|>   On a side note: what would be an adequate formula to calculate the
|> NMBCLUSTERS and MBUFS we should set on this server (via boot-time
|> kern.ipc.nmbclusters and kern.ipc.nmbufs)?
|>
|>  Thank you.
|>
|>
|
|_______________________________________________
|freebsd-net@freebsd.org mailing list
|http://lists.freebsd.org/mailman/listinfo/freebsd-net
|To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org"
|


----
If it's there, and you can see it, it's real.
If it's not there, and you can see it, it's virtual.
If it's there, and you can't see it, it's transparent.
If it's not there, and you can't see it, you erased it.



