Date:      Mon, 10 Jan 2005 15:07:03 +0100
From:      Max Laier <max@love2party.net>
To:        freebsd-pf@freebsd.org
Cc:        Bernhard Schmidt <berni@birkenwald.de>
Subject:   Re: Scalability of ALTQ
Message-ID:  <200501101507.10501.max@love2party.net>
In-Reply-To: <slrnctu80f.aet.berni@bschmidt.msgid.cybernet-ag.net>
References:  <slrnctu80f.aet.berni@bschmidt.msgid.cybernet-ag.net>


On Saturday 08 January 2005 00:49, Bernhard Schmidt wrote:
> We're currently using an old and overloaded Packeteer PacketShaper for
> IP-over-Satellite deployment. Since this business will be expanded in
> the coming weeks to a level the Packeteer cannot cope with anymore, I'm
> looking for alternatives in the Unix world, especially pf and ALTQ,
> since they are impressively easy to configure and maintain.
>
> There currently are four encapsulating units with 30-40 Mbps IP
> bandwidth each. Every unit has up to ten first-level customers with
> about 20 second-level customers behind them. Every customer (first and
> second level) has a committed rate and a burstable rate. There is no
> oversubscribing in this business with regard to the committed rates,
> while burstable may be used (up to a paid amount) as long as no one else
> needs it.
>
> The address ranges behind those encapsulators tend to be very large (/18
> or bigger), with very many concurrent sessions. Due to the transmission
> technique used the packet shapers cannot see the traffic coming back
> from the customers, sometimes these packets don't even flow through our
> site.

Generally speaking, 30-40 Mbps are no problem.  The limiting factor for pf (as for any packet filter/firewall/etc.) is packets per second (pps).  In the end there is no alternative to just trying it.  In the worst-case scenario (with 64 bytes per packet) this means about 625 kpps, which will certainly overload most systems.  An *average* packet size of 400-800 bytes per packet, however, resulting in 50-100 kpps, should already be doable without problems.
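As a quick sanity check of those figures, pps follows directly from link rate and average packet size.  This is a hypothetical helper, not part of the original mail; it assumes the 625 kpps worst case counts both directions of all four saturated 40 Mbps units, i.e. 320 Mbps aggregate:

```python
def pps(mbps, avg_packet_bytes):
    """Packets per second for a given aggregate rate and average packet size."""
    return mbps * 1_000_000 / 8 / avg_packet_bytes

# Assumption: four 40 Mbps units, both directions => 320 Mbps aggregate.
aggregate_mbps = 2 * 4 * 40

print(pps(aggregate_mbps, 64))    # worst case, 64-byte packets -> 625 kpps
print(pps(aggregate_mbps, 400))   # 400-byte average            -> 100 kpps
print(pps(aggregate_mbps, 800))   # 800-byte average            -> 50 kpps
```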

> a) Is pf/kernel capable of that many queues? The only way I could think
>    of copying that committed/burstable model into pf were two queues for
>    each level, something like
>
>    altq on fxp0 cbq bandwidth 40Mb queue { cust1, cust2, ... }
>
>    queue cust1 on fxp0 bandwidth 5Mb cbq { cust1_commit }
>    queue  cust1_commit on fxp0 bandwidth 10Mb priority 2 cbq(borrow) { cust1_sub1, cust1_sub2 }
>    queue   cust1_sub1 on fxp0 bandwidth 10Mb cbq { cust1_sub1_commit }
>    queue    cust1_sub1_commit on fxp0 bandwidth 2Mb priority 2 cbq(borrow)
>    queue   cust1_sub2 on fxp0 bandwidth 0Mb cbq { cust1_sub2_commit }
>    queue    cust1_sub2_commit on fxp0 bandwidth 10Mb priority 2 cbq(borrow)
>
>    and so on, which should simulate the following ruleset
>
>    Encapsulator 1 (40Mb max rate)
>     Customer1 (10Mb committed + 5Mb burstable)
>      Subcustomer 1 (2Mb committed + 10Mb burstable)
>      Subcustomer 2 (10Mb committed (+ 0Mb burstable))
>
>    and so on. Subcustomer1 could even have subcustomers of their own.
>    For 200 subcustomers per encapsulation unit this makes more than 400
>    queues per box, not to mention handling several encap. units on
>    one box. And if one wants to use the nifty pf feature of using
>    another queue for interactive traffic, we're at twice the size. What
>    about adding RED/ECN in this environment, creating additional demand
>    for resources (I guess)?
>
>    Another problem that might occur (I haven't tested it yet, so it is
>    just speculation)... assuming the ruleset above, I guess packets
>    "borrowing" from their parent class still get the attributes of their
>    native class. With sub2 doing 10Mbps committed traffic and sub1 10Mbps
>    (2Mbps committed + 8Mbps burst) there would be 20Mbps of traffic
>    fighting for 15Mbps bandwidth of the cust1 queue. With all having
>    prio 2, sub2 might be dropped below its 10Mbps committed rate.
>
>    After reading the manpage I believe HFSC could be the solution,
>    with something like
>
>    queue cust1 on fxp0 hfsc(realtime 10Mb, upperlimit 15Mb) { cust1_sub1, cust1_sub2 }
>    queue  cust1_sub1 on fxp0 hfsc(realtime 2Mb, upperlimit 12Mb)
>    queue  cust1_sub2 on fxp0 hfsc(realtime 10Mb)
>
>    would this help me? Any better ideas?

From a very first glance, I think HFSC is what best suits your application.  Here again, you must make sure not to overload your parent with the client bandwidth.
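A minimal sketch of that committed/burstable model in pf.conf HFSC syntax (interface, queue, and subnet-macro names are illustrative, borrowed from the example above; `realtime` is the committed guarantee, `upperlimit` the burst ceiling, and the children's realtime must not exceed the parent's; the `default` queue pf requires is omitted for brevity):

```
altq on fxp0 hfsc bandwidth 40Mb queue { cust1 }

# cust1: 10Mb committed, may burst to 15Mb
queue cust1 on fxp0 hfsc(realtime 10Mb, upperlimit 15Mb) { cust1_sub1, cust1_sub2 }
# sub1: 2Mb committed, may burst to 12Mb
queue  cust1_sub1 on fxp0 hfsc(realtime 2Mb, upperlimit 12Mb)
# sub2: 10Mb committed, no burst
queue  cust1_sub2 on fxp0 hfsc(realtime 10Mb, upperlimit 10Mb)

# assignment rules ($sub1_net and $sub2_net are hypothetical macros)
pass out on fxp0 from any to $sub1_net keep state queue cust1_sub1
pass out on fxp0 from any to $sub2_net keep state queue cust1_sub2
```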

> b) Is pf/kernel capable of handling that many states? I'm not able to
>    access the packeteer at the moment, but I think we're way over 10000
>    concurrent sessions.

10,000 states are no problem.  In general, states are a lot cheaper than actual ruleset evaluation.  Depending on your setup it might be a good idea to switch to if-bound states.  This will generate even more states, but will reduce the search time.  For reference, a state entry is just 256 bytes, which should make it obvious that you can have a lot of states before you hit a limit.  States are organized in a red-black tree, which provides an upper limit of O(lg(N)) for the state search, where N is the number of states (per interface, if you go with if-bound states).
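To put those numbers in perspective, here is a small sketch (not from the original mail) that assumes the 256-byte state size and the balanced-tree search bound quoted above:

```python
import math

STATE_SIZE_BYTES = 256  # per-state size quoted above

def state_table_cost(n_states):
    """Memory footprint (MB) and rough search cost for a state table."""
    memory_mb = n_states * STATE_SIZE_BYTES / (1024 * 1024)
    # lg(N) comparisons is the usual back-of-the-envelope figure for a
    # search in a balanced (here: red-black) tree of N entries.
    comparisons = math.ceil(math.log2(n_states))
    return memory_mb, comparisons

mem, cmps = state_table_cost(10_000)
print(f"{mem:.1f} MB, ~{cmps} comparisons")  # ~2.4 MB, ~14 comparisons
```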

> Any idea whether pf would be able to scale at that level? I was thinking
> about boxes in the P4 2GHz class, best would be if one could handle
> at least two encapsulators.

On that kind of hardware, you can do quite a few pps - depending on the network cards and system bus organization, of course.

> Even better would be if pf could do this whole thing in a bridged
> environment, but this might be too much to ask.

Bridge support is currently broken in FreeBSD.  This might be fixed once we import if_bridge from Open/NetBSD, but right now it just does not work.  You might want to try OpenBSD instead, which has full bridge support.

> I'm glad to hear any experience with deployments of that size.

Sorry, I can't provide such.

-- 
/"\  Best regards,                      | mlaier@freebsd.org
\ /  Max Laier                          | ICQ #67774661
 X   http://pf4freebsd.love2party.net/  | mlaier@EFnet
/ \  ASCII Ribbon Campaign              | Against HTML Mail and News



