Date:      Fri, 7 Jan 2005 23:49:35 +0000 (UTC)
From:      Bernhard Schmidt <berni@birkenwald.de>
To:        freebsd-pf@freebsd.org
Subject:   Scalability of ALTQ
Message-ID:  <slrnctu80f.aet.berni@bschmidt.msgid.cybernet-ag.net>

Hi,

we're currently using an old and overloaded Packeteer PacketShaper for
an IP-over-satellite deployment. Since this business will be expanded
in the next few weeks to a level the Packeteer cannot cope with
anymore, I'm looking for alternatives in the Unix world, especially pf
with ALTQ, since it is impressively easy to configure and maintain.

There are currently four encapsulating units with 30-40 Mbps of IP
bandwidth each. Every unit has up to ten first-level customers, with
about 20 second-level customers behind each of them. Every customer
(first and second level) has a committed rate and a burstable rate.
The committed rates are not oversubscribed in this business, while
burstable bandwidth may be used (up to a paid amount) as long as no
one else needs it.

The address ranges behind those encapsulators tend to be very large
(/18 or bigger), with very many concurrent sessions. Due to the
transmission technique used, the packet shapers cannot see the traffic
coming back from the customers; sometimes those packets don't even
flow through our site.
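
For what it's worth, I assume the queue assignment itself would just
be pass-out rules matching on destination prefix, since we only ever
see the outbound direction. Roughly like this (made-up prefixes, queue
names from the sketch under a) below):

   pass out on fxp0 from any to 10.1.0.0/18 keep state queue cust1_sub1_commit
   pass out on fxp0 from any to 10.2.0.0/18 keep state queue cust1_sub2_commit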

a) Is pf/the kernel capable of handling that many queues? The only way
   I could think of to map that committed/burstable model onto pf was
   two queues for each level, something like

   altq on fxp0 cbq bandwidth 40Mb queue { cust1, cust2, ... }

   queue cust1 on fxp0 bandwidth 5Mb cbq { cust1_commit }
   queue  cust1_commit on fxp0 bandwidth 10Mb priority 2 cbq(borrow) { cust1_sub1, cust1_sub2 }
   queue   cust1_sub1 on fxp0 bandwidth 10Mb cbq { cust1_sub1_commit }
   queue    cust1_sub1_commit on fxp0 bandwidth 2Mb priority 2 cbq(borrow)
   queue   cust1_sub2 on fxp0 bandwidth 0Mb cbq { cust1_sub2_commit }
   queue    cust1_sub2_commit on fxp0 bandwidth 10Mb priority 2 cbq(borrow)

   and so on, which should model the following hierarchy

   Encapsulator 1 (40Mb max rate)
    Customer 1 (10Mb committed + 5Mb burstable)
     Subcustomer 1 (2Mb committed + 10Mb burstable)
     Subcustomer 2 (10Mb committed (+ 0Mb burstable))

   and so on. Subcustomer 1 could even have subcustomers of its own.
   For 200 subcustomers per encapsulation unit this means more than 400
   queues per box, not to mention handling several encapsulation units
   on one box. And if one wants to use the nifty pf feature of
   assigning a second queue to interactive traffic, we're at twice that
   size. Adding RED/ECN on top would, I guess, need additional
   resources as well; a sketch of both follows.
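
   To make that concrete: RED/ECN are just extra flags on the leaf
   queues, and the interactive-traffic feature is the two-queue form of
   the queue keyword, which diverts low-delay and pure-ACK packets into
   the second queue. A hypothetical sketch (the _pri queue, its rates
   and the prefix are made up; the _pri queue would of course also have
   to appear in its parent's queue list):

   queue    cust1_sub1_commit on fxp0 bandwidth 2Mb priority 2 cbq(borrow red ecn)
   queue    cust1_sub1_pri on fxp0 bandwidth 500Kb priority 5 cbq(borrow)

   pass out on fxp0 proto tcp from any to 10.1.0.0/18 keep state \
         queue (cust1_sub1_commit, cust1_sub1_pri)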

   Another problem that might occur (I haven't tested it yet, so this
   is just speculation): assuming the ruleset above, I guess packets
   "borrowing" from their parent class still keep the attributes of
   their native class. With sub2 doing 10Mbps of committed traffic and
   sub1 doing 10Mbps (2Mbps committed + 8Mbps burst), there would be
   20Mbps of traffic fighting for the 15Mbps of the cust1 queue. With
   everything at priority 2, sub2 might be dropped below its 10Mbps
   committed rate.

   After reading the man page I believe HFSC could be the solution,
   with something like

   queue cust1 on fxp0 hfsc(realtime 10Mb, upperlimit 15Mb) { cust1_sub1, cust1_sub2 }
   queue  cust1_sub1 on fxp0 hfsc(realtime 2Mb, upperlimit 12Mb)
   queue  cust1_sub2 on fxp0 hfsc(realtime 10Mb)

   would this help me? Any better ideas?
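
   For completeness, my reading of the full HFSC hierarchy for this
   example, including the root declaration, would be something like the
   following (bandwidths taken from the model above, entirely
   untested):

   altq on fxp0 hfsc bandwidth 40Mb queue { cust1, cust2 }

   queue cust1 on fxp0 bandwidth 15Mb hfsc(realtime 10Mb, upperlimit 15Mb) { cust1_sub1, cust1_sub2 }
   queue  cust1_sub1 on fxp0 bandwidth 2Mb hfsc(realtime 2Mb, upperlimit 12Mb)
   queue  cust1_sub2 on fxp0 bandwidth 10Mb hfsc(realtime 10Mb, upperlimit 10Mb)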

b) Is pf/the kernel capable of handling that many states? I can't
   access the Packeteer at the moment, but I think we're way over
   10,000 concurrent sessions.
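
   As far as I know pf's state table is capped at 10000 entries by
   default, but that is just a pf.conf knob and the cost should mainly
   be kernel memory:

   set limit states 200000

   Whether state lookups stay fast at that size is the part I can't
   judge.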

Any idea whether pf would be able to scale to that level? I was
thinking of boxes in the P4 2GHz class; ideally a single box could
handle at least two encapsulators.

Even better would be if pf could do this whole thing in a bridged
environment, but this might be too much to ask.
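
On OpenBSD, at least, pf happily filters on bridge member interfaces,
so I would expect the bridge setup itself to be no more than

   ifconfig bridge0 create
   brconfig bridge0 add fxp0 add fxp1 up

with the ALTQ queues attached to the outbound member interface. I
don't know whether the FreeBSD port behaves the same here.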

I'm glad to hear any experience with deployments of that size.

Bernhard


