Date:      Mon, 4 Jul 2011 15:42:37 -0400
From:      Michael MacLeod <mikemacleod@gmail.com>
To:        Julian Elischer <julian@freebsd.org>
Cc:        freebsd-net@freebsd.org
Subject:   Re: Bridging Two Tunnel Interfaces For ALTQ
Message-ID:  <CAM-FeoH3fJbbtNFFQqAAg=FSxKPoD6yuTx+pJ8B7ducxV_AcJw@mail.gmail.com>
In-Reply-To: <4E0ED1BA.40509@freebsd.org>
References:  <BANLkTim9SoHQBrGOF2vN8bUzGR6kvreiS4UYKCGh7atHg72q2w@mail.gmail.com> <4E0D593B.7090206@freebsd.org> <BANLkTik5J4NBb05KtDAMTYPwFNC8WUZSz7o33XDRh51P_NL2DQ@mail.gmail.com> <4E0ED1BA.40509@freebsd.org>

I merged both of your responses below, so that the thread doesn't get
fragmented.

On Sat, Jul 2, 2011 at 4:07 AM, Julian Elischer <julian@freebsd.org> wrote:

> On 7/1/11 12:59 AM, Michael MacLeod wrote:
>
> On Fri, Jul 1, 2011 at 1:20 AM, Julian Elischer <julian@freebsd.org> wrote:
>
>>  On 6/29/11 11:28 AM, Michael MacLeod wrote:
>>
>>> I use pf+ALTQ to achieve some pretty decent traffic shaping results at
>>> home. However, I recently signed up to be part of an IPv6 trial with my
>>> ISP, and they've given me a second (dual-stacked) PPPoE login to test
>>> with. The problem is that the second login lacks my static IP or my
>>> routed /29. I can have both tunnels up simultaneously, but that becomes
>>> a pain to traffic shape since I can't have them both assigned to the
>>> same ALTQ.
>>>
>>> ... unless there is some way for me to turn the ng interfaces (I'm using
>>> mpd5) into ethernet interfaces that could be assigned to an if_bridge. I
>>> could easily disable IPv4 on the IPv6 tunnel, which would clean up any
>>> routing issues, assign both tunnels to the bridge, and put the ALTQ on
>>> the bridge. It just might have the effect I'm looking for. Bonus points
>>> if the solution can be extended to work with a gif tunnel as well, so
>>> that users of 6in4 tunnels could use it (my ISP's IPv6 beta won't let me
>>> do rDNS delegation, so I might want to try a tunnel from he.net instead).
>>>
>>> I spent some time this morning trying to make netgraph do this with the
>>> two ng interfaces, but didn't have any luck. Google didn't turn up
>>> anyone trying to do anything similar that I could find; the closest I
>>> got was this:
>>> http://lists.freebsd.org/pipermail/freebsd-net/2004-November/005598.html
>>>
>>> This is all assuming that the best way to use ALTQ on multiple outbound
>>> connections is with a bridge. If there is another or more elegant
>>> solution, I'd love to hear it.
>>>
>>
>>  Rather than trying to shoehorn ng into if_bridge, why not use the
>> netgraph bridge utility, or maybe one of the many other netgraph nodes
>> that can split traffic? For example, the ng_bpf filter can filter
>> traffic in an almost arbitrary manner that you program using the bpf
>> filter language.
>
>
>  Julian, thanks for responding. I'm not particularly concerned about how I
> accomplish my goal, so long as I can accomplish it. I was thinking about
> using if_bridge or ng_bridge because I have past experience with software
> bridges in BSD and Linux. Unfortunately, ng_bridge requires a node that has
> an ether hook. I spent a bit of time looking at the mpd5 documentation, and
> there's actually a config option to have mpd generate an extra tee node
> between the ppp and the iface nodes. These nodes are connected together
> using inet hooks. If I could find a netgraph node that can take inet in one
> side and ether on the other, I believe I'd be set.
>
> I think you need to draw a diagram..
>

Alright, here's how the mpd daemon puts together a PPPoE interface by
default:

iface node (ng0)<-->ppp node<-->tee node<-->pppoe node<-->ether node(em1)

Here's how it looks if I enable the option to add another tee:

iface node (ng0)<-->tee node<-->ppp node<-->tee node<-->pppoe node<-->ether node(em1)

The iface and ppp nodes are connected with inet hooks, which I believe means
that they are straight IP packets, with no PPP or other layer 2 framing
remaining (though I could be totally wrong about that).
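
For anyone who wants to poke at the same layout, the node graph can be
inspected with stock ngctl; nothing mpd-specific about these two commands:

  ngctl list        # list all netgraph nodes, including mpd's iface/ppp/tee
  ngctl show ng0:   # show the ng_iface node's type, ID, and connected hooks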


> The nice thing (near as I can tell) about using ethernet based nodes would
> be that pretty much everything can talk to an ethernet interface (tcpdump,
> etc) and that ethernet should be fairly easy to fake; just assign a fake MAC
> to the ether nodes (which is what the ng_ether node does, pretty much) and
> the bridge will take care of making sure traffic for tunnel 0 doesn't go to
> tunnel 1, etc.
>
>  I haven't read up very much about ng_bpf yet, but it seems like a pretty
> heavy tool for the job, and wouldn't the data have to enter userspace for
> parsing by the bpf script?
>
> No, you download the filter program into the kernel module to program it.
>

Ah, okay.
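
(Looking at the tcpdump man page, it can also dump a compiled filter
program, which I gather is the sort of thing that would get loaded into an
ng_bpf node:

  tcpdump -d 'ip6'     # print the compiled BPF program in readable form
  tcpdump -ddd 'ip6'   # same program as plain decimal numbers

I haven't tried feeding that into ngctl yet, so take it as a pointer rather
than a recipe.)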

>  Also, I've never written anything in bpf. It's not a huge hurdle, I hope,
> but it's certainly more involved than a six line ngctl incantation that
> turns my iface nodes into eiface nodes suitable for bridging.
>
> read the ng_bpf man page and the tcpdump man page.
> Having said that, you may find many other ways to split traffic.
>
> actually you can do that in 1 ngctl command..
> I think you want the ng_eiface module, but I'm not sure... ng_eiface
> presents an interface in ifconfig and produces ethernet frames which can
> be fed into the ng_bridge node, the output of which can be fed into a
> real ethernet bottom end.
>

I already tried linking an eiface node to the tee node I described above,
between the iface and ppp nodes. I ran some traffic through the interface
but didn't see any of it appear on the ngeth0 interface (I was watching it
with tcpdump). According to the man page, ng_eiface nodes should be
connected to the Ethernet downstream from another node, like the ng_vlan
node, so I suspect that ng_eiface expects the packets it receives to
already have ethernet framing. It just creates an interface that can be
used with ifconfig and the rest of the system.
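
For reference, the sort of thing I was trying looked something like this
(from memory; the node name is a placeholder and the hook choice may well
be part of what I got wrong):

  # create an eiface node (shows up as ngeth0) and hang its ether hook off
  # one of the tee's copy hooks; "ng0_tee" stands in for whatever name the
  # mpd-created tee actually has in `ngctl list`
  ngctl mkpeer ng0_tee: eiface right2left ether

  # then watch for anything coming out
  tcpdump -ni ngeth0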

>  As I said, I'm not particularly concerned with the means, just the end
> itself. If there were an elegant way in pf to create a virtual ALTQ and
> then build sub-queues under it that were actually attached to the
> tunnels, that would also satisfy my end goal, without any netgraph
> mucking at all. I just haven't found any evidence that ALTQ has any
> ability to do that.
>
>  I just have two tunnels, one using IPv4 and one using IPv6, that share
> the same bandwidth resource. I want a way to shape traffic based on the pool
> of bandwidth, not the tunnels running through the pool.
>
> not quite sure what you mean by that..
>
> an example would help.
>

I have two phone lines with DSL, and they both sync at 5000/768kbps. Each
has a DSL modem in bridge mode, wired into em1 and em2 on my router. My ISP
supports Multilink PPP, so I have a theoretical pipe of 10000/1536kbps.
After network overhead, etc., I usually set the upstream ALTQ to 1200kbps.
My ISP has also given me two separate PPPoE logins, one for my regular IPv4
traffic, and one beta account for testing dual-stack IPv4/IPv6 (I can
easily disable the IPv4 on this login and use it as IPv6 only). But both of
these tunnels share the same pool of bandwidth. If I upload a file to my
colo at 800kbps over IPv6, there will be 400kbps of bandwidth left for
other outbound traffic, regardless of which tunnel it uses. The underlying
DSL will only give me so much bandwidth.

Because FreeBSD sees these as two distinct interfaces, I can't assign a
single ALTQ to them. I could create two ALTQs, and assign 800kbps to the
IPv4 one, and 400kbps to the IPv6 one, and share it that way, but that is
pretty much guaranteed to not be the most efficient method. So I figured
that if I could create a software bridge and assign both tunnels to the
bridge, then I could assign the ALTQ to the bridge instead. My ALTQ and pf
rules could be altered to treat the bridge as my "WAN", and once the traffic
entered the bridge the routing table would take care of it (since all IPv4
traffic would go out one tunnel and all IPv6 traffic would go out the
other). I may be completely wrong about this, but that was the direction I
was thinking in.
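
To make that concrete, the pf.conf I have in mind would look something like
the sketch below (untested; whether ALTQ will even attach to an if_bridge
interface is exactly the part I don't know yet, and the queue names and
percentages are just placeholders):

  # bridge0 would hold both ng tunnels; shape on the bridge and let the
  # routing table pick the tunnel per address family
  altq on bridge0 cbq bandwidth 1200Kb queue { q_pri, q_dflt, q_bulk }
  queue q_pri  bandwidth 20% priority 7 cbq(borrow)
  queue q_dflt bandwidth 50% priority 3 cbq(default borrow)
  queue q_bulk bandwidth 30% priority 1 cbq(borrow)

  pass out on bridge0 proto tcp to port ssh queue (q_bulk, q_pri)
  pass out on bridge0 queue q_dflt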

Cheers,
Mike


