Date:      Thu, 31 Jan 2008 14:23:36 -0300
From:      "Alexandre Biancalana" <biancalana@gmail.com>
To:        freebsd-net@freebsd.org
Subject:   Re: VLAN problems
Message-ID:  <8e10486b0801310923h6cce985bx4c3243de1b5b7ffd@mail.gmail.com>
In-Reply-To: <20080130171826.GE41095@hal.rescomp.berkeley.edu>
References:  <8e10486b0801290439y77568aeby6c6dbfbb5132f61d@mail.gmail.com> <479F4C3C.5070801@tomjudge.com> <200801301159.26641.antik@bsd.ee> <8e10486b0801300556o3dfcd25el3511b0f7845d2607@mail.gmail.com> <20080130171826.GE41095@hal.rescomp.berkeley.edu>

On 1/30/08, Christopher Cowart <ccowart@rescomp.berkeley.edu> wrote:
>
> Trunking is definitely what you want. I'm using it successfully
> between Cisco switches and FreeBSD in a number of places.
>
> Here's IOS:
> | interface GigabitEthernet1/0/8
> |  description dev-wireless-aux
> |  switchport trunk encapsulation dot1q
> |  switchport trunk native vlan 8
> |  switchport trunk allowed vlan 88,665,679
> |  switchport mode trunk
> |  spanning-tree bpduguard enable

Here is my IOS:

interface GigabitEthernet3/18
 description Novo FW01
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 2,11,16,20,200-205
 switchport mode trunk
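
(For what it's worth, the encapsulation is already forced to dot1q above; the
negotiated mode can also be double-checked on the switch with something like
the following, though the exact output varies by IOS version:)

 show interfaces GigabitEthernet3/18 trunk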


>
> Here's rc.conf:
> | ifconfig_fxp1="up"
> | ifconfig_vlan88="inet 10.8.0.2 netmask 0xffffc000 vlan 88
> |     vlandev fxp1"
> | ifconfig_vlan88_alias0="inet 10.8.0.1 netmask 0xffffffff"
> | ifconfig_vlan665="inet 169.229.65.132 netmask 0xffffffc0 vlan 665
> |     vlandev fxp1"
> | ifconfig_vlan679="inet 169.229.79.132 netmask 0xffffff80 vlan 679
> |     vlandev fxp1"
>
> You may have already done so, but make sure your trunk is in dot1q mode.
> The default trunking protocol is a Cisco proprietary something, if I
> understand correctly.

My rc.conf is similar too...
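
For reference, it is along these lines (the interface names match my setup,
but the addresses below are just placeholders, not the real ones):

ifconfig_bce1="up"
cloned_interfaces="vlan2 vlan5"
ifconfig_vlan2="inet 10.2.0.1 netmask 255.255.255.0 vlan 2 vlandev bce1"
ifconfig_vlan5="inet 10.5.0.1 netmask 255.255.255.0 vlan 5 vlandev bce1"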

But I think I have found the problem... I set up a test environment similar
to production, and I'm using netperf to simulate the traffic. Here is the
environment:

FW1 ---+                    +--- M1
       |                    |
       +--- cisco 4506 -----+
                            |
                            +--- M2

FW1 is the gateway, connected to the Cisco 4506 through the bce1 gigabit
interface; vlan2 and vlan5 are configured on top of bce1. M1 is a machine
connected to vlan2 and M2 is a machine connected to vlan5.
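
(On the receiving machine the netperf server side is started on the matching
control port, e.g. "netserver -p 1025".)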

I'm running pf on FW1 to filter the traffic between the VLANs.
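
(The ruleset itself is not the point here, but to illustrate the kind of
filtering I mean, a stripped-down pf.conf would look roughly like the rules
below; this is only a sketch, not my actual configuration:)

# sketch only: stateful filtering between the two VLAN interfaces
block all
pass quick on lo0
pass in  on vlan2 from vlan2:network to vlan5:network keep state
pass in  on vlan5 from vlan5:network to vlan2:network keep state
pass out on vlan2 keep state
pass out on vlan5 keep state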

Here is the result when I run netperf from M5 against the netserver on M2,
with pf enabled on FW1:

# netperf -H 10.2.0.46 -p 1025
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.2.0.46
(10.2.0.46) port 0 AF_INET
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 65536  32768  32768    17.11       8.03

Here is the result when I run netperf from M5 against the netserver on M2,
with pf *disabled* on FW1:

# netperf -H 10.2.0.46 -p 1025
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.2.0.46
(10.2.0.46) port 0 AF_INET
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 65536  32768  32768    11.45      92.35

I would expect some slowdown or added latency from enabling pf, but not a
10x slowdown.

Any other ideas?

Is Max Laier subscribed to -net?


