Date:      Thu, 6 Apr 2017 01:06:42 -0500
From:      Jim Thompson <jim@netgate.com>
To:        Russell Haley <russ.haley@gmail.com>
Cc:        Norman Gray <norman@astro.gla.ac.uk>,  "freebsd-arm@freebsd.org" <freebsd-arm@freebsd.org>
Subject:   Re: SoC with multiple ethernet ports
Message-ID:  <CAKAfi7zn1H=M3iaL1ASR4w7LU=HBFnNmn1OxF=B5QbLdKygnXQ@mail.gmail.com>
In-Reply-To: <20170406051837.5902420.12766.23772@gmail.com>
References:  <12078642-77BD-4B96-87F0-4B777EABA252@astro.gla.ac.uk> <CAKAfi7zDPo0f+JXWQfm07FDy3a+OrsVCgzdV-z55LCBKFr06gg@mail.gmail.com> <20170406051837.5902420.12766.23772@gmail.com>

On Thu, Apr 6, 2017 at 12:18 AM, Russell Haley <russ.haley@gmail.com> wrote:

> Sorry for the top post.
>
> We got a Solid Run clear fog at work and put openwrt on it.


Marvell provides an OpenWRT port for the board, yes.

> I was personally lamenting that I wouldn't be able to put FreeBSD on it so
> this is fantastic news. The Marvell switching chip is FAST (the engineer's
> response when I asked for details).



> We've created a 14-port switch (2x1Gbps or 12x100Mbps, I think he said?)
> by lashing together two of the switching chips and a single Armada CPU.


One of the things we did on the Netgate hardware (but for my post here,
it's otherwise unannounced) is to connect a Marvell 88E6141 switch at
2.5Gbps to the SERDES that can do 2.5Gbps Ethernet.  We use the other two
MACs on the SoC at 1Gbps each.  This creates a system with a 4-port 1Gbps
switch for "LAN" and 2 "WAN" ports, but, with some luck, we can forward at
2Gbps, either between 2 pairs of the 4-port LAN (running through the SoC)
or between 2 ports on the LAN and both WAN ports.

We're planning netmap support for the ethernets on this platform as well.
netmap-fwd is still a thing, and Suricata has an inline mode that works via
netmap.
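For context, pkt-gen (the traffic generator that ships with netmap, in
FreeBSD's tools/tools/netmap) is the usual source and sink for this kind
of measurement. A rough sketch of a source/sink pair; the interface names
and addresses are placeholders, and this needs netmap-capable NICs, so
take it as illustrative only:

```shell
# sink: drain packets arriving on the second port
pkt-gen -i netmap:cpsw1 -f rx

# source: transmit 1500-byte UDP packets out the first port
pkt-gen -i netmap:cpsw0 -f tx -l 1500 -s 10.0.0.2:7777 -d 10.0.1.2:7777
```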

The ESPRESSObin board uses the same 88E6141 switch attached to the Armada
3700 SoC to make the device a '3 port' platform.  The 3700 is otherwise a
2-MAC part.

ClearFog Pro and Turris Omnia use an 88E6176, which has no 2.5Gbps SERDES.
The Turris board does connect two of the 38x MACs to this switch.  ClearFog
Pro connects one, and leaves the other two for WAN, where one port is on
SFP, and the other on RJ45.


> Sorry I don't have the model info of the chips handy. If I understand
> correctly all the layer 2 stuff happens in hardware and only packet
> filtering and advanced rules/inspection are sent to the cpu?
>

The switch can do a lot, but many of the details are under NDA.

> Great news. Looking forward to hearing when you have something going Jim.
>

Sure thing.

Jim

> Russ
>
> Sent from my BlackBerry 10 smartphone on the Virgin Mobile network.
>   Original Message
> From: Jim Thompson
> Sent: Wednesday, April 5, 2017 2:49 PM
> To: Norman Gray; freebsd-arm@freebsd.org
> Subject: Re: SoC with multiple ethernet ports
>
> On Wed, Apr 5, 2017 at 10:26 AM, Norman Gray <norman@astro.gla.ac.uk>
> wrote:
>
> >
> > Greetings.
> >
> > I'm looking for a SoC (or other small) machine to act as a gateway
> between
> > two physical networks (in fact, an ordinary one and an IPMI). Thus I'm
> > looking for such a device which has at least two ethernet interfaces, and
> > can run FreeBSD.
> > can run FreeBSD.
> >
>
> We (at Netgate) do a lot of work with ADI Engineering (now a division of
> Silicom). We sell tens of thousands of Intel-based systems per year, all
> with pfSense on them.
>
> Around 18 months ago, Steve Yates (the President of ADI) and I were
> bemoaning the lack of a truly open-source (hw and sw) BMC solution in the
> industry. The ASPEED stuff you find on so many boards is fragile,
> expensive, and ... Linux.
>
> So, for the BCC platform, we decided to embark on an ambitious project,
> which became known as 'microBMC' or uBMC.
> http://www.adiengineering.com/products/bcc-ve-board/
> http://www.adiengineering.com/products/microbmc/
>
> One of the ideas for uBMC was that it would leverage the on-die 3-port
> switch on the TI AM335x SoC. We picked the TI SoC *because* it already had
> decent support for FreeBSD (because of the BeagleBone series of boards).
> That said, the support for that SoC wasn't as good as you might imagine.
> It wasn't "product" grade. We did quite a bit of work on what existed in
> FreeBSD to get to a releasable product, then upstreamed it all.
>
> The whole point of the story up to this point is: "uBMC was designed to do
> exactly what you asked for: to sit between an IPMI port and another
> network."
>
> pfSense gets frequently deployed on small hardware (e.g. PC Engines) for
> this use case, and, of course, pfSense is based on FreeBSD.
>
> With this in-mind, and knowing that the software was coming together, one
> day, about a year ago, I asked Steve how difficult it would be to put the
> PHYs, magnetics and RJ45s on uBMC.
> There's more to it than that, of course, but that was the high-level
> concept. The result is uFW:
> http://www.adiengineering.com/products/micro-firewall/
> which we sell as the "sg-1000":
> https://www.netgate.com/products/sg-1000.html
>
> If your volumes are high enough, you can purchase directly from ADI. If
> you want one, they'll send you to us.
>
> Full support for uBMC and uFW is upstreamed in the FreeBSD tree, mostly due
> to the efforts of Luiz Otavio O Souza (loos@). In particular, a lot of
> work was done on the cpsw(4) driver to make it both more reliable and
> better-performing.
> Case in point: with *just* FreeBSD on the board, no packet filtering (and
> thus no NAT, etc.), and using pkt-gen for source and sink, you can forward
> IPv4 traffic between the two ports at over 500Mbps.
>
> pkt-gen no firewall is ~580Mb/s:
> 313.994151 main_thread [2019] 45.851 Kpps (48.763 Kpkts 585.156 Mbps in
> 1063505 usec) 5.47 avg_batch 1015 min_space
> 315.057147 main_thread [2019] 45.838 Kpps (48.726 Kpkts 584.712 Mbps in
> 1062996 usec) 5.45 avg_batch 1015 min_space
> 316.092160 main_thread [2019] 45.854 Kpps (47.459 Kpkts 569.508 Mbps in
> 1035013 usec) 5.47 avg_batch 1015 min_space
> 317.116221 main_thread [2019] 45.838 Kpps (46.941 Kpkts 563.292 Mbps in
> 1024062 usec) 5.48 avg_batch 1015 min_space
> 318.140208 main_thread [2019] 45.846 Kpps (46.946 Kpkts 563.352 Mbps in
> 1023987 usec) 5.44 avg_batch 1015 min_space
> 319.203146 main_thread [2019] 45.831 Kpps (48.715 Kpkts 584.580 Mbps in
> 1062937 usec) 5.45 avg_batch 1015 min_space
> 320.266145 main_thread [2019] 45.827 Kpps (48.714 Kpkts 584.568 Mbps in
> 1063000 usec) 5.47 avg_batch 1015 min_space
> 321.329146 main_thread [2019] 45.842 Kpps (48.730 Kpkts 584.760 Mbps in
> 1063001 usec) 5.43 avg_batch 1015 min_space
> 322.392147 main_thread [2019] 45.845 Kpps (48.733 Kpkts 584.796 Mbps in
> 1063000 usec) 5.48 avg_batch 1015 min_space
> 323.455147 main_thread [2019] 45.850 Kpps (48.739 Kpkts 584.868 Mbps in
> 1063000 usec) 5.46 avg_batch 1015 min_space
> 324.509646 main_thread [2019] 45.850 Kpps (48.349 Kpkts 580.188 Mbps in
> 1054500 usec) 5.45 avg_batch 1015 min_space
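(Aside, not part of the original mail: pkt-gen report lines like the ones
above are internally consistent, which is worth verifying when quoting
benchmark output. This awk snippet re-derives the packet rate from the
packet count and elapsed time, and the implied average wire bytes per
packet, from the first line quoted above.)

```shell
# Cross-check one pkt-gen line: Kpkts / elapsed seconds should reproduce
# the reported Kpps, and (Mbps * seconds) / packets gives the implied
# average bytes per packet.
echo "45.851 Kpps (48.763 Kpkts 585.156 Mbps in 1063505 usec)" | awk '{
    gsub(/[()]/, "")                    # strip parentheses around Kpkts
    kpps = $1; kpkts = $3; mbps = $5; usec = $8
    printf "pps check: %.0f vs %.0f\n", kpkts * 1e3 / (usec / 1e6), kpps * 1e3
    printf "implied bytes/packet: %.0f\n", mbps * 1e6 * (usec / 1e6) / 8 / (kpkts * 1e3)
}'
```

On that line the two packet rates agree (45851 vs 45851) and the implied
size is about 1595 bytes per packet, i.e. near-MTU-size frames.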
>
> with pf: (one rule, basically "pass all")
> 498.389631 main_thread [2019] 27.494 Kpps (27.549 Kpkts 330.588 Mbps in
> 1002000 usec) 3.42 avg_batch 1019 min_space
> 499.391631 main_thread [2019] 27.513 Kpps (27.568 Kpkts 330.816 Mbps in
> 1002000 usec) 3.45 avg_batch 1019 min_space
> 500.393640 main_thread [2019] 27.503 Kpps (27.558 Kpkts 330.696 Mbps in
> 1002008 usec) 3.46 avg_batch 1019 min_space
> 501.419083 main_thread [2019] 27.502 Kpps (28.202 Kpkts 338.424 Mbps in
> 1025443 usec) 3.44 avg_batch 1019 min_space
> 502.419632 main_thread [2019] 27.509 Kpps (27.524 Kpkts 330.288 Mbps in
> 1000549 usec) 3.44 avg_batch 1019 min_space
> 503.420638 main_thread [2019] 27.545 Kpps (27.573 Kpkts 330.876 Mbps in
> 1001006 usec) 3.45 avg_batch 1019 min_space
> 504.430635 main_thread [2019] 27.530 Kpps (27.805 Kpkts 333.660 Mbps in
> 1009998 usec) 3.44 avg_batch 1019 min_space
>
> and with ipfw: (one rule, basically "pass all")
> 597.124126 main_thread [2019] 37.585 Kpps (39.953 Kpkts 479.436 Mbps in
> 1062999 usec) 4.61 avg_batch 1017 min_space
> 598.186628 main_thread [2019] 37.587 Kpps (39.936 Kpkts 479.232 Mbps in
> 1062502 usec) 4.60 avg_batch 1017 min_space
> 599.250127 main_thread [2019] 37.589 Kpps (39.976 Kpkts 479.712 Mbps in
> 1063500 usec) 4.60 avg_batch 1017 min_space
> 600.251626 main_thread [2019] 37.583 Kpps (37.639 Kpkts 451.668 Mbps in
> 1001498 usec) 4.62 avg_batch 1017 min_space
> 601.313294 main_thread [2019] 37.573 Kpps (39.890 Kpkts 478.680 Mbps in
> 1061669 usec) 4.60 avg_batch 1017 min_space
> 602.359629 main_thread [2019] 37.600 Kpps (39.342 Kpkts 472.104 Mbps in
> 1046334 usec) 4.63 avg_batch 1017 min_space
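(Aside: the original mail doesn't show the actual rulesets. The single
"pass all" rules used for the pf and ipfw runs above were presumably
something close to this.)

```shell
# pf: a one-rule /etc/pf.conf, then enable and load it
echo 'pass all' > /etc/pf.conf
pfctl -e -f /etc/pf.conf

# ipfw: a single rule accepting everything
ipfw add 100 allow ip from any to any
```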
>
> Using iperf3, TCP, no firewall:
> [ ID] Interval Transfer Bandwidth Retr
> [ 4] 0.00-30.00 sec 1.19 GBytes 341 Mbits/sec 500 sender
> [ 4] 0.00-30.00 sec 1.19 GBytes 341 Mbits/sec receiver
>
> ipfw: (one rule)
> [ ID] Interval Transfer Bandwidth Retr
> [ 4] 0.00-30.00 sec 1.01 GBytes 291 Mbits/sec 381 sender
> [ 4] 0.00-30.00 sec 1.01 GBytes 290 Mbits/sec receiver
>
> and pf: (one rule)
> [ ID] Interval Transfer Bandwidth Retr
> [ 4] 0.00-30.00 sec 731 MBytes 204 Mbits/sec 361 sender
> [ 4] 0.00-30.00 sec 730 MBytes 204 Mbits/sec receiver
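(Aside: the iperf3 numbers above are 30-second TCP runs; the invocation
was presumably along these lines, with a placeholder server address.)

```shell
iperf3 -s                  # server, on the host behind the device
iperf3 -c 192.0.2.1 -t 30  # client: 30-second TCP test toward the server
```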
>
> I think this is pretty remarkable on a single-core 550-600MHz ARMv7 SoC.
> The performance of *pfSense* on the board isn't this high, because pfSense
> ships with a much more complex default ruleset. (If you configure pfSense
> to do the same "no filtering", then the performance is, indeed, as above.)
>
> So there is one solution that you can get today.
>
> Since the Marvell Armada 38x was also mentioned (both in this thread and
> in the previous thread on the same subject), I'll let you know that, due
> largely to the efforts of Semihalf and Stormshield, with only tiny
> contributions from Netgate, full support for the Solid-Run boards should
> appear in FreeBSD soon.
>
> Stormshield and Netgate each have products coming that are somewhat
> different from the Solid-Run ClearFog boards, but the Solid-Run ClearFog
> boards are a target as well.
>
> A bootlog of -HEAD booting on a Solid-Run ClearFog Pro last week can be
> found here:
> https://gist.github.com/gonzopancho/2b0fb7c91eca140638b9953709b4dc4b
>
> All of the above is 32-bit ARM. The future is 64-bit ARM.
>
> We're pretty engaged with both Marvell and NXP for their arm64 efforts.
> There are several interesting, low-cost boards based on SoCs from both NXP
> and Marvell.
>
> Marvell has the "ESPRESSObin" board, which is just under $50. It runs an
> Armada 3700 and a small Marvell switch to turn the two 1Gbps MACs on the
> 3700 into 3 1Gbps ports.
>
> NXP has the QorIQ "FRDM-LS1012A" Freedom board, which is typically priced
> online just above $50. It runs a single-core LS1012A SoC, and has 2 x
> 1Gbps Ethernets.
>
> http://www.nxp.com/products/software-and-tools/hardware-development-tools/freedom-development-boards/qoriq-frdm-ls1012a-board:FRDM-LS1012A
>
> While neither of these runs FreeBSD today, I have several examples of
> both in my office. You decide what that might mean.
>
> All that said, I would like to add this:
>
> The efforts undertaken to get pfSense out of the pit of being years behind
> FreeBSD -HEAD are finally paying off in two ways:
>
> - First, we can generate releases of pfSense that closely follow releases
> of FreeBSD, rather than the situation that had degraded to a point where
> pfSense was lagging as much as three years behind FreeBSD.
> - Second, being able to track -HEAD and the latest -RELEASE of FreeBSD in
> pfSense allows us to easily contribute back to FreeBSD (both ports and
> src).
>
> Both are important.
>
> Jim
>
>
>
> > There are multiple boards listed at <https://wiki.freebsd.org/FreeBSD/arm>,
> > but the boards under the 'Well supported boards' headings appear only to
> > have single interfaces (though I wouldn't claim an exhaustive search).
> >
> > I can see what appear to be nice multiple-interface boards at
> > <https://www.solid-run.com/marvell-armada-family/>, but they're listed
> > under 'unknown support'. I can see some notes on some Marvell boards at
> > <https://wiki.freebsd.org/FreeBSDMarvell>, but these refer to FreeBSD 8.x
> > and 9-CURRENT, so are clearly not up-to-date.
> >
> > Searching the list archives, I find
> > <https://lists.freebsd.org/pipermail/freebsd-arm/2015-February/010300.html>
> > that at least some people are using 11.0-CURRENT on an Armada/Marvell
> > board, but (given that that's a bug report) I'm not sure if that usage
> > counts as ...Brave or not.
> >
> > There's clearly a lot of hardware possibilities here, but I surely can't
> > be the first person to want such a device. Does anyone on this list have
> > any advice?
> > any advice?
> >
> > Best wishes,
> >
> > Norman
> >
> >
> > --
> > Norman Gray : https://nxg.me.uk
> > SUPA School of Physics and Astronomy, University of Glasgow, UK
> > _______________________________________________
> > freebsd-arm@freebsd.org mailing list
> > https://lists.freebsd.org/mailman/listinfo/freebsd-arm
> > To unsubscribe, send any mail to "freebsd-arm-unsubscribe@freebsd.org"
> >
> _______________________________________________
> freebsd-arm@freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-arm
> To unsubscribe, send any mail to "freebsd-arm-unsubscribe@freebsd.org"
>


