Date:      Mon, 31 Aug 2009 18:35:47 -0700
From:      Kip Macy <kmacy@freebsd.org>
To:        fabient@freebsd.org
Cc:        freebsd-current@freebsd.org
Subject:   Re: Forwarding benchmark
Message-ID:  <3c1674c90908311835i7f70e6d8mbfa8ce619e7ff911@mail.gmail.com>
In-Reply-To: <D0E9834C-59EB-48B2-A804-6947DA75C7D6@netasq.com>
References:  <D0E9834C-59EB-48B2-A804-6947DA75C7D6@netasq.com>

We're not going to see much more than 700 kpps on forwarding workloads
until we do something about the rtentry locking. I had some interesting
ideas I was exploring, but I don't have the luxury of side projects
right now.

em(9)'s transmit performance has been substantially improved in 8 by
using a buf_ring instead of an IFQ, so I assume that you're entirely
gated by receive performance. Jeff did some work in that area to reduce
the per-packet overhead of dequeue and to do some NAPI-like
opportunistic polling using a variant of the taskqueue API.

It won't give you any idea of the latency breakdown, but for a general
picture of where time is spent it would be useful to look at unhalted
core cycles in PMC.
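Something along these lines, assuming hwpmc(4) support for the CPU; the event alias and output paths are one plausible invocation, not a prescription:

```shell
# Load the hwpmc driver, then sample on unhalted core cycles while the
# forwarding test runs; -T gives a live top(1)-like breakdown by symbol.
kldload hwpmc
pmcstat -T -S unhalted-core-cycles

# Or record samples to a file and post-process into a callgraph:
pmcstat -S unhalted-core-cycles -O /tmp/samples.out
pmcstat -R /tmp/samples.out -G /tmp/callgraph.txt
```

If the rtentry locking is the bottleneck, the lock acquisition paths should show up near the top of the callgraph.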

Good Luck,
Kip



On Fri, Aug 21, 2009 at 02:25, Fabien Thomas <fabien.thomas@netasq.com> wrote:
>        Hi all,
>
> Just a quick benchmark on 8.0 Beta2+ (18/08) shows no regression vs 7.2.
>
> Results in FPS for 64-byte frames using Breakingpoint Elite
>
> Breakingpoint P1 === DUT === Breakingpoint P2
>
> Stream1: P1 -> P2
> Stream2: P2 -> P1
>
> GENERIC kernel + netisr.direct
>
> 4.11 : 236 (with 1 stream down for unknown reason)
> 6.3  : 248
> 7.2  : 350
> 8.0b : 352
>
> POLLING kernel + netisr.direct
>
> 4.11 : 526
> 6.3  : 246
> 7.2  : 230
> 8.0b : 330
>
> Note that the perf grows a little from version to version, but 4.11 with
> polling is always a lot better.
>
> There is a lot more in-depth testing to do (HW flow tag, 10gb, lots of
> interfaces, latency ...) but it gives a rough idea of the perf in the
> forwarding area.
>
> Regards,
> Fabien
>
> dmesg:
>
> CPU: Intel(R) Pentium(R) D CPU 2.80GHz (2793.02-MHz 686-class CPU)
>  Origin = "GenuineIntel"  Id = 0xf47  Stepping = 7
>  Features=0xbfebfbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CLFLUSH,DTS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE>
>  Features2=0x641d<SSE3,DTES64,MON,DS_CPL,CNXT-ID,CX16,xTPR>
>  AMD Features=0x20100000<NX,LM>
>  AMD Features2=0x1<LAHF>
>  TSC: P-state invariant
> real memory  = 1073741824 (1024 MB)
> avail memory = 1035210752 (987 MB)
> ACPI APIC Table: <PTLTD          APIC  >
> FreeBSD/SMP: Multiprocessor System Detected: 2 CPUs
> FreeBSD/SMP: 1 package(s) x 2 core(s)
>  cpu0 (BSP): APIC ID:  0
>  cpu1 (AP): APIC ID:  1
> ...
> em8: <Intel(R) PRO/1000 Network Connection 6.9.14> port 0x7000-0x701f mem
> 0xed700000-0xed71ffff irq 18 at device 0.0 on pci6
> em8: Using MSI interrupt
> em8: [FILTER]
> em8: Ethernet address: 00:30:48:5c:40:82
> pcib7: <ACPI PCI-PCI bridge> irq 19 at device 28.3 on pci0
> pci8: <ACPI PCI bus> on pcib7
> em9: <Intel(R) PRO/1000 Network Connection 6.9.14> port 0x8000-0x801f mem
> 0xed800000-0xed81ffff irq 19 at device 0.0 on pci8
> em9: Using MSI interrupt
> em9: [FILTER]
> em9: Ethernet address: 00:30:48:5c:40:83
>
> _______________________________________________
> freebsd-current@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-current
> To unsubscribe, send any mail to "freebsd-current-unsubscribe@freebsd.org"
>



-- 
When harsh accusations depart too far from the truth, they leave
bitter consequences.
--Tacitus
