Date:      Tue, 31 Jan 2017 13:53:15 -0400
From:      Jordan Caraballo <jordancaraballo87@gmail.com>
To:        Slawa Olhovchenkov <slw@zxy.spb.ru>
Cc:        freebsd-net@freebsd.org
Subject:   Re: Disappointing packets-per-second performance results on a Dell, PE R530
Message-ID:  <ebb04a3e-bcde-6d50-af63-348e8d06fcba@gmail.com>
In-Reply-To: <20170103174627.GW37118@zxy.spb.ru>
References:  <8f637e2e-cd59-dc65-8476-30989bea516b@gmail.com> <20170103174627.GW37118@zxy.spb.ru>

These are the most recent stats. No progress so far. The system is running 
-CURRENT right now.

Any help or feedback would be appreciated.

Hardware Configuration:
Dell PowerEdge R530 with 2 Intel(R) Xeon(R) E5-2695 CPUs, 18 cores per 
CPU. Equipped with a dual-port Chelsio T580-CR in a PCIe x8 slot.

BIOS tweaks:
Hyperthreading (or Logical Processors) is turned off.

loader.conf
# Chelsio Modules
t4fw_cfg_load="YES"
t5fw_cfg_load="YES"
if_cxgbe_load="YES"
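
Nothing else cxgbe-related is set in loader.conf yet. For reference, knobs
like the following are what I have seen suggested elsewhere for cxgbe
forwarding tests; the tunable names and values are as I understand them from
cxgbe(4) and tuning guides, so please correct me if they are wrong for
-CURRENT:

# untested starting point, not currently in my loader.conf
hw.cxgbe.nrxq10g=8       # rx queues per port (default tied to core count, IIRC)
hw.cxgbe.ntxq10g=8       # tx queues per port
net.isr.maxthreads=-1    # one netisr thread per core
net.isr.bindthreads=1    # pin netisr threads to their CPUs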

rc.conf
# Gateway Configuration
ifconfig_cxl0="inet 172.16.1.1/24"
ifconfig_cxl1="inet 172.16.2.1/24"
gateway_enable="YES"
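
Forwarding itself is on (gateway_enable above). For anyone reproducing this,
the quick sanity checks I know of are below; as far as I know the
fastforwarding sysctl only exists on 10.x, since 11/-CURRENT always uses the
tryforward path:

sysctl net.inet.ip.forwarding        # should report 1 with gateway_enable="YES"
# 10.x only; the sysctl was removed in 11.0:
# sysctl net.inet.ip.fastforwarding=1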

Last Results:
input:  packets errs idrops bytes    output: packets errs bytes colls drops
2.7M 0 2.0M 1.4G 696k 0 368M 0 0
2.7M 0 2.0M 1.4G 686k 0 363M 0 0
2.6M 0 2.0M 1.4G 668k 0 353M 0 0
2.7M 0 2.0M 1.4G 661k 0 350M 0 0
2.8M 0 2.1M 1.5G 697k 0 369M 0 0
2.8M 0 2.1M 1.4G 684k 0 361M 0 0
2.7M 0 2.1M 1.4G 674k 0 356M 0 0
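
If I read those columns right, roughly 2.6-2.8M packets/s arrive, about
2.0-2.1M/s are counted as input drops, and only ~0.7M/s go back out. The
usual places I know of to see where those drops get charged (nothing
Chelsio-specific, so treat it as a starting point):

netstat -s -p ip | grep -i drop    # IP-layer drop counters
netstat -Q                         # netisr queue configuration and drops
vmstat -z | grep -i mbuf           # mbuf/cluster shortages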

root@router1:~ # vmstat -i

interrupt total rate
irq9: acpi0 73 0
irq18: ehci0 ehci1 1155973 3
cpu0:timer 3551157 10
cpu29:timer 9303048 27
cpu9:timer 71693455 207
cpu16:timer 9798380 28
cpu18:timer 9287094 27
cpu26:timer 9342495 27
cpu20:timer 9145888 26
cpu8:timer 9791228 28
cpu22:timer 9288116 27
cpu35:timer 9376578 27
cpu30:timer 9396294 27
cpu23:timer 9248760 27
cpu10:timer 9756455 28
cpu25:timer 9300202 27
cpu27:timer 9227291 27
cpu14:timer 10083548 29
cpu28:timer 9325684 27
cpu11:timer 9906405 29
cpu34:timer 9419170 27
cpu31:timer 9392089 27
cpu33:timer 9350540 27
cpu15:timer 9804551 28
cpu32:timer 9413182 27
cpu19:timer 9231505 27
cpu12:timer 9813506 28
cpu13:timer 10872130 31
cpu4:timer 9920237 29
cpu2:timer 9786498 28
cpu3:timer 9896011 29
cpu5:timer 9890207 29
cpu6:timer 9737869 28
cpu7:timer 9790119 28
cpu1:timer 9847913 28
cpu21:timer 9192561 27
cpu24:timer 9300259 27
cpu17:timer 9786186 28
irq264: mfi0 151818 0
irq266: bge0 30466 0
irq272: t5nex0:evt 4 0
Total 402604945 1161
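
One thing that looks odd to me in that output: the only t5nex0 vector shown
is the event queue. I would have expected a set of per-rx-queue vectors
(named something like t5nex0:0a0, t5nex0:0a1, ...). Listing every registered
handler, including ones that have never fired, should tell whether they were
allocated at all (if I remember the flag right, -a includes zero-count
entries):

vmstat -ai | grep t5nex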

top -PHS
last pid: 18557; load averages: 2.58, 1.90, 0.95 up 4+00:39:54 18:30:46
231 processes: 40 running, 126 sleeping, 65 waiting
CPU 0: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle
CPU 1: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle
CPU 2: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle
CPU 3: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle
CPU 4: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle
CPU 5: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle
CPU 6: 0.0% user, 0.0% nice, 0.4% system, 0.0% interrupt, 99.6% idle
CPU 7: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle
CPU 8: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle
CPU 9: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle
CPU 10: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle
CPU 11: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle
CPU 12: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle
CPU 13: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle
CPU 14: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle
CPU 15: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle
CPU 16: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle
CPU 17: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle
CPU 18: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle
CPU 19: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle
CPU 20: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle
CPU 21: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle
CPU 22: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle
CPU 23: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle
CPU 24: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle
CPU 25: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle
CPU 26: 0.0% user, 0.0% nice, 0.0% system, 59.6% interrupt, 40.4% idle
CPU 27: 0.0% user, 0.0% nice, 0.0% system, 96.3% interrupt, 3.7% idle
CPU 28: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle
CPU 29: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle
CPU 30: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle
CPU 31: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle
CPU 32: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle
CPU 33: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle
CPU 34: 0.0% user, 0.0% nice, 0.0% system, 100% interrupt, 0.0% idle
CPU 35: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle
Mem: 15M Active, 224M Inact, 1544M Wired, 393M Buf, 29G Free
Swap: 3881M Total, 3881M Free
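
So effectively only CPUs 26, 27 and 34 are doing any work, all of it in
interrupt context, while the rest of the box idles. My guess is that the
test traffic is a single flow (fixed addresses and ports), so the RSS hash
lands on one or two rx queues. If the generator is netmap's pkt-gen (just an
assumption on my part; adjust for whatever tool is actually in use),
spreading the flows over an address/port range should exercise all the
queues, e.g.:

# on the sender attached to cxl0; <ifname> and the MAC are placeholders
pkt-gen -i <ifname> -f tx -l 64 -D <router cxl0 MAC> \
    -s 172.16.1.2:2000-172.16.1.250:3000 \
    -d 172.16.2.2:2000-172.16.2.250:3000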

pmcstat -R sample.out -G - | head
@ CPU_CLK_UNHALTED_CORE [159 samples]

39.62%  [63]       acpi_cpu_idle_mwait @ /boot/kernel/kernel
  100.0%  [63]        acpi_cpu_idle
   100.0%  [63]         cpu_idle_acpi
    100.0%  [63]          cpu_idle
     100.0%  [63]           sched_idletd
      100.0%  [63]            fork_exit

17.61%  [28]       cpu_idle @ /boot/kernel/kernel


root@router1:~ # pmcstat -R sample0.out -G - | head
@ CPU_CLK_UNHALTED_CORE [750 samples]

31.60%  [237]      acpi_cpu_idle_mwait @ /boot/kernel/kernel
  100.0%  [237]       acpi_cpu_idle
   100.0%  [237]        cpu_idle_acpi
    100.0%  [237]         cpu_idle
     100.0%  [237]          sched_idletd
      100.0%  [237]           fork_exit

10.67%  [80]       cpu_idle @ /boot/kernel/kernel
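
Both profiles are dominated by the idle loop, which makes sense: the first
samples the whole system (mostly idle cores) and the second is pinned to
CPU 0, which top shows as 100% idle. Repeating the pmcstat -c command quoted
below against one of the saturated CPUs from the top output (26, 27 or 34;
sample27.out is just a name I picked) should show where the forwarding time
actually goes:

pmcstat -c 27 -S CPU_CLK_UNHALTED_CORE -l 10 -O sample27.out
pmcstat -R sample27.out -G out27.txt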

On 03/01/17 13:46, Slawa Olhovchenkov wrote:
> On Tue, Jan 03, 2017 at 12:35:42PM -0400, Jordan Caraballo wrote:
>
>> We recently tested a Dell R530 with a Chelsio T580 card, under FreeBSD 10.3, 11.0, -STABLE and -CURRENT, and Centos 7.
>>
>> Based on our research, including netmap-fwd and with the routing improvements project (https://wiki.freebsd.org/ProjectsRoutingProposal),
>> we hoped for packets-per-second (pps) in the 5+ million range, or even higher.
>>
>> Based on prior testing (http://marc.info/?t=140604252400002&r=1&w=2), we expected 3-4 Million to be easily obtainable.
>>
>> Unfortunately, our current results top out at no more than 1.5 M (64 bytes length packets) with FreeBSD, and
>> surprisingly around 3.2 M (128 bytes length packets) with Centos 7, and we are at a loss as to why.
>>
>> Server Description:
>> Dell PowerEdge R530 with 2 Intel(R) Xeon(R) E5-2695 CPUs, 18 cores per
>> cpu. Equipped with a Chelsio T-580-CR dual port in an 8x slot.
>>
>> ** Can this be a lack-of-support issue related to the R530's hardware? **
>>
>> Any help appreciated!
> What hardware configuration?
> What BIOS setting?
> What loader.conf/sysctl.conf setting?
> What `vmstat -i`?
> What `top -PHS`?
> what
> ====
> pmcstat -S CPU_CLK_UNHALTED_CORE -l 10 -O sample.out
> pmcstat -R sample.out -G out.txt
> pmcstat -c 0 -S CPU_CLK_UNHALTED_CORE -l 10 -O sample0.out
> pmcstat -R sample0.out -G out0.txt
> ====


