Date:      Tue, 21 Mar 2006 13:42:35 -0500
From:      "Gary Thorpe" <gthorpe@myrealbox.com>
To:        g_jin@lbl.gov
Cc:        freebsd-performance@freebsd.org, oxy@field.hu
Subject:   Re: packet drop with Intel gigabit / Marvell gigabit
Message-ID:  <1142966555.c7f9603cgthorpe@myrealbox.com>

Jin Guojun [VFFS] wrote:
> OxY wrote:
>
>>
>> ----- Original Message ----- From: "Jin Guojun [VFFS]" <g_jin@lbl.gov>
>> To: "OxY" <oxy@field.hu>
>> Cc: <freebsd-performance@freebsd.org>
>> Sent: Monday, March 20, 2006 4:05 AM
>> Subject: Re: packet drop with Intel gigabit / Marvell gigabit
>>
>>>>> ....
>>>>> First, let's clear up the notation -- is 30 MB/s (MBytes/s)
>>>>> = 240 Mb/s (Mbit/s), or does MB/s mean Mbit/s?
>>>>> If MB/s is MBytes/s, and you also write this amount of data to a
>>>>> disk, plus the other traffic on fxp0 to disk too,
>>>>> then your problem may be bounded by memory bandwidth, because CPU
>>>>> utilization is low:
>>>>>    (240 + 24~32) x 2 is about 535 Mbit/s (some chipsets/motherboards
>>>>> have low memory BW for AMD)
>>>>> If this is true, then there is nothing you can tune. What chipset
>>>>> (motherboard) does this machine have?
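
To make the arithmetic above explicit, here is a quick back-of-the-envelope
sketch in C. It assumes each received byte crosses the memory bus twice
(NIC->RAM DMA in, then RAM->disk DMA out); the rates are the ones quoted
above, not measurements of mine:

    #include <stdio.h>

    int main(void)
    {
        double em0_mbps  = 240.0; /* 30 MB/s download on em0, in Mbit/s   */
        double fxp0_mbps = 28.0;  /* midpoint of the 24~32 Mbit/s on fxp0 */
        double passes    = 2.0;   /* NIC->RAM DMA in, RAM->disk DMA out   */

        printf("memory traffic: ~%.0f Mbit/s\n",
               (em0_mbps + fxp0_mbps) * passes); /* ~536, i.e. "about 535" */
        return 0;
    }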
>>>>
>>>> 30 MB/s is megabytes/sec; currently I have an 18-20 MB/s peak and
>>>> a 15 MB/s average.
>>>> It's not 535 Mbit/s, because I only download to my machine, no
>>>> upload.
>>>> The disks are separate from the Apache disks; these disks have
>>>> their own controller in one PCI slot.
>>>> The packet drop is 5-7% in a 200 Mbit iperf test; at 100 Mbit the
>>>> drop is around zero.
>>>> The motherboard is an ASUS A7V8X, which has the VIA KT400
>>>> northbridge:
>>>> http://uk.asus.com/products4.aspx?modelmenu=2&model=226&l1=3&l2=13&l3=62
>>>
>>> Yes, this is one of the problem chipsets. I bought one about 3 years
>>> ago. After one day of testing, I returned it and exchanged it for an
>>> A7V600 (VIA KT600 chipset), which has 30% more memory bandwidth than
>>> the KT400. The A7V600 can only receive a maximum of 604 Mb/s TCP, so
>>> you can imagine what the KT400 can do :-)
>>> I do not have a record (because it is too bad), but taking a minimum
>>> of 25% off, it is probably about 420-430 Mb/s (50 MB/s). Now you can
>>> do the math for when the machine is also writing data to a disk
>>> (assuming the disk is fast enough). I would expect 2/3 of 430 Mb/s,
>>> which is about 280~290 Mb/s (35 MB/s).
>>> If your experiments show these numbers, you are already there. There
>>> is no further improvement you can make.
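
Working out Jin's estimate above as a sketch -- the ~30% memory-bandwidth
penalty and the 2/3 disk-write factor are his estimates, not measurements
of this board:

    #include <stdio.h>

    int main(void)
    {
        double kt600_tcp = 604.0;                 /* measured max TCP Rx, Mbit/s */
        double kt400_tcp = kt600_tcp * 0.70;      /* assume ~30% less memory BW  */
        double with_disk = kt400_tcp * 2.0 / 3.0; /* 2/3 when also writing disk  */

        printf("KT400 ceiling:    ~%.0f Mbit/s\n", kt400_tcp); /* ~423 */
        printf("with disk writes: ~%.0f Mbit/s (~%.0f MB/s)\n",
               with_disk, with_disk / 8.0); /* ~282 Mbit/s, ~35 MB/s */
        return 0;
    }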
>>
>> I have doubts, because when I have 3-4 MB/s of traffic on fxp0, the
>> em0 peak is 18 MB/s, but when fxp0 is almost idle, with 500 kB/s of
>> traffic, em0 can still only do 20 MB/s..
>
> Since you did not get anything better than 35 MB/s, what is your
> doubt --
>    the maximum I/O the A7V8X can do?
>
> The 35 MB/s is the theoretical ceiling based on a 2100+ CPU; a 2000+
> will be slower.
> In a previous email, you mentioned you had 240 Mb/s (30 MB/s) on em0
> with some traffic on fxp0; that is pretty much at your hardware's
> physical limitation.

I thought all modern NICs used bus-mastering DMA, i.e. they do not
depend on the CPU for data transfers? In addition, the available memory
bandwidth of modern CPUs/systems is well over 100 MB/s: DDR400 (PC3200)
peaks at 3.2 GB/s. Bus-mastering DMA will be limited primarily by the
memory or I/O bus bandwidth. The system bus cannot be the problem
either: his motherboard's lowest front side bus speed is 200 MHz *
64-bit width = 1.6 GB/s (gigabytes per second) of peak system bus
bandwidth.
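
For concreteness, the peak figures I am using (peaks only -- sustained
bandwidth on KT400-class boards is far lower, which is Jin's point):

    #include <stdio.h>

    int main(void)
    {
        double ddr400 = 400e6 * 8; /* 400 MT/s x 64-bit = 3.2e9 B/s peak  */
        double fsb    = 200e6 * 8; /* 200 MHz x 64-bit  = 1.6e9 B/s peak  */
        double need   = 30e6 * 2;  /* a 30 MB/s stream, two memory passes */

        printf("DDR400 peak: %.1f GB/s\n", ddr400 / 1e9); /* 3.2  */
        printf("FSB peak:    %.1f GB/s\n", fsb / 1e9);    /* 1.6  */
        printf("workload:    %.2f GB/s\n", need / 1e9);   /* 0.06 */
        return 0;
    }

Even allowing generous overhead, the stream he wants is a small fraction
of either peak.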

The limit of 32-bit/33 MHz PCI is 133 MB/s (again, megabytes, not bits)
maximum. Gigabit Ethernet requires at most 125 MB/s (not Mb/s): 32/33
PCI has enough for bursts, but bus contention with disk traffic will
reduce the sustained bandwidth. The motherboard in question has an
option for integrated gigabit LAN, which may bypass the shared PCI bus
altogether (or it might not).
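
The PCI budget, with the disk controller he says shares the bus (the
133 MB/s is the bus peak; sustained PCI throughput is well below it):

    #include <stdio.h>

    int main(void)
    {
        double pci  = 33e6 * 4; /* 33 MHz x 32-bit = 133 MB/s peak   */
        double gige = 125e6;    /* full GigE wire rate, bytes/s      */
        double nic  = 30e6;     /* his actual em0 rate, 30 MB/s      */
        double disk = 30e6;     /* same stream DMA'd out to PCI disk */

        printf("PCI peak %.0f MB/s vs full GigE %.0f MB/s\n",
               pci / 1e6, gige / 1e6);
        printf("his load: ~%.0f MB/s on the shared bus\n",
               (nic + disk) / 1e6); /* 60 MB/s: under the peak, but
                                       sustained PCI is well below it */
        return 0;
    }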

Anyway, the original problem was packet loss, not bandwidth. His CPU is
mostly idle, so that cannot be the reason for the packet loss. If 32/33
PCI can sustain 133 MB/s, then it cannot be the problem, because he
needs less than that. If it cannot, then packets will arrive from the
network faster than they can be moved from the board into memory, which
would cause the packet loss. Otherwise, his system is in theory capable
of achieving what he wants, and the suboptimal behavior may be due to
hardware limitations (e.g. the PCI bus not being able to reach a
sustained 133 MB/s) or software limitations (e.g. an inefficient
operating system).
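
To illustrate the overflow case: if the bus cannot drain the card at
wire rate, the on-board FIFO fills and the card must drop. The drain
rate and the 64 KB FIFO below are made-up numbers for illustration, not
the em(4) hardware spec:

    #include <stdio.h>

    int main(void)
    {
        double wire  = 125e6; /* bytes/s arriving at GigE wire rate */
        double drain = 100e6; /* assumed sustained PCI drain rate   */
        double fifo  = 64e3;  /* hypothetical on-card FIFO, 64 KB   */

        if (wire > drain)
            printf("FIFO fills in ~%.1f ms, then drops begin\n",
                   fifo / (wire - drain) * 1e3); /* ~2.6 ms */
        return 0;
    }

A sustained shortfall of even a few MB/s overflows a FIFO of that size
in milliseconds, which would show up exactly as steady packet loss at
higher rates.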

> Forget the drops in this figure, because it demonstrates how much the
> hardware can do, rather than lossless transmission.
> Once you have determined the ceiling, you need to keep a margin for
> lossless Tx, and for other overhead, such as context switches, etc.
> 20 MB/s is not good enough for this board; you may expect 28-30 MB/s
> with fine tuning. Unless you will be happy with 28 MB/s, it does not
> make sense to waste time trying to push I/O above 30 MB/s for your
> application if you have another motherboard.
> Again, this motherboard is designed for entertainment boxes, not for
> network I/O based applications.
>
>    -Jin