Date:      Wed, 29 Oct 2008 13:37:44 +0100
From:      Bartosz Giza <gizmen@blurp.pl>
To:        Alexander Motin <mav@freebsd.org>
Cc:        freebsd-net@freebsd.org
Subject:   Re: two NIC on 2 core system (scheduling problem)
Message-ID:  <200810291337.44899.gizmen@blurp.pl>
In-Reply-To: <49083CBD.1000701@FreeBSD.org>
References:  <1225203780.00029971.1225190402@10.7.7.3> <200810290953.28237.gizmen@blurp.pl> <49083CBD.1000701@FreeBSD.org>

Wednesday 29 of October 2008 11:36:45 Alexander Motin wrote:
> Bartosz Giza wrote:
> > Tuesday 28 of October 2008 19:10:43 Alexander Motin wrote:
> >> Bartosz Giza wrote:
> >>>> The CPU time you see there includes much more than just the card
> >>>> handling itself. It also includes the CPU time of most parts of the
> >>>> network stack used to process the received packet. So if you have NAT,
> >>>> a big firewall, netgraph or any other CPU-hungry actions done with
> >>>> packets incoming via em0, you will see such results.
> >>>> Even more interesting, if the bge0 or fxp0 cards require much
> >>>> CPU time to send a packet, this time will also be accounted to the
> >>>> em0 process.
> >
> > I have checked this and you are right. When I turned off ipfw, the taskq
> > process started to use less CPU. But what is still strange is why
> > processing from other cards is counted in em0 taskq?
>
> What do you mean by "processing from other cards"? em0 taskq counts all
> processing caused by packets incoming via em0, up to and including the
> processing of their transmission by the bge/fxp drivers. The same goes
> for bge/fxp. If the bge/fxp/em drivers had separate transmission
> processes you would see them, but they don't, so their CPU time is
> accounted to the caller.

Ok, now I think I understand this. If a packet enters via em0, all processing
by packet filters (and others) all the way up to sending the packet via the
other interface is counted to em0 taskq (even the overhead of the packet
filter when the packet leaves the other interface)? Basically the overhead of
passing the packet through the packet filter twice (on in and out).
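
Just to check that I have this right, here is a rough user-space sketch of
how I picture that path (the names like pfil_in() or bge0_transmit() are
invented for illustration, this is not the real driver or ipfw code):

#include <stdio.h>

struct pkt { int id; };

static void pfil_in(struct pkt *p)       { printf("pkt %d: ipfw inbound check\n", p->id); }
static void pfil_out(struct pkt *p)      { printf("pkt %d: ipfw outbound check\n", p->id); }
static void bge0_transmit(struct pkt *p) { printf("pkt %d: handed to bge0 tx routine\n", p->id); }

/*
 * One synchronous call chain: everything below runs in the context of
 * the em0 taskq thread, so top(1) charges all of it, including the
 * second firewall pass and the bge0 transmit work, to "em0 taskq".
 */
static void em0_taskq_rx(struct pkt *p)
{
    pfil_in(p);        /* firewall on the way in                        */
    /* forwarding decision picks the outgoing interface here            */
    pfil_out(p);       /* firewall on the way out                       */
    bge0_transmit(p);  /* outgoing driver's send path, still our thread */
}

int main(void)
{
    struct pkt p = { .id = 1 };
    em0_taskq_rx(&p);
    return 0;
}

If it really is one synchronous call chain like that, I can see why the
whole forwarding cost ends up charged to the em0 taskq thread.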

> > This is quite strange, and in that way the em0 taskq process is using
> > more CPU on one of the cores. So I think the best would be to have only
> > em NICs, because the processing of the packets would be split across
> > those taskq processes, is that right?
>
> em0 processes packets in a separate process named taskq; bge does it
> directly in the interrupt handler process. There is no principal
> difference for you, I think.

So now I am lost again. If packet filtering on the bge card is counted to
the irq17: bge0 process, I would think it should use more CPU.
From what you wrote there should be no difference for me whether the card
uses a taskq or an irq; those processes do exactly the same thing. If that
is true, why is there so much difference in CPU usage:

   20 root       1 -68    -     0K     8K -      0 161:01 18.75% em0 taskq
   21 root       1 -68    -     0K     8K WAIT   1 100:10  5.47% irq17: bge0
   23 root       1 -68    -     0K     8K WAIT   0  75:31  2.98% irq16: fxp1

If what you wrote is true, the overhead of incoming packets on bge0 should
be counted to irq17: bge0, so I don't understand why the CPU usage on em0 is
so big. From what you are saying, irq17 and em0 taskq should have similar
usage. Even more, bge0 passes about two times more traffic than em0. I
simply don't understand this.
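
To make sure I am reading you correctly, here is another made-up sketch of
the difference as I understand it: em only queues the work from its
interrupt handler and a separate taskqueue thread does the heavy lifting,
while bge does the same work directly in its interrupt thread (again,
invented names, not the real drivers):

#include <pthread.h>
#include <stdio.h>

static void process_packet(const char *who)
{
    /* stack processing: ipfw, forwarding, transmit via the other NIC */
    printf("%s: full stack processing happens here\n", who);
}

/*
 * em model: the interrupt handler only queues the work; the heavy
 * lifting runs later in a dedicated thread, which top(1) shows as
 * "em0 taskq".
 */
static void *em0_taskq_thread(void *arg)
{
    (void)arg;
    process_packet("em0 taskq");
    return NULL;
}

static void em0_intr(pthread_t *tq)
{
    /* roughly "enqueue the task and wake the worker" */
    pthread_create(tq, NULL, em0_taskq_thread, NULL);
}

/*
 * bge model: the same processing runs inline in the interrupt thread,
 * so top(1) shows the time under "irq17: bge0" instead.
 */
static void bge0_intr(void)
{
    process_packet("irq17: bge0");
}

int main(void)
{
    pthread_t tq;
    em0_intr(&tq);
    bge0_intr();
    pthread_join(tq, NULL);
    return 0;
}

If that picture is right, the same work gets done either way and the only
difference is which thread name top(1) charges it to, so I still don't see
where the extra em0 time comes from.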

> > Ok, good to know. But how is the firewall overhead counted when I have
> > only bge cards? They don't use a taskq, so I assume I would see this
> > as system usage, correct?
>
> You would see a lot of interrupt time in this case.




