Date:      Tue, 25 Nov 2008 15:09:19 -0500
From:      "Adrian Chadd" <adrian@freebsd.org>
To:        vadim_nuclight@mail.ru
Cc:        freebsd-performance@freebsd.org
Subject:   Re: hwpmc granularity and 6.4 network performance
Message-ID:  <d763ac660811251209l7aa50960y8feff1845f90944f@mail.gmail.com>
In-Reply-To: <slrngil402.2di4.vadim_nuclight@server.filona.x88.info>
References:  <slrngil402.2di4.vadim_nuclight@server.filona.x88.info>

A few things!

* Since you've changed two things - hwpmc _AND_ the kernel version -
you can't easily conclude which one (if any!) has any influence on
Giant showing up in your top output. I suggest recompiling without
hwpmc and seeing if the behaviour changes.
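
One way to make that comparison cheap (a sketch; I'm assuming the 6.4
backport keeps hwpmc loadable, as the "(also a loadable module)" comment in
your config suggests): build the kernel with only the hooks and toggle the
driver at runtime, so the same kernel can be tested with and without hwpmc:

  options         HWPMC_HOOKS             # keep the hooks compiled in
  #device         hwpmc                   # drop the static driver

  # kldload hwpmc        (before a profiling run)
  # kldunload hwpmc      (to test without it)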

* The gprof utility expects something resembling "time" for the
sampling data, but pmcstat doesn't record time; it records events.
The counts you see in gprof are event counts, so read "seconds" as
"events" when interpreting the gprof output.

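As a worked example using your own numbers below: ipfw_chk's 219129.00
"seconds" are really 219129 sampled instruction-retired events, and
219129 / 692213 = 31.7%, which is exactly the %time column.
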
* I don't know whether the pmc code backported to 6.4 handles stack call
graphs or not. An easy way to check: pmcstat -R sample.out | more ; see
whether you get only "sample" lines, or both "sample" and "callgraph" lines.
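
A quick way to count them (assuming the dump really labels the records
"sample" and "callgraph" as above):

  # pmcstat -R sample.out | grep -c callgraph

If that prints 0, you only have flat samples, and gprof has no caller/callee
data to attribute.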

* I bet that ipfw_chk is a big enough hint. How big is your ipfw ruleset? :)
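
ipfw_chk evaluates rules in list order for every packet, so a quick sanity
check is simply the rule count:

  # ipfw show | wc -l

If that number is large, consider collapsing address lists into tables
(ipfw table) and jumping over irrelevant blocks with skipto, so the common
path touches as few rules as possible.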

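On your other question, which arms of the big switch in ipfw_chk are hot:
the raw sample records carry instruction addresses, so one low-tech option
(a sketch, not a polished recipe; the address and object path below are
made up, substitute your own) is to take the most frequent addresses from
the "pmcstat -R" dump and map them back to source lines with addr2line
against the debug kernel:

  # addr2line -f -e /usr/obj/usr/src/sys/YOURKERNEL/kernel.debug 0xc06f1234

With DEBUG=-g this prints the function name and file:line for each address,
which is enough to see where in the switch the samples cluster.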


Adrian

2008/11/24 Vadim Goncharov <vadim_nuclight@mail.ru>:
> Hi!
>
> I've recently performed an upgrade of a busy production router from 6.2 to 6.4-PRE.
> I added two lines to my kernel config and did the usual make buildkernel:
>
> device          hwpmc                   # Driver (also a loadable module)
> options         HWPMC_HOOKS             # Other necessary kernel hooks
>
> After rebooting with the new world and kernel, I noticed that the CPU load has
> increased slightly (not measured; it differs every second anyway, as users do
> not generate steady traffic), and in top -S, 'swi1: net' is now often in state
> *Giant, which it never was on 6.2, even though the kernel config did not change
> much and device polling is still used. What could have happened here?
>
> Another question: I've read the "Sixty second HWPMC howto" and tried to find
> out what exactly eats my CPU. BTW, those instructions did not apply exactly on
> my machine; this is what I did:
>
> # cd /tmp
> # pmcstat -S instructions -O /tmp/sample.out
> # pmcstat -R /tmp/sample.out -k /boot/kernel/kernel -g
> # gprof /boot/kernel/kernel p4-instr-retired/kernel.gmon > kernel.gmon.result
>
> Now in file kernel.gmon.result I see the following:
>
> granularity: each sample hit covers 4 byte(s) for 0.00% of 692213.00 seconds
>
>                                  called/total       parents
> index  %time    self descendents  called+self    name           index
>                                  called/total       children
>
>                                                     <spontaneous>
> [1]     31.7 219129.00        0.00                 ipfw_chk [1]
>
> -----------------------------------------------
>
>
> [...]
>
> Why does it show 0.00 in the descendents column?
>
> The next listing, the flat profile, is more readable, but some of its columns
> are empty again:
>
> granularity: each sample hit covers 4 byte(s) for 0.00% of 692213.00 seconds
>
>  %   cumulative   self              self     total
>  time   seconds   seconds    calls  ms/call  ms/call  name
>  31.7  219129.00 219129.00                             ipfw_chk [1]
>  10.4  291179.00 72050.00                             bcmp [2]
>  6.1  333726.00 42547.00                             rn_match [3]
>  2.7  352177.00 18451.00                             generic_bzero [4]
>  2.4  368960.00 16783.00                             strncmp [5]
>
> OK, I can conclude from this that I should optimize my ipfw ruleset, but
> that's all. I know from the sources that ipfw_chk() is a big function with a
> bunch of 'case's in a large 'switch', and I want to know which parts of that
> switch are executed more often. The listing says the granularity is 4 bytes,
> so I assume there is a sample counter for each 4-byte chunk of binary code,
> which means the profile must contain that information. My kernel is compiled with:
>
> makeoptions     DEBUG=-g
>
> so kgdb knows where the instructions for each line of source code are.
> How can I obtain this information from the profile? It would also be useful
> to know which places call that bcmp() and rn_match().
>
> --
> WBR, Vadim Goncharov. ICQ#166852181       mailto:vadim_nuclight@mail.ru
> [Moderator of RU.ANTI-ECOLOGY][FreeBSD][http://antigreen.org][LJ:/nuclight]