From owner-freebsd-hackers@FreeBSD.ORG Fri Mar 25 07:21:57 2005
Date: Thu, 24 Mar 2005 23:21:54 -0800 (PST)
From: Yan Yu <yanyu@CS.UCLA.EDU>
To: freebsd-hackers@freebsd.org
Subject: a Q on measuring system performance.
In-Reply-To: <20050319.230230.35850068.imp@bsdimp.com>
References: <20050318160528.GQ51688@smp500.sitetronics.com> <20050319080215.GX51688@smp500.sitetronics.com> <20050319.230230.35850068.imp@bsdimp.com>

Hello, all,

I have added some instrumentation code in various places relating to
file operations inside the kernel, e.g. fdalloc(), fdused(),
fdunused(), fdfree(), etc., and I am trying to measure the overhead
this instrumentation adds.  My plan is to run something like the
following in a user-space program:

--------------------------------------------
struct timeval prev_time, cur_time;
long t_lapse;
FILE *fp;
int i, cnt = 0;

gettimeofday(&prev_time, NULL);
for (i = 0; i < 1000; i++) {
    fp = fopen("tmp", "r");
    if (fp == NULL) {
        break;
    }
    cnt++;
    fclose(fp);     /* close each stream, or the loop runs out of
                     * file descriptors before it finishes */
}
gettimeofday(&cur_time, NULL);
t_lapse = misc_tv_offset(&cur_time, &prev_time);
--------------------------------------------

I would run this against both the unmodified kernel and the
instrumented kernel and compare the two values of t_lapse.
My concern is that t_lapse includes context-switch time whenever the
process is taken off the run queue.

I also ran gprof on the program; some of the relevant data is:

  %   cumulative   self              self     total
 time   seconds   seconds    calls  ms/call  ms/call  name
 80.0      0.01      0.01     1000     0.01     0.01  __sys_open [3]
 20.0      0.01      0.00     1000     0.00     0.00  __sfp [4]
  0.0      0.01      0.00     1987     0.00     0.00  memcpy [6]
  0.0      0.01      0.00     1000     0.00     0.00  __sflags [283]
  0.0      0.01      0.00     1000     0.00     0.01  fopen [1]

I am wondering whether I should trust gprof instead, and take
0.01 ms/call as the cost of the related file operations.  Or is there
some other, better way to measure this?

Many thanks in advance!
yan