From owner-freebsd-fs@freebsd.org Wed Dec 30 14:00:41 2015
Date: Wed, 30 Dec 2015 09:00:32 -0500 (EST)
From: Rick Macklem <rmacklem@uoguelph.ca>
To: Niels de Vos
Cc: gluster-devel@gluster.org, freebsd-fs
Message-ID: <923007690.145828058.1451484032304.JavaMail.zimbra@uoguelph.ca>
In-Reply-To: <20151230103152.GS13942@ndevos-x240.usersys.redhat.com>
References: <571237035.145690509.1451437960464.JavaMail.zimbra@uoguelph.ca>
 <20151230103152.GS13942@ndevos-x240.usersys.redhat.com>
Subject: Re: [Gluster-devel] FreeBSD port of GlusterFS racks up a lot of CPU usage

Niels de Vos wrote:
> On Tue, Dec 29, 2015 at 08:12:40PM -0500, Rick Macklem wrote:
> > Hi,
> >
> > I've been playing with the FreeBSD port of GlusterFS and it seems
> > to be working ok.
> > I do notice that the daemons use a lot of CPU,
> > even when there is nothing to do (no volumes started, etc.).
> > When I ktrace the daemon, I see a small number of nanosleep() and
> > select() syscalls and lots of poll() syscalls (close to 1000/sec).
> >
> > Looking at libglusterfs/src/event-poll.c, I find:
> >     ret = poll(ufds, size, 1);
> > in a loop. The only thing the code seems to do when poll() times
> > out is a call to event_dispatch_poll_resize().
> >
> > So, is it necessary to call event_dispatch_poll_resize() 1000 times
> > per second?
> > Or is there a way to make event_dispatch_poll_resize() return quickly
> > when there is nothing to do?
>
> I do not think this is critical. A longer timeout should be quite
> acceptable.
>
> > I'm guessing that Linux uses the event-epoll stuff instead of
> > event-poll, so it wouldn't exhibit this. Is that correct?
>
> Well, both. Most (if not all) Linux builds will use event-epoll. But
> that calls epoll_wait() with a timeout of 1 millisecond as well.

Actually, when I look at the 3.7.6 sources in
libglusterfs/src/event-epoll.c, I only find one epoll_wait(), at line #668:
    ret = epoll_wait (event_pool->fd, &event, 1, -1);
so the timeout never happens in this code. (It does have code after the
call to handle the timeout case.) All it seems to do (if it were to time
out) is adjust the # of threads in the event-epoll case, so my hunch is
that a timeout isn't needed?

For the event-poll.c case, it calls event_dispatch_poll_resize(), which
looks like it might add new file descriptors, so someone more familiar
with this code would need to decide if the timeout can be disabled (my
hunch is no, but I'm not familiar with the code).

> > Thanks for any information on this, rick
> >
> > ps: I am tempted to just crank the timeout of 1msec up to 10 or 20msec.
>
> Yes, that is probably what I would do too. And have both poll functions
> use the same timeout, defined in libglusterfs/src/event.h. We could
> make it a configurable option too, but I do not think it is very
> useful to have.
>
> Could you file a bug and/or send a patch for this?

I will try bumping the timeout up in event-poll.c and, if it seems to
reduce CPU usage without causing any obvious grief, I will file a bug
report.

Thanks for your help with this, rick

> Thanks,
> Niels
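
For readers skimming the archive, the loop under discussion has roughly
the shape sketched below. This is not the GlusterFS source, just a
minimal C illustration of a poll()-based dispatch loop whose 1 msec
timeout produces the ~1000 wakeups/sec Rick saw in ktrace, with the
timeout hoisted into a single constant along the lines Niels suggests
for libglusterfs/src/event.h. The names EVENT_POLL_TIMEOUT_MS,
dispatch_ready_fds() and resize_fd_set() are made up for the sketch.

    #include <poll.h>
    #include <stddef.h>

    /* Hypothetical shared constant, standing in for the value the
     * thread proposes defining in libglusterfs/src/event.h.  With 1
     * (the current behaviour) an idle daemon wakes ~1000 times/sec;
     * 10-20 cuts that by an order of magnitude or more. */
    #define EVENT_POLL_TIMEOUT_MS 20

    extern void dispatch_ready_fds(struct pollfd *ufds, size_t size,
                                   int nready);
    extern void resize_fd_set(struct pollfd **ufds, size_t *size);

    void
    event_dispatch_loop(struct pollfd *ufds, size_t size)
    {
            for (;;) {
                    int ret = poll(ufds, size, EVENT_POLL_TIMEOUT_MS);

                    if (ret == 0) {
                            /* Timed out with nothing ready.  In
                             * event-poll.c the only work done here is
                             * the resize check, so a longer timeout
                             * mainly delays how quickly a newly
                             * registered fd gets picked up. */
                            resize_fd_set(&ufds, &size);
                            continue;
                    }
                    if (ret > 0)
                            dispatch_ready_fds(ufds, size, ret);
                    /* ret < 0 (e.g. EINTR) would be handled here. */
            }
    }

By contrast, the epoll path Rick quotes passes -1 as the epoll_wait()
timeout, so that thread blocks until an event arrives and never takes
the timeout branch at all.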