From owner-freebsd-fs@FreeBSD.ORG Sat Jan 1 18:36:43 2011
From: Attila Nagy <bra@fsn.hu>
Date: Sat, 01 Jan 2011 19:18:48 +0100
To: Martin Matuska
Cc: freebsd-fs@FreeBSD.org, freebsd-stable@FreeBSD.org
Subject: Re: New ZFSv28 patchset for 8-STABLE
Message-ID: <4D1F7008.3050506@fsn.hu>

On 12/16/2010 01:44 PM, Martin Matuska wrote:
> Link to the patch:
>
> http://people.freebsd.org/~mm/patches/zfs/v28/stable-8-zfsv28-20101215.patch.xz

I've used this one: http://people.freebsd.org/~mm/patches/zfs/v28/stable-8-zfsv28-20101223-nopython.patch.xz on an amd64 server with 8 GB RAM, acting as a file server over ftp/http/rsync. The content is mounted read-only with nullfs into the jails, and the daemons use sendfile (ftp and http).

The effects can be seen here: http://people.fsn.hu/~bra/freebsd/20110101-zfsv28-fbsd/ The exact moment of the switch is visible on zfs_mem-week.png, where the L2ARC is discarded.

What I see:
- increased CPU load
- decreased L2ARC hit rate and decreased SSD (ad[46]) traffic, therefore increased hard disk load (IOPS graph)

Maybe I could accept the higher system load as normal, because a lot changed between v15 and v28 (although I was hoping that with the same feature set it would need less CPU), but such a radical drop in the L2ARC hit rate looks like a major issue somewhere. As you can see from the memory stats, I have enough kernel memory to hold the L2 headers, so the L2 devices did fill up to their maximum capacity.

Any ideas on what could cause these? I haven't upgraded the pool version, and nothing was changed in the pool or in the file system.

Thanks,

From owner-freebsd-fs@FreeBSD.ORG Sat Jan 1 19:09:33 2011
From: Artem Belevich <artemb@gmail.com>
Date: Sat, 1 Jan 2011 11:09:31 -0800
To: Attila Nagy
Cc: freebsd-fs@freebsd.org, freebsd-stable@freebsd.org
Subject: Re: New ZFSv28 patchset for 8-STABLE

On Sat, Jan 1, 2011 at 10:18 AM, Attila Nagy wrote:
> What I see:
> - increased CPU load
> - decreased L2ARC hit rate and decreased SSD (ad[46]) traffic, therefore
> increased hard disk load (IOPS graph)
> ...
> Any ideas on what could cause these? I haven't upgraded the pool version,
> and nothing was changed in the pool or in the file system.

The fact that the L2ARC is full does not mean that it contains the right data. The initial L2ARC warm-up happens at a much higher rate than the rate at which the L2ARC is updated once it has been filled; even the accelerated warm-up took almost a day in your case. For the L2ARC to warm up properly, you may have to wait quite a bit longer. My guess is that it should slowly improve over the next few days, as data flows through the L2ARC and the bits that are hit more often take up residence there.
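[The two feed rates described above correspond to the L2ARC feed thread's tunables: during warm-up each feed interval may write up to l2arc_write_max + l2arc_write_boost bytes, and only l2arc_write_max once the device has filled (on FreeBSD these surface as vfs.zfs.l2arc_* sysctls). A back-of-the-envelope sketch, assuming the stock 8 MB defaults, a 1-second feed interval, and a hypothetical 60 GB cache device, shows why replacing a device's contents is much slower than filling it:]

```python
def fill_time_hours(device_bytes, write_max, write_boost=0, feed_secs=1.0):
    """Rough lower bound on the time to write `device_bytes` into the
    L2ARC, given a per-feed-interval write cap. During warm-up the cap
    is write_max + write_boost; afterwards it is write_max alone."""
    per_interval = write_max + write_boost
    return device_bytes * feed_secs / per_interval / 3600

MB = 1024 * 1024
GB = 1024 * MB
ssd = 60 * GB  # hypothetical 60 GB cache device

# Warm-up: up to write_max + write_boost per interval.
warm = fill_time_hours(ssd, 8 * MB, write_boost=8 * MB)
# Steady state: write_max only, so rewriting the whole device's
# contents takes at least twice as long as the initial fill.
steady = fill_time_hours(ssd, 8 * MB)
print(f"{warm:.2f}h warm-up fill, {steady:.2f}h steady-state rewrite")
```

[These are lower bounds on bandwidth alone; the feed thread only writes blocks that are actually eligible for eviction from the ARC, so the real warm-up, as seen in this thread, can take a day or more.]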
The larger your data set, the longer it will take the L2ARC to catch the right data.

Do you have similar graphs from the pre-patch system just after a reboot? I suspect they would show similarly abysmal L2ARC hit rates initially, too.

--Artem

From owner-freebsd-fs@FreeBSD.ORG Sat Jan 1 19:23:11 2011
From: Attila Nagy <bra@fsn.hu>
Date: Sat, 01 Jan 2011 20:23:08 +0100
To: Artem Belevich
Cc: freebsd-fs@freebsd.org, freebsd-stable@freebsd.org
Subject: Re: New ZFSv28 patchset for 8-STABLE
Message-ID: <4D1F7F1C.9090106@fsn.hu>

On 01/01/2011 08:09 PM, Artem Belevich wrote:
> The fact that the L2ARC is full does not mean that it contains the right
> data. The initial L2ARC warm-up happens at a much higher rate than the
> rate at which the L2ARC is updated once it has been filled.
> ...
> Do you have similar graphs from the pre-patch system just after a reboot?
> I suspect they would show similarly abysmal L2ARC hit rates initially, too.

Sadly no, but I remember the hit rate increasing as the cache grew; that's why I waited one and a half days before writing the email. Currently the hit rate is at the same level it was right after the reboot... We'll see after a few days.
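[Whether the cache is still warming up is easier to judge from interval hit rates than from cumulative ones: the lifetime hits/(hits+misses) ratio is dragged down by the cold period right after the switch and can stay flat even while recent traffic is being served well. A minimal sketch (the counter values are hypothetical; on FreeBSD the cumulative counters live under the kstat.zfs.misc.arcstats sysctl tree, e.g. l2_hits and l2_misses):]

```python
def interval_hit_rate(prev, curr):
    """Hit rate between two cumulative (hits, misses) snapshots.

    Cumulative counters only ever grow, so the lifetime ratio is
    dominated by the cold period right after boot; differencing two
    snapshots shows whether the cache is actually improving.
    """
    delta_hits = curr[0] - prev[0]
    delta_misses = curr[1] - prev[1]
    total = delta_hits + delta_misses
    return delta_hits / total if total else 0.0

# Hypothetical counter values: the lifetime rate still looks poor,
# while the most recent interval is already at 80%, i.e. warming up.
prev = (4_000, 6_000)   # snapshot at time t0
curr = (4_800, 6_200)   # snapshot at time t1
print(interval_hit_rate((0, 0), curr))  # lifetime rate, about 0.44
print(interval_hit_rate(prev, curr))    # last interval: 0.8
```

[Sampling the two sysctls periodically and differencing them, as an arcstat-style script would, is enough to tell a cache that is warming up from one that is genuinely missing its working set.]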