From owner-freebsd-stable@freebsd.org Sat Aug 12 15:50:36 2017
Subject: Re: zfs listing and CPU
From: Paul Kraus <paul@kraus-haus.org>
Date: Sat, 12 Aug 2017 11:50:34 -0400
To: "Eugene M. Zheganin"
Cc: freebsd-fs@freebsd.org, freebsd-stable@freebsd.org
List-Id: Production branch of FreeBSD source code

> On Aug 11, 2017, at 2:28 AM, Eugene M. Zheganin wrote:
>
> Why does the zfs listing eat so much of the CPU ?
>
>   PID USERNAME  THR PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
> 47114 root        1  20    0 40432K  3840K db->db  4   0:05  26.84% zfs
> 47099 root        1  20    0 40432K  3840K zio->i 17   0:05  26.83% zfs
> 47106 root        1  20    0 40432K  3840K db->db 21   0:05  26.81% zfs
> 47150 root        1  20    0 40432K  3428K db->db 13   0:03  26.31% zfs
> 47141 root        1  20    0 40432K  3428K zio->i 28   0:03  26.31% zfs
> 47135 root        1  20    0 40432K  3312K g_wait  9   0:03  25.51% zfs
>
> This is from winter 2017 11-STABLE (r310734); one of the 'zfs' processes is cloning, and all the others are 'zfs list -t all'. I have about 25 GB of free RAM. Do I have any chance of speeding this up, maybe with some caching or some sysctl tuning?
> We are using a simple ZFS web API that may issue concurrent or sequential listing requests, so as you can see they sometimes do stack.

How many snapshots do you have? I have only seen this behavior with LOTS of snapshots (not hundreds, but thousands).

What does your `iostat -x 1` look like? I expect that you are probably saturating your drives with random I/O.
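A quick way to check whether the snapshot count is the culprit, and to cut down what each listing request has to fetch (a sketch only; the property list shown is illustrative, and the real fix for a web API may be caching the listing rather than re-running it per request):

```shell
# Guarded so the snippet is harmless on hosts without zfs(8).
if command -v zfs >/dev/null 2>&1; then
    # Count snapshots across all pools; 'zfs list -t all' walks every
    # one of these, so its cost grows with the snapshot count.
    zfs list -H -t snapshot -o name | wc -l

    # Ask only for the properties the API actually needs. '-H' drops
    # the header line for easy scripting, and omitting '-t all' avoids
    # iterating over snapshots entirely.
    zfs list -H -t filesystem -o name,used,avail
else
    echo "zfs not available on this host"
fi
```

Running `iostat -x 1` alongside such a listing will show whether the disks are pegged with small random reads while the `zfs` processes sit in `db->db` / `zio->i` wait states, as in the `top` output above.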