From: Albert Cervin <albert@acervin.com>
To: freebsd-stable@freebsd.org
Date: Tue, 24 Nov 2015 15:00:17 +0100
Subject: ZFS - poor performance with "large" directories

Hi all,

Please feel free to direct me to a list that is more suitable.

We are trying to set up a file server for a web application we are building. The file server runs FreeBSD 10.2 with ZFS, and files are written to it over CIFS via Samba running on the same host. However, we are seeing an exponential decrease in write performance as the number of files in a directory grows: by the time a directory reaches ~6000 files the share becomes unusable, with write times going from a fraction of a second to ten seconds. We ran the same setup on a Linux machine with an ext4 file system, which did NOT suffer from this performance degradation.

Our first reaction was to take Samba out of the equation. I ran a local test where I copied a folder containing a large number of files, and then ran the same test with that folder packed into a single zip archive.
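For anyone who wants to reproduce this without Samba in the picture, something like the following sketch is roughly what I am measuring (the directory path and batch sizes are arbitrary; on our box the target would be a dataset on the pool):

```shell
#!/bin/sh
# Minimal sketch: create files in one directory in batches and time each
# batch, to see whether per-file cost grows with directory size.
# DIR is a placeholder scratch path -- point it at a dataset on the ZFS
# pool to test the actual setup.
DIR="${DIR:-/tmp/dirscale-test}"
N_BATCHES=5
BATCH=200
mkdir -p "$DIR"
b=0
while [ "$b" -lt "$N_BATCHES" ]; do
    start=$(date +%s)
    i=0
    while [ "$i" -lt "$BATCH" ]; do
        # Empty-file creates are enough to exercise directory updates.
        : > "$DIR/f_${b}_${i}"
        i=$((i + 1))
    done
    end=$(date +%s)
    echo "batch $b: $((end - start))s for $BATCH creates"
    b=$((b + 1))
done
```

If the per-batch time climbs as the directory fills, the problem reproduces with plain local creates and Samba can be ruled out.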
So,

    cp -r folder_with_lots_of_files copy_of_folder_with_lots_of_files

gives output that looks like this for the zpool (zpool iostat frosting 1):

                capacity     operations    bandwidth
    pool       alloc   free   read  write   read  write
    ---------- -----  -----  -----  -----  -----  -----
    frosting   48.5G   299G      2      0   267K  8.56K
    frosting   48.5G   299G    401      0  50.2M      0
    frosting   48.6G   299G    384     94  47.9M  7.79M
    frosting   48.6G   299G    471      0  58.9M      0
    frosting   48.6G   299G    492      0  61.4M      0
    frosting   48.6G   299G    393      0  49.0M      0
    frosting   48.6G   299G    426      0  53.3M      0
    frosting   48.6G   299G    421    147  52.5M  9.71M
    frosting   48.6G   299G    507      0  63.4M      0
    frosting   48.6G   299G    376      0  47.0M      0
    frosting   48.6G   299G    447      0  55.8M      0
    frosting   48.6G   299G    433     13  54.2M  1.62M
    frosting   48.6G   299G    431     85  53.8M  6.95M
    frosting   48.6G   299G    288      0  36.1M      0
    frosting   48.6G   299G    329      0  41.2M      0
    frosting   48.6G   299G    340      0  42.4M      0
    frosting   48.6G   299G    398      9  49.8M  1.14M
    frosting   48.6G   299G    324    126  40.4M  7.08M
    frosting   48.6G   299G    391      0  48.9M      0
    frosting   48.6G   299G    261      0  32.5M      0
    frosting   48.6G   299G    314      0  39.3M      0
    frosting   48.6G   299G    317      0  39.6M      0
    frosting   48.6G   299G    346     79  43.3M  6.36M

Are these "holes" in the write speed normal? This is the exact symptom we see when the writes over the network become slow.
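To put a number on those "holes", the iostat output can be piped through a small filter that counts 1-second intervals with zero write operations. This is only a sketch -- the column positions are assumed from the output above, and count_holes.sh is a made-up name:

```shell
#!/bin/sh
# Count zero-write intervals in `zpool iostat <pool> 1` output read from
# stdin. Assumed usage:
#   zpool iostat frosting 1 30 | sh count_holes.sh
awk '
    # Data rows have 7 fields and a numeric alloc column ($2); this skips
    # the header and separator lines. $5 is the write-operations column
    # in the layout shown above.
    NF == 7 && $2 ~ /[0-9]/ {
        total++
        if ($5 == 0) zeros++
    }
    END { printf "zero-write intervals: %d of %d\n", zeros, total }
'
```

A high ratio of zero-write intervals during the many-files copy, versus the single-file copy, would confirm that writes are arriving in bursts rather than streaming steadily.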
If I instead copy a single large file, I get this I/O behavior:

                capacity     operations    bandwidth
    pool       alloc   free   read  write   read  write
    ---------- -----  -----  -----  -----  -----  -----
    frosting   50.1G   298G      7      0   953K  34.5K
    frosting   50.1G   298G    224    215  27.9M  26.8M
    frosting   50.2G   298G    224    364  27.8M  38.6M
    frosting   50.2G   298G    225     57  27.9M  7.23M
    frosting   50.3G   298G    173    477  21.5M  56.1M
    frosting   50.3G   298G    219      0  27.3M      0
    frosting   50.3G   298G    265    353  33.0M  44.0M
    frosting   50.3G   298G    294    172  36.6M  18.3M
    frosting   50.3G   298G    237    436  29.4M  54.2M
    frosting   50.4G   298G    257    108  31.9M  9.69M
    frosting   50.4G   298G    211    382  26.1M  47.5M
    frosting   50.4G   298G    305    162  38.0M  16.4M
    frosting   50.4G   298G    253    369  31.5M  45.9M
    frosting   50.5G   297G    176    177  21.8M  18.0M
    frosting   50.5G   297G    197    167  24.6M  20.9M
    frosting   50.6G   297G    248    375  30.9M  42.8M
    frosting   50.6G   297G    322    605  39.9M  68.0M
    frosting   50.6G   297G    164     36  20.4M  1.57M
    frosting   50.6G   297G    259     96  32.2M  12.0M

This looks more like what I would expect, and is also similar to the I/O behavior we get when copying the folder with many files on an ext4 file system.

Any help or tips for getting this to work would be highly appreciated!

Cheers,
Albert Cervin