From owner-freebsd-stable@FreeBSD.ORG  Fri Oct  1 19:43:34 2010
Date: Fri, 1 Oct 2010 12:43:33 -0700
From: Jeremy Chadwick
To: Dan Langille
Cc: freebsd-stable@freebsd.org
Subject: Re: zfs send/receive: is this slow?
Message-ID: <20101001194333.GA51297@icarus.home.lan>
In-Reply-To: <45cfd27021fb93f9b0877a1596089776.squirrel@nyi.unixathome.org>

On Fri, Oct 01, 2010 at 02:51:12PM -0400, Dan Langille wrote:
> 
> On Wed, September 29, 2010 2:04 pm, Dan Langille wrote:
> > $ zpool iostat 10
> >                capacity     operations    bandwidth
> > pool         used  avail   read  write   read  write
> > ----------  -----  -----  -----  -----  -----  -----
> > storage     7.67T  5.02T    358     38  43.1M  1.96M
> > storage     7.67T  5.02T    317    475  39.4M  30.9M
> > storage     7.67T  5.02T    357    533  44.3M  34.4M
> > storage     7.67T  5.02T    371    556  46.0M  35.8M
> > storage     7.67T  5.02T    313    521  38.9M  28.7M
> > storage     7.67T  5.02T    309    457  38.4M  30.4M
> > storage     7.67T  5.02T    388    589  48.2M  37.8M
> > storage     7.67T  5.02T    377    581  46.8M  36.5M
> > storage     7.67T  5.02T    310    559  38.4M  30.4M
> > storage     7.67T  5.02T    430    611  53.4M  41.3M
> 
> Now that I'm using mbuffer:
> 
> $ zpool iostat 10
>                capacity     operations    bandwidth
> pool         used  avail   read  write   read  write
> ----------  -----  -----  -----  -----  -----  -----
> storage     9.96T  2.73T  2.01K    131   151M  6.72M
> storage     9.96T  2.73T    615    515  76.3M  33.5M
> storage     9.96T  2.73T    360    492  44.7M  33.7M
> storage     9.96T  2.73T    388    554  48.3M  38.4M
> storage     9.96T  2.73T    403    562  50.1M  39.6M
> storage     9.96T  2.73T    313    468  38.9M  28.0M
> storage     9.96T  2.73T    462    677  57.3M  22.4M
> storage     9.96T  2.73T    383    581  47.5M  21.6M
> storage     9.96T  2.72T    142    571  17.7M  15.4M
> storage     9.96T  2.72T     80    598  10.0M  18.8M
> storage     9.96T  2.72T    718    503  89.1M  13.6M
> storage     9.96T  2.72T    594    517  73.8M  14.1M
> storage     9.96T  2.72T    367    528  45.6M  15.1M
> storage     9.96T  2.72T    338    520  41.9M  16.4M
> storage     9.96T  2.72T    348    499  43.3M  21.5M
> storage     9.96T  2.72T    398    553  49.4M  14.4M
> storage     9.96T  2.72T    346    481  43.0M  6.78M
> 
> If anything, it's slower.
> 
> The above was without -s 128.
> The following used that setting:
> 
> $ zpool iostat 10
>                capacity     operations    bandwidth
> pool         used  avail   read  write   read  write
> ----------  -----  -----  -----  -----  -----  -----
> storage     9.78T  2.91T  1.98K    137   149M  6.92M
> storage     9.78T  2.91T    761    577  94.4M  42.6M
> storage     9.78T  2.91T    462    411  57.4M  24.6M
> storage     9.78T  2.91T    492    497  61.1M  27.6M
> storage     9.78T  2.91T    632    446  78.5M  22.5M
> storage     9.78T  2.91T    554    414  68.7M  21.8M
> storage     9.78T  2.91T    459    434  57.0M  31.4M
> storage     9.78T  2.91T    398    570  49.4M  32.7M
> storage     9.78T  2.91T    338    495  41.9M  26.5M
> storage     9.78T  2.91T    358    526  44.5M  33.3M
> storage     9.78T  2.91T    385    555  47.8M  39.8M
> storage     9.78T  2.91T    271    453  33.6M  23.3M
> storage     9.78T  2.91T    270    456  33.5M  28.8M

For what it's worth, this mimics the behaviour I saw long ago when using
flexbackup[1] (which used SSH) to back up numerous machines on our local
gigE network.  flexbackup strongly advocates using mbuffer or afio to
buffer I/O between source and destination.  What I witnessed were I/O
rates that were either identical or (most of the time) worse when
mbuffer was used, regardless of what I chose for -s and -m.

I switched to rsnapshot (which uses rsync via SSH) for a number of
reasons that are outside the scope of this topic.  I don't care to get
into a discussion about the I/O bottlenecks stock OpenSSH has (vs. a
build with the high-performance patches) either; the point is that
mbuffer either did absolutely nothing or made things worse.  This[2]
didn't impress me either.

[1]: http://www.edwinh.org/flexbackup/
[2]: http://www.edwinh.org/flexbackup/faq.html#Common%20problems4

-- 
| Jeremy Chadwick                                   jdc@parodius.com |
| Parodius Networking                       http://www.parodius.com/ |
| UNIX Systems Administrator                  Mountain View, CA, USA |
| Making life hard for others since 1977.              PGP: 4BD6C0CB |
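
For anyone trying to reproduce the comparison above, here is a rough
sketch of the kind of pipeline being measured.  The host name
(backuphost), pool/dataset/snapshot names, port, and buffer sizes below
are placeholders for illustration only, not the exact commands used in
this thread:

# Plain zfs send/receive over ssh, no extra buffering:
$ zfs send storage/data@snap | ssh backuphost zfs receive -d tank

# The same transfer with mbuffer on both ends of the ssh pipe;
# -s is the block size, -m the size of the in-memory buffer:
$ zfs send storage/data@snap | mbuffer -q -s 128k -m 1G \
    | ssh backuphost 'mbuffer -q -s 128k -m 1G | zfs receive -d tank'

# mbuffer can also carry the stream over a raw TCP socket and bypass
# ssh entirely.  On the receiver:
$ mbuffer -q -s 128k -m 1G -I 9090 | zfs receive -d tank
# ...and on the sender:
$ zfs send storage/data@snap | mbuffer -q -s 128k -m 1G -O backuphost:9090

# Watch throughput on the receiving pool while the stream runs:
$ zpool iostat 10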