Date:      Fri, 1 Oct 2010 12:43:33 -0700
From:      Jeremy Chadwick <freebsd@jdc.parodius.com>
To:        Dan Langille <dan@langille.org>
Cc:        freebsd-stable@freebsd.org
Subject:   Re: zfs send/receive: is this slow?
Message-ID:  <20101001194333.GA51297@icarus.home.lan>
In-Reply-To: <45cfd27021fb93f9b0877a1596089776.squirrel@nyi.unixathome.org>
References:  <a263c3beaeb0fa3acd82650775e31ee3.squirrel@nyi.unixathome.org> <45cfd27021fb93f9b0877a1596089776.squirrel@nyi.unixathome.org>

On Fri, Oct 01, 2010 at 02:51:12PM -0400, Dan Langille wrote:
> 
> On Wed, September 29, 2010 2:04 pm, Dan Langille wrote:
> > $ zpool iostat 10
> >                capacity     operations    bandwidth
> > pool         used  avail   read  write   read  write
> > ----------  -----  -----  -----  -----  -----  -----
> > storage     7.67T  5.02T    358     38  43.1M  1.96M
> > storage     7.67T  5.02T    317    475  39.4M  30.9M
> > storage     7.67T  5.02T    357    533  44.3M  34.4M
> > storage     7.67T  5.02T    371    556  46.0M  35.8M
> > storage     7.67T  5.02T    313    521  38.9M  28.7M
> > storage     7.67T  5.02T    309    457  38.4M  30.4M
> > storage     7.67T  5.02T    388    589  48.2M  37.8M
> > storage     7.67T  5.02T    377    581  46.8M  36.5M
> > storage     7.67T  5.02T    310    559  38.4M  30.4M
> > storage     7.67T  5.02T    430    611  53.4M  41.3M
> 
> Now that I'm using mbuffer:
> 
> $ zpool iostat 10
>                capacity     operations    bandwidth
> pool         used  avail   read  write   read  write
> ----------  -----  -----  -----  -----  -----  -----
> storage     9.96T  2.73T  2.01K    131   151M  6.72M
> storage     9.96T  2.73T    615    515  76.3M  33.5M
> storage     9.96T  2.73T    360    492  44.7M  33.7M
> storage     9.96T  2.73T    388    554  48.3M  38.4M
> storage     9.96T  2.73T    403    562  50.1M  39.6M
> storage     9.96T  2.73T    313    468  38.9M  28.0M
> storage     9.96T  2.73T    462    677  57.3M  22.4M
> storage     9.96T  2.73T    383    581  47.5M  21.6M
> storage     9.96T  2.72T    142    571  17.7M  15.4M
> storage     9.96T  2.72T     80    598  10.0M  18.8M
> storage     9.96T  2.72T    718    503  89.1M  13.6M
> storage     9.96T  2.72T    594    517  73.8M  14.1M
> storage     9.96T  2.72T    367    528  45.6M  15.1M
> storage     9.96T  2.72T    338    520  41.9M  16.4M
> storage     9.96T  2.72T    348    499  43.3M  21.5M
> storage     9.96T  2.72T    398    553  49.4M  14.4M
> storage     9.96T  2.72T    346    481  43.0M  6.78M
> 
> If anything, it's slower.
> 
> The above was without -s 128.  The following used that setting:
> 
>  $ zpool iostat 10
>                capacity     operations    bandwidth
> pool         used  avail   read  write   read  write
> ----------  -----  -----  -----  -----  -----  -----
> storage     9.78T  2.91T  1.98K    137   149M  6.92M
> storage     9.78T  2.91T    761    577  94.4M  42.6M
> storage     9.78T  2.91T    462    411  57.4M  24.6M
> storage     9.78T  2.91T    492    497  61.1M  27.6M
> storage     9.78T  2.91T    632    446  78.5M  22.5M
> storage     9.78T  2.91T    554    414  68.7M  21.8M
> storage     9.78T  2.91T    459    434  57.0M  31.4M
> storage     9.78T  2.91T    398    570  49.4M  32.7M
> storage     9.78T  2.91T    338    495  41.9M  26.5M
> storage     9.78T  2.91T    358    526  44.5M  33.3M
> storage     9.78T  2.91T    385    555  47.8M  39.8M
> storage     9.78T  2.91T    271    453  33.6M  23.3M
> storage     9.78T  2.91T    270    456  33.5M  28.8M

For what it's worth, this mimics the behaviour I saw long ago when using
flexbackup[1] (which transfers over SSH) to back up numerous machines on
our local gigE network.  flexbackup strongly advocates using mbuffer or
buffer to buffer I/O between the source and destination ends of the
pipeline.  What I saw were I/O rates that were either identical or, most
of the time, worse whenever mbuffer was in the pipeline, regardless of
what I chose for -s and -m.
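
(Roughly the sort of pipeline flexbackup builds, sketched from memory
with made-up paths and sizes:

  $ tar -cf - /home | mbuffer -m 256M | \
      ssh backuphost 'mbuffer -m 256M > /backup/home.tar'

The buffer on each end is supposed to absorb bursts so that neither the
archiver nor the network stalls waiting on the other; in practice I
never measured a benefit.)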

I switched to rsnapshot (which uses rsync over SSH) for a number of
reasons that are outside the scope of this thread.  I also don't want to
get into a discussion of the I/O bottlenecks in stock OpenSSH (vs. one
patched with the high-performance patches); the point is that mbuffer
either did absolutely nothing or made things worse.  This[2] didn't
impress me either.

[1]: http://www.edwinh.org/flexbackup/
[2]: http://www.edwinh.org/flexbackup/faq.html#Common%20problems4

-- 
| Jeremy Chadwick                                   jdc@parodius.com |
| Parodius Networking                       http://www.parodius.com/ |
| UNIX Systems Administrator                  Mountain View, CA, USA |
| Making life hard for others since 1977.              PGP: 4BD6C0CB |



