Date:      Sun, 03 Oct 2010 21:11:04 -0400
From:      Dan Langille <dan@langille.org>
To:        Artem Belevich <fbsdlist@src.cx>,  freebsd-stable <freebsd-stable@freebsd.org>
Subject:   Re: zfs send/receive: is this slow?
Message-ID:  <4CA929A8.6000708@langille.org>
In-Reply-To: <4CA68BBD.6060601@langille.org>
References:  <a263c3beaeb0fa3acd82650775e31ee3.squirrel@nyi.unixathome.org> <45cfd27021fb93f9b0877a1596089776.squirrel@nyi.unixathome.org> <AANLkTik0aTDDSNRUBvfX5sMfhW+-nfSV9Q89v+eJo0ov@mail.gmail.com> <4C511EF8-591C-4BB9-B7AA-30D5C3DDC0FF@langille.org> <AANLkTinyHZ1r39AYrV_Wwc2H3B=xMv3vbeDLY2Gc+kez@mail.gmail.com> <4CA68BBD.6060601@langille.org>

On 10/1/2010 9:32 PM, Dan Langille wrote:
> On 10/1/2010 7:00 PM, Artem Belevich wrote:
>> On Fri, Oct 1, 2010 at 3:49 PM, Dan Langille<dan@langille.org> wrote:
>>> FYI: this is all on the same box.
>>
>> In one of the previous emails you've used this command line:
>>> # mbuffer -s 128k -m 1G -I 9090 | zfs receive
>>
>> You've used mbuffer in network client mode. I assumed that you did do
>> your transfer over network.
>>
>> If you're running send/receive locally just pipe the data through
>> mbuffer -- zfs send|mbuffer|zfs receive
>
> As soon as I opened this email I knew what it would say.
>
>
> # time zfs send storage/bacula@transfer | mbuffer | zfs receive
> storage/compressed/bacula-mbuffer
> in @ 197 MB/s, out @ 205 MB/s, 1749 MB total, buffer 0% full
>
>
> $ zpool iostat 10 10
>                 capacity     operations    bandwidth
> pool         used  avail   read  write   read  write
> ----------  -----  -----  -----  -----  -----  -----
> storage     9.78T  2.91T  1.11K    336  92.0M  17.3M
> storage     9.78T  2.91T    769    436  95.5M  30.5M
> storage     9.78T  2.91T    797    853  98.9M  78.5M
> storage     9.78T  2.91T    865    962   107M  78.0M
> storage     9.78T  2.91T    828    881   103M  82.6M
> storage     9.78T  2.90T   1023  1.12K   127M  91.0M
> storage     9.78T  2.90T  1.01K  1.01K   128M  89.3M
> storage     9.79T  2.90T    962  1.08K   119M  89.1M
> storage     9.79T  2.90T  1.09K  1.25K   139M  67.8M
>
>
> Big difference. :)
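
To spell out the two setups being compared above (the receiver line is the
one quoted earlier; the sender-side -O invocation is my guess at the usual
counterpart, and "otherhost" is just a placeholder):

Network mode (receiver listens on a port, sender pushes into it over TCP):

   receiver# mbuffer -s 128k -m 1G -I 9090 | zfs receive storage/compressed/bacula-mbuffer
   sender# zfs send storage/bacula@transfer | mbuffer -s 128k -m 1G -O otherhost:9090

Local mode (one pipeline on the same box, no TCP hop):

   # zfs send storage/bacula@transfer | mbuffer | zfs receive storage/compressed/bacula-mbuffer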

I'm rerunning my test after having a drive go offline[1], but I'm not 
getting anywhere near the throughput of the previous run:

time zfs send storage/bacula@transfer | mbuffer | zfs receive 
storage/compressed/bacula-buffer

$ zpool iostat 10 10
                capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
storage     6.83T  5.86T      8     31  1.00M  2.11M
storage     6.83T  5.86T    207    481  25.7M  17.8M
storage     6.83T  5.86T    220    516  27.4M  17.2M
storage     6.83T  5.86T    221    523  27.5M  21.0M
storage     6.83T  5.86T    198    430  24.5M  20.4M
storage     6.83T  5.86T    248    528  30.8M  26.7M
storage     6.83T  5.86T    273    508  33.9M  22.6M
storage     6.83T  5.86T    331    499  41.1M  22.7M
storage     6.83T  5.86T    424    662  52.6M  34.7M
storage     6.83T  5.86T    413    605  51.3M  36.7M
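
Both local runs here used mbuffer with its defaults.  If I wanted the same
explicit buffer settings as the earlier network attempt, the pipeline would
presumably look like this (just a sketch of the invocation, not something
I've timed):

   # zfs send storage/bacula@transfer | mbuffer -s 128k -m 1G | zfs receive storage/compressed/bacula-buffer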


[1] - http://docs.freebsd.org/cgi/mid.cgi?4CA73702.5080203


-- 
Dan Langille - http://langille.org/


