Date:      Fri, 1 Oct 2010 18:49:17 -0400
From:      Dan Langille <dan@langille.org>
To:        Artem Belevich <fbsdlist@src.cx>
Cc:        "freebsd-stable@freebsd.org" <freebsd-stable@freebsd.org>
Subject:   Re: zfs send/receive: is this slow?
Message-ID:  <4C511EF8-591C-4BB9-B7AA-30D5C3DDC0FF@langille.org>
In-Reply-To: <AANLkTik0aTDDSNRUBvfX5sMfhW+-nfSV9Q89v+eJo0ov@mail.gmail.com>
References:  <a263c3beaeb0fa3acd82650775e31ee3.squirrel@nyi.unixathome.org> <45cfd27021fb93f9b0877a1596089776.squirrel@nyi.unixathome.org> <AANLkTik0aTDDSNRUBvfX5sMfhW+-nfSV9Q89v+eJo0ov@mail.gmail.com>

FYI: this is all on the same box; no network is involved.

-- 
Dan Langille
http://langille.org/


On Oct 1, 2010, at 5:56 PM, Artem Belevich <fbsdlist@src.cx> wrote:

> Hmm. It helped me a lot when I was replicating ~2TB worth of data
> over GigE. Without mbuffer, things were roughly in the ballpark of
> your numbers; with mbuffer I got around 100MB/s.
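> 
> A typical invocation looks something like this (the host name, dataset
> names, and buffer sizes below are only placeholders):
> 
>  receiver# mbuffer -s 128k -m 1G -I 9090 | zfs receive tank/backup
>  sender# zfs send tank/data@snap | mbuffer -s 128k -m 1G -O receiver:9090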
> 
> Assuming you have two boxes connected via ethernet, it would be good
> to check that nobody is generating PAUSE frames. Some time back I
> discovered that the el-cheapo switch I was using could not keep up
> with traffic bursts and generated tons of PAUSE frames that severely
> limited throughput.
> 
> If you're using Intel adapters, check the xon/xoff counters in "sysctl
> dev.em.0.mac_stats". If you see them increasing, that may explain the
> slow speed.
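> 
> For a quick check, something like this should work (exact counter
> names vary between driver versions):
> 
>  $ sysctl dev.em.0.mac_stats | grep -i xo
> 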
> If you have a switch between your boxes, try bypassing it and
> connecting the boxes directly.
> 
> --Artem
> 
> 
> 
> On Fri, Oct 1, 2010 at 11:51 AM, Dan Langille <dan@langille.org> wrote:
>> 
>> On Wed, September 29, 2010 2:04 pm, Dan Langille wrote:
>>> $ zpool iostat 10
>>>                capacity     operations    bandwidth
>>> pool         used  avail   read  write   read  write
>>> ----------  -----  -----  -----  -----  -----  -----
>>> storage     7.67T  5.02T    358     38  43.1M  1.96M
>>> storage     7.67T  5.02T    317    475  39.4M  30.9M
>>> storage     7.67T  5.02T    357    533  44.3M  34.4M
>>> storage     7.67T  5.02T    371    556  46.0M  35.8M
>>> storage     7.67T  5.02T    313    521  38.9M  28.7M
>>> storage     7.67T  5.02T    309    457  38.4M  30.4M
>>> storage     7.67T  5.02T    388    589  48.2M  37.8M
>>> storage     7.67T  5.02T    377    581  46.8M  36.5M
>>> storage     7.67T  5.02T    310    559  38.4M  30.4M
>>> storage     7.67T  5.02T    430    611  53.4M  41.3M
>> 
>> Now that I'm using mbuffer:
>> 
>> $ zpool iostat 10
>>               capacity     operations    bandwidth
>> pool         used  avail   read  write   read  write
>> ----------  -----  -----  -----  -----  -----  -----
>> storage     9.96T  2.73T  2.01K    131   151M  6.72M
>> storage     9.96T  2.73T    615    515  76.3M  33.5M
>> storage     9.96T  2.73T    360    492  44.7M  33.7M
>> storage     9.96T  2.73T    388    554  48.3M  38.4M
>> storage     9.96T  2.73T    403    562  50.1M  39.6M
>> storage     9.96T  2.73T    313    468  38.9M  28.0M
>> storage     9.96T  2.73T    462    677  57.3M  22.4M
>> storage     9.96T  2.73T    383    581  47.5M  21.6M
>> storage     9.96T  2.72T    142    571  17.7M  15.4M
>> storage     9.96T  2.72T     80    598  10.0M  18.8M
>> storage     9.96T  2.72T    718    503  89.1M  13.6M
>> storage     9.96T  2.72T    594    517  73.8M  14.1M
>> storage     9.96T  2.72T    367    528  45.6M  15.1M
>> storage     9.96T  2.72T    338    520  41.9M  16.4M
>> storage     9.96T  2.72T    348    499  43.3M  21.5M
>> storage     9.96T  2.72T    398    553  49.4M  14.4M
>> storage     9.96T  2.72T    346    481  43.0M  6.78M
>> 
>> If anything, it's slower.
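>> 
>> For the record, since this is all on one box, the pipeline was roughly
>> the following (dataset, snapshot, and buffer size are placeholders; no
>> -s was given, so mbuffer used its default block size):
>> 
>>  # zfs send storage/data@snap | mbuffer -m 1G | zfs receive storage/copy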
>> 
>> The above was without -s 128.  The following used that setting:
>> 
>> $ zpool iostat 10
>>               capacity     operations    bandwidth
>> pool         used  avail   read  write   read  write
>> ----------  -----  -----  -----  -----  -----  -----
>> storage     9.78T  2.91T  1.98K    137   149M  6.92M
>> storage     9.78T  2.91T    761    577  94.4M  42.6M
>> storage     9.78T  2.91T    462    411  57.4M  24.6M
>> storage     9.78T  2.91T    492    497  61.1M  27.6M
>> storage     9.78T  2.91T    632    446  78.5M  22.5M
>> storage     9.78T  2.91T    554    414  68.7M  21.8M
>> storage     9.78T  2.91T    459    434  57.0M  31.4M
>> storage     9.78T  2.91T    398    570  49.4M  32.7M
>> storage     9.78T  2.91T    338    495  41.9M  26.5M
>> storage     9.78T  2.91T    358    526  44.5M  33.3M
>> storage     9.78T  2.91T    385    555  47.8M  39.8M
>> storage     9.78T  2.91T    271    453  33.6M  23.3M
>> storage     9.78T  2.91T    270    456  33.5M  28.8M
>> 
>> 