Date:      Mon, 7 Mar 2016 14:18:24 +0800
From:      Fred Liu <fred.fliu@gmail.com>
To:        "smartos-discuss@lists.smartos.org" <smartos-discuss@lists.smartos.org>
Cc:        illumos-zfs <zfs@lists.illumos.org>, developer@lists.open-zfs.org,  developer <developer@open-zfs.org>, illumos-developer <developer@lists.illumos.org>,  omnios-discuss <omnios-discuss@lists.omniti.com>,  Discussion list for OpenIndiana <openindiana-discuss@openindiana.org>,  "zfs-discuss@list.zfsonlinux.org" <zfs-discuss@list.zfsonlinux.org>,  "freebsd-fs@FreeBSD.org" <freebsd-fs@freebsd.org>, "zfs-devel@freebsd.org" <zfs-devel@freebsd.org>
Subject:   Re: [zfs] [developer] Re: [smartos-discuss] an interesting survey -- the zpool with most disks you have ever built
Message-ID:  <CALi05Xw1NGqZhXcS4HweX7AK0DU_mm01tj=rjB+qOU9N0-N=ng@mail.gmail.com>
In-Reply-To: <6E2B77D1-E0CA-4901-A6BD-6A22C07536B3@gmail.com>
References:  <95563acb-d27b-4d4b-b8f3-afeb87a3d599@me.com> <CACTb9pxJqk__DPN_pDy4xPvd6ETZtbF9y=B8U7RaeGnn0tKAVQ@mail.gmail.com> <CAJjvXiH9Wh+YKngTvv0XG1HtikWggBDwjr_MCb8=Rf276DZO-Q@mail.gmail.com> <56D87784.4090103@broken.net> <A5A6EA4AE9DCC44F8E7FCB4D6317B1D203178F1DD392@SH-MAIL.ISSI.COM> <5158F354-9636-4031-9536-E99450F312B3@RichardElling.com> <CALi05Xxm9Sdx9dXCU4C8YhUTZOwPY+NQqzmMEn5d0iFeOES6gw@mail.gmail.com> <6E2B77D1-E0CA-4901-A6BD-6A22C07536B3@gmail.com>

2016-03-07 14:04 GMT+08:00 Richard Elling <richard.elling@gmail.com>:

>
> On Mar 6, 2016, at 9:06 PM, Fred Liu <fred.fliu@gmail.com> wrote:
>
>
>
> 2016-03-06 22:49 GMT+08:00 Richard Elling <
> richard.elling@richardelling.com>:
>
>>
>> On Mar 3, 2016, at 8:35 PM, Fred Liu <Fred_Liu@issi.com> wrote:
>>
>> Hi,
>>
>> Today, while reading about Jeff's new nuclear weapon -- DSSD D5's CUBIC
>> RAID -- an interesting survey question popped into my head: what is the
>> zpool with the most disks you have ever built?
>>
>>
>> We test to 2,000 drives. Beyond 2,000 there are some scalability issues
>> that impact failover times.
>> We've identified these and know what to fix, but need a real customer at
>> this scale to bump it to the top of the priority queue.
>>
>> [Fred]: Wow! 2000 drives would need 4~5 whole racks!
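
A quick sanity check of that estimate (a Python sketch; the ~60-drive 4U
JBOD and ~9 JBODs per 42U rack figures are my assumptions, not from the
thread):

    import math

    # Assumed density: ~60-drive 4U JBODs, ~9 usable JBOD slots per 42U
    # rack, i.e. roughly 540 drives per rack.
    drives, drives_per_jbod, jbods_per_rack = 2000, 60, 9
    racks = math.ceil(drives / (drives_per_jbod * jbods_per_rack))
    print(racks)  # -> 4, consistent with the "4~5 whole racks" guess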
>
>>
>> Since ZFS doesn't support nested vdevs, the maximum fault tolerance
>> is three (from raidz3).
>>
>>
>> Pedantically, it is N, because you can have N-way mirroring.
>>
>
> [Fred]: Yeah. That is just pedantic. N-way mirroring of every disk works
> in theory but rarely happens in reality.
>
>>
>> That leaves you stranded if you want to build a very large pool.
>>
>>
>> Scaling redundancy by increasing parity improves data loss protection by
>> about 3 orders of
>> magnitude. Adding capacity by striping reduces data loss protection by
>> 1/N. This is why there is
>> not much need to go beyond raidz3. However, if you do want to go there,
>> adding raidz4+ is
>> relatively easy.
>>
>
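
To make those two effects concrete, a minimal back-of-envelope sketch in
Python (my own illustration, not from the thread): it assumes independent
drive failures, a simple binomial model in which a vdev is lost only when
more than `parity` drives fail within one resilver window, and made-up
values for the vdev width and per-window failure probability.

    from math import comb

    def p_vdev_loss(width, parity, p_drive):
        """P(more than `parity` of `width` drives fail within the same
        resilver window), under a simple binomial model."""
        return sum(comb(width, k) * p_drive**k * (1 - p_drive)**(width - k)
                   for k in range(parity + 1, width + 1))

    p_drive = 0.001  # assumed chance a drive dies during one resilver window
    width = 10       # assumed raidz vdev width

    for parity in (1, 2, 3):
        print(parity, p_vdev_loss(width, parity, p_drive))
    # Each added parity level cuts the loss probability by a few hundred
    # times here; with realistic failure rates this is where the "about
    # 3 orders of magnitude" per parity level comes from.

    # Striping: a pool of N vdevs is lost if ANY single vdev is lost, so
    # the pool's loss probability grows about N-fold -- protection drops
    # by roughly 1/N as capacity is added.
    n_vdevs = 200    # e.g. 2000 drives laid out as 10-wide vdevs
    print(n_vdevs * p_vdev_loss(width, 3, p_drive))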
> [Fred]: I assume you used striped raidz3 vdevs in your storage mesh of
> 2000 drives. If that is true, the probability of four concurrent failures
> among 2000 drives is not so low. Plus, resilvering takes longer as
> single-disk capacity grows. And further, over-provisioning spare disks
> vs. raidz4+ becomes a trade-off worth weighing when the storage mesh is
> at the scale of 2000 drives.
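
One step worth making explicit here: with striped raidz3 vdevs, four
concurrent failures destroy data only if all four land in the same vdev,
so the odds are far lower than "any 4 of 2000" suggests. A short sketch
(the 10-wide vdev layout is my assumption, not from the thread):

    from math import comb

    total_drives, width, parity = 2000, 10, 3  # assumed 10-wide raidz3 layout
    n_vdevs = total_drives // width            # 200 vdevs

    # Given exactly 4 concurrent failures somewhere in the pool, the
    # probability that all 4 hit the same raidz3 vdev:
    p_same_vdev = n_vdevs * comb(width, 4) / comb(total_drives, 4)
    print(p_same_vdev)  # about 6.3e-8

Longer resilver times on bigger disks do widen the window in which those
failures can accumulate, though, so that part of the concern stands.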
>
>
> Please don't assume; you'll just hurt yourself :-)
> For example, do not assume the only option is striping across raidz3
> vdevs. Clearly, there are many
> different options.
>

[Fred]: Yeah. Assumptions always stray far from the facts! ;-) Is the
design of a storage mesh with 2000 drives a business secret? Or is it just
too complicated to elaborate?
Never mind. ;-)

Thanks.

Fred




