Date:      Tue, 8 Mar 2016 16:05:13 -0800
From:      Liam Slusser <lslusser@gmail.com>
To:        zfs@lists.illumos.org
Cc:        "smartos-discuss@lists.smartos.org" <smartos-discuss@lists.smartos.org>, developer@lists.open-zfs.org, developer <developer@open-zfs.org>, illumos-developer <developer@lists.illumos.org>,  omnios-discuss <omnios-discuss@lists.omniti.com>,  Discussion list for OpenIndiana <openindiana-discuss@openindiana.org>,  "zfs-discuss@list.zfsonlinux.org" <zfs-discuss@list.zfsonlinux.org>,  "freebsd-fs@FreeBSD.org" <freebsd-fs@freebsd.org>, "zfs-devel@freebsd.org" <zfs-devel@freebsd.org>
Subject:   Re: [zfs] [developer] Re: [smartos-discuss] an interesting survey -- the zpool with most disks you have ever built
Message-ID:  <CAESZ+_8JpsAbu=vcpa+DYFwjHzs-7X2QEvaHVH7B=_SPKg971A@mail.gmail.com>
In-Reply-To: <CALi05XzuODjdbmufSfaCEYRmRZiS4T3dwwcD2oW6NLBNZx=Y0Q@mail.gmail.com>
References:  <95563acb-d27b-4d4b-b8f3-afeb87a3d599@me.com> <CACTb9pxJqk__DPN_pDy4xPvd6ETZtbF9y=B8U7RaeGnn0tKAVQ@mail.gmail.com> <CAJjvXiH9Wh+YKngTvv0XG1HtikWggBDwjr_MCb8=Rf276DZO-Q@mail.gmail.com> <56D87784.4090103@broken.net> <A5A6EA4AE9DCC44F8E7FCB4D6317B1D203178F1DD392@SH-MAIL.ISSI.COM> <5158F354-9636-4031-9536-E99450F312B3@RichardElling.com> <CALi05Xxm9Sdx9dXCU4C8YhUTZOwPY+NQqzmMEn5d0iFeOES6gw@mail.gmail.com> <6E2B77D1-E0CA-4901-A6BD-6A22C07536B3@gmail.com> <CALi05Xw1NGqZhXcS4HweX7AK0DU_mm01tj=rjB+qOU9N0-N=ng@mail.gmail.com> <CAESZ+_-+1jKQC880bew-maDyZ_xnMmB7QxPHyKAc_3P44+m+uQ@mail.gmail.com> <CALi05XzuODjdbmufSfaCEYRmRZiS4T3dwwcD2oW6NLBNZx=Y0Q@mail.gmail.com>

Hi Fred -

We don't use any cluster software.  Our backup server is just a full copy
of our data and nothing more, so in the event of a failure of the master,
clients don't automatically fail over or do anything nifty like that.  The
filer isn't customer facing, so a failure of the master has no customer
impact.  We use a slightly modified zrep to handle the replication between
the two.
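(For anyone curious, zrep is basically a wrapper around incremental zfs
send/receive plus some bookkeeping.  A rough sketch of the underlying
pattern, with made-up host, pool, and snapshot names rather than our actual
layout, looks something like this:

  # on the master: snapshot, then send only the delta since the last sync
  zfs snapshot tank/data@rep_00002
  zfs send -i tank/data@rep_00001 tank/data@rep_00002 | \
      ssh backup01 zfs receive -F tank/data

  # once the receive succeeds, the previous snapshot can be pruned on both
  # sides; the new one becomes the base for the next incremental
  zfs destroy tank/data@rep_00001
  ssh backup01 zfs destroy tank/data@rep_00001

zrep layers snapshot naming, locking, and failover handling on top of that,
but the basic flow is just incremental send/receive over ssh.)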

thanks,
liam



> [Fred]: A zpool with 280 drives in production is pretty big! I think the
> 2000 drives were just in test. It is true that huge pools have lots of
> operational challenges; I have run into a similar sluggishness issue
> caused by a dying disk.  Just curious, what is the cluster software
> implemented in
> http://everycity.co.uk/alasdair/2011/05/adjusting-drive-timeouts-with-mdb-on-solaris-or-openindiana/
> ?
>
> Thanks.
>
> Fred
>


