Date:      Tue, 24 Mar 2015 18:42:28 +0100
From:      InterNetX - Juergen Gotteswinter <jg@internetx.com>
To:        FreeBSD FS <freebsd-fs@freebsd.org>,  "freebsd-hardware@freebsd.org" <freebsd-hardware@freebsd.org>
Subject:   Re: Zoned Commands ZBC/ZAC, Shingled SMR drives, ZFS
Message-ID:  <5511A204.7020705@internetx.com>
In-Reply-To: <02F3A553C174554DA1D5EC7CEE9BDDD7011BC3E42B@loki.lvc.com>
References:  <CAD2Ti2_kQhnDL9nvdCT-zMG1bPeLSqTHemn+9hueRDLotmsLmw@mail.gmail.com> <CAPLK-i9OSv4ng-6Bwqc+yne+imDx_veOkwZij=hC1jLrE_ZUJw@mail.gmail.com> <CAFHbX1KLSuES8rJ1Nzho7g8kj-mD_vRAuCyYvdEK9xpz49QBZA@mail.gmail.com> <02F3A553C174554DA1D5EC7CEE9BDDD7011BC3E42B@loki.lvc.com>

The HGST He8 HDDs completed their rebuild in 19 hours and 46 minutes. The
Seagate Archive HDDs completed their rebuild in 57 hours and 13 minutes.

this is ... a feature. right?
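
For context, a quick back-of-the-envelope conversion of those rebuild
times into sustained throughput (a rough sketch only; it assumes both
models are the 8 TB drives from the review and that the whole drive is
rewritten during the rebuild):

    # rebuild_throughput.py - rough rebuild rate from the quoted times.
    # Assumption: both drives are 8 TB and the full capacity is rewritten.
    CAPACITY_BYTES = 8e12  # 8 TB, decimal

    def throughput_mb_s(hours, minutes):
        """Average rebuild rate in MB/s for a full-capacity rewrite."""
        seconds = hours * 3600 + minutes * 60
        return CAPACITY_BYTES / seconds / 1e6

    print("HGST He8:        ~%.0f MB/s" % throughput_mb_s(19, 46))  # ~112 MB/s
    print("Seagate Archive: ~%.0f MB/s" % throughput_mb_s(57, 13))  # ~39 MB/s

Roughly a 2.9x slowdown, presumably the cost of the drive-managed SMR
firmware rewriting shingled zones during the sustained rebuild writes.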

Am 24.03.2015 um 18:05 schrieb Dale Kline:
> READ THE DOCUMENTATION THOROUGHLY on these SMR drives. There are serious WRITE restrictions on these drives because of the overlapping (shingled) tracks. I have read it over several times and am still not sure of all of the caveats. As Tom states below, they are to be used mainly in "WRITE ONCE, READ MANY" environments.
> 
> -----Original Message-----
> From: owner-freebsd-hardware@freebsd.org [mailto:owner-freebsd-hardware@freebsd.org] On Behalf Of Tom Evans
> Sent: Tuesday, March 24, 2015 12:39 PM
> To: Shehbaz Jaffer
> Cc: FreeBSD FS; grarpamp; freebsd-hardware@freebsd.org
> Subject: Re: Zoned Commands ZBC/ZAC, Shingled SMR drives, ZFS
> 
> On Tue, Mar 24, 2015 at 12:47 PM, Shehbaz Jaffer <shehbazjaffer007@gmail.com> wrote:
>> Hi,
>>
>> I was wondering what cost advantage SMR drives provide compared
>> to normal CMR drives?
>>
>> 8TB SMR drive - $260
>> 3TB CMR (Conventional Magnetic Recording) drive - $105
>>
> 
> Purchase price is not irrelevant, but the key benefits are increased capacity per disk and reduced power usage per disk and, multiplied by the increase in capacity, per TB. In other words, the disks consume less power and you need fewer of them, possibly allowing you to run fewer servers.
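
[For what it's worth, the price-per-TB gap implied by the two figures
quoted above is smaller than the raw prices suggest; a quick sketch,
using only the prices given in this thread and ignoring power, slots
and chassis costs:

    # price_per_tb.py - $/TB from the prices quoted in this thread
    drives = {
        "8TB SMR": (260.0, 8),   # price in USD, capacity in TB
        "3TB CMR": (105.0, 3),
    }
    for name, (price, tb) in drives.items():
        print("%-8s $%.2f/TB" % (name, price / tb))
    # ~$32.50/TB for the SMR drive vs ~$35.00/TB for the CMR drive

The bigger savings, as Tom notes, come from needing fewer disks, fewer
slots and fewer watts per TB rather than from the sticker price alone.]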
> 
> Of course, you also need a mainly read-only workload. The RAID rebuild test from the linked review is *scary*. I wouldn't use these in ZFS raidz without plenty of disaster recovery testing - how long does it take to resilver the pool when you lose a disk, and what are the performance characteristics of the pool whilst it is doing so?
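
[One way to put rough numbers on that concern before committing:
estimate the resilver window from the data actually sitting on each
disk and the sustained rebuild rate you measure for the drive. A
minimal sketch; the throughput figures below are assumptions derived
from the rebuild times quoted at the top of this thread, not ZFS
measurements, and real resilvers vary with fragmentation, record size
and pool load:

    # resilver_estimate.py - rough resilver-window estimate for a raidz vdev
    def resilver_hours(used_tb_per_disk, rate_mb_s):
        """Hours to rewrite the used data on one disk at a sustained rate."""
        used_bytes = used_tb_per_disk * 1e12
        return used_bytes / (rate_mb_s * 1e6) / 3600

    # e.g. a vdev holding ~6 TB of data per disk:
    for label, rate in (("CMR-like, ~110 MB/s", 110), ("SMR-like, ~40 MB/s", 40)):
        print("%-20s ~%.0f hours" % (label, resilver_hours(6, rate)))
    # ~15 hours vs ~42 hours of degraded operation

If that window is longer than you are comfortable running degraded,
that argues for raidz2/raidz3 or hot spares, and for rehearsing a
resilver at a realistic pool fill before trusting these drives.]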
> 
> Cheers
> 
> Tom


