Date:      Sat, 27 Sep 2014 23:28:41 -0500
From:      Scott Bennett <>
To:        Andrew Berg <>
Subject:   Re: ZFS and 2 TB disk drive technology :-(
Message-ID:  <>

     Thank you for your reply.
     On Wed, 24 Sep 2014 06:37:30 -0500 Andrew Berg
<> wrote:
>On 2014.09.24 06:08, Scott Bennett wrote:
>>      If anyone reading this has any suggestions for a course of action
>> here, I'd be most interested in reading them.  Thanks in advance for any
>> ideas and also for any corrections if I've misunderstood what a ZFS
>> mirror was supposed to have done to preserve the data and maintain
>> correct operation at the application level.
>I skimmed over the long message, and my first thought is that you have a messed
>up controller that is lying. I've run into such a controller on a hard drive

     Yes, this thought has crossed my mind, too.  I don't think it explains
all of the evidence well, but OTOH, I can't quite rule it out yet either.
The error rate appears to differ from drive to drive.

>enclosure that is supposed to support disks larger than 2TB, but seems to write
>to who knows where when you want a sector beyond 2TB, and the filesystem layer
>has no idea anything is wrong. This is all just an educated guess, but

     I ran into that problem last year when I had a 3 TB drive put into a
case with the interface combination that I wanted.  Someone on the list clued
me in about old controllers, so we checked, and sure enough, the controller
in that case was unable to handle devices larger than 2 TB.  In the current
situation, however, all four drives are 2 TB drives.

>considering you get errors at a level below ZFS (would it be called the CAM
>layer?), my advice would be to check the controllers and perhaps even the disks
>themselves. AFAIK, issues at that layer are rarely software ones.

     Yes, as noted in earlier threads, I've already seen the problem of
undetected write errors on these drives without ZFS being involved, just
UFS2.  I had been planning to set up a gvinum raid5 device until I realized
that protection against the loss of a drive would not protect me from silent
corruption of data on drives that remain in service.  Although running a
parity check on the raid5 device should reveal errors, it would not fix
them, whereas the claim was made that raidzN would fix them.
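(For reference, the gvinum parity operations alluded to above look roughly like this; the plex name is hypothetical:)

```shell
# Hypothetical plex name.  checkparity walks the raid5 plex and
# reports parity mismatches, but it does not touch the data blocks.
gvinum checkparity raid5vol.p0

# rebuildparity rewrites parity from the current data blocks.  It
# makes parity self-consistent again, but with no checksums it
# cannot tell whether it was the data or the parity that went bad.
gvinum rebuildparity raid5vol.p0
```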
     So I decided to try ZFS in hopes that errors would be both detected
when they occurred (they are not) and corrected upon detection (they appear
not to be, regardless of what ZFS scrub results say).
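(The scrub workflow I've been relying on is, in sketch, the following; the pool name is hypothetical:)

```shell
# Hypothetical pool name "backup".  A scrub reads every allocated
# block and verifies its checksum; on a mirror or raidzN vdev, a
# block that fails verification is supposed to be rewritten from a
# good copy when one exists.
zpool scrub backup

# The CKSUM column and the trailing "errors:" line summarize what
# the scrub found; -v additionally lists any files with
# unrecoverable errors.
zpool status -v backup
```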
     Meanwhile, Seagate has not seemed willing to replace working-but-
error-prone drives with drives that have been tested and shown to be
error-free. :-(  That leaves me with my money spent on equipment that
does not work properly.
     I may have to buy another card and swap it with the one that is in the
tower at present, but I'd rather not do that unless I find better evidence
that the problems come from the card I have now, especially given that much
of the evidence thus far gathered points to the quality of the drives as
the culprit.
     I suppose I could reconnect the USB 3.0 drives to USB 2.0 ports and
then repeat my tests, but I'm already kind of fed up with all the delays.
Also, as previously noted, the Western Digital drive is connected via
Firewire 400 and is showing scrub errors as well, albeit comparatively few.
It has been several months now since I last had a place to write backups,
and the lack of recent backups is giving me the heebie-jeebies more and
more by the day.

                                  Scott Bennett, Comm. ASMELG, CFIAG
* Internet:   bennett at   *xor*   bennett at  *
* "A well regulated and disciplined militia, is at all times a good  *
* objection to the introduction of that bane of all free governments *
* -- a standing army."                                               *
*    -- Gov. John Hancock, New York Journal, 28 January 1790         *
