Date:      Tue, 29 Sep 2015 15:38:05 -0500
From:      Graham Allan <allan@physics.umn.edu>
To:        Karli Sjöberg <karli.sjoberg@slu.se>
Cc:        "freebsd-fs@freebsd.org" <freebsd-fs@freebsd.org>
Subject:   Re: Cannot replace broken hard drive with LSI HBA
Message-ID:  <560AF6AD.3010803@physics.umn.edu>
In-Reply-To: <1443507440.5271.72.camel@data-b104.adm.slu.se>
References:  <1443447383.5271.66.camel@data-b104.adm.slu.se> <5609578E.1050606@physics.umn.edu> <1443507440.5271.72.camel@data-b104.adm.slu.se>

On 9/29/2015 1:17 AM, Karli Sjöberg wrote:
>>
>> Regarding your experience with firmware 20, I believe it is "known bad",
>> though some seem to disagree. Certainly when building my recent-ish
>> large 9.3 servers I specifically tested it and got consistent data
>> corruption. There is now a newer release of firmware 20, "20.00.04.00",
>> which seems to be fixed - see this thread:
>>
>> https://lists.freebsd.org/pipermail/freebsd-scsi/2015-August/006793.html
>
> No, firmware 20.00.04.00 and driver 20.00.00.00-fbsd was the one that
> was used when ZFS freaked out, so it's definitely not fixed.
>
> I think this calls for a bug report.

That is curious, since I could rapidly get data corruption with
firmware 20.00.00.00, yet ran a stress test for about a week with
20.00.04.00 with no issues. That was under FreeBSD 9.3; I have since
updated my test system to 10.2, and it has been running the same
stress test for 4-5 hours, again with no issues. I don't doubt your
experience at all, of course, but I wonder what is different?
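
(A sketch of the sort of write-then-verify loop I have in mind is
below - not my exact script; the pool path, file count, and sizes
are invented for illustration:)

#!/usr/bin/env python
# Hypothetical write/read-back integrity check against a ZFS dataset.
# Adjust POOL_DIR to a dataset on the suspect pool.
import hashlib
import os

POOL_DIR = "/tank/stress"        # assumed dataset mountpoint
FILE_SIZE = 256 * 1024 * 1024    # 256 MiB per file
NFILES = 32

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

if not os.path.isdir(POOL_DIR):
    os.makedirs(POOL_DIR)
expected = {}

# Write phase: fill each file with random data, record its checksum.
for i in range(NFILES):
    path = os.path.join(POOL_DIR, "stress%03d.dat" % i)
    with open(path, "wb") as f:
        remaining = FILE_SIZE
        while remaining > 0:
            buf = os.urandom(min(1 << 20, remaining))
            f.write(buf)
            remaining -= len(buf)
    expected[path] = sha256_of(path)

# Verify phase: re-read everything and compare. Note an immediate
# re-read may be served from ARC; export/import the pool (or run
# long enough to evict the cache) so reads actually hit the disks.
for path, digest in sorted(expected.items()):
    if sha256_of(path) != digest:
        print("CORRUPTION: %s" % path)

(Running several of these in parallel, plus a "zpool scrub"
afterwards and a check of "zpool status" for checksum errors, is
roughly what I mean by a stress test.)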

For what it's worth, my test machine is a Dell R610 with Dell TYVGC HBA 
(unclear whether this is a 9207-8e or 9205-8e), and WD Red drives in a 
Supermicro SC847 chassis.
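
If you want to compare exact versions, the mps(4) driver exposes the
firmware and driver revisions via sysctl; a quick sketch (the unit
number 0 is an assumption for whichever controller you have - check
"sysctl dev.mps.0" if the names differ on your system):

#!/usr/bin/env python
# Print the firmware/driver versions reported by mps(4).
import subprocess

for oid in ("dev.mps.0.firmware_version", "dev.mps.0.driver_version"):
    try:
        val = subprocess.check_output(["sysctl", "-n", oid]).decode().strip()
    except subprocess.CalledProcessError:
        val = "(not present on this unit)"
    print("%s = %s" % (oid, val))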

Graham



