From: Borja Marcos
Subject: FreeBSD 10, ServeRAID M5210e, syspd corruption
Date: Fri, 14 Feb 2014 13:44:26 +0100
To: freebsd-scsi@freebsd.org
Cc: freebsd-stable@freebsd.org

(crossposting to -Stable just in case)

Hello,

I am configuring an IBM server with FreeBSD 10-RELEASE, a ServeRAID M5210e and 23 SSD disks.

uname -a
FreeBSD hostname 10.0-RELEASE FreeBSD 10.0-RELEASE #1: Fri Feb 14 09:35:12 CET 2014  toor@hostname:/usr/obj/usr/src/sys/GENERIC  amd64

The server has a SAS backplane and a controller recognized by the mfi driver.

mfi0 Adapter:
    Product Name: ServeRAID M5210e
   Serial Number: 3CJ0SG
        Firmware: 24.0.2-0013
     RAID Levels: JBOD, RAID0, RAID1, RAID10
  Battery Backup: not present
           NVRAM: 32K
  Onboard Memory: 0M
  Minimum Stripe: 64K
  Maximum Stripe: 64K

As I intend to use ZFS, I need direct access to the disks and have no need for fancy RAID features. I have seen that the newest cards support a so-called "syspd" mode that gives direct access to the disks.

However, in this configuration syspd consistently corrupts data on the disks. I have run tests with three models of disk:

- Samsung SSD 840 BB0Q (1 TB)
- OCZ-VERTEX4 1.5 (512 GB)
- SEAGATE ST9146803SS FS03 (136 GB)

In all three cases there is data corruption. Using FFS on the disks results in a panic if I run a benchmark, for example bonnie++. Using ZFS (I have been creating one-disk pools to test) I don't get panics, but the data is consistently corrupted. The writes work, but whenever there is read activity (either bonnie++ reaching the "rewrite" phase, or a ZFS scrub), ZFS detects data corruption.

Trying the über-neat hw.mfi.allow_cam_disk_passthrough tunable (which is great, because ZFS can detect the SSDs and issue TRIM commands) I get the same result: data corruption.

However, I have tried creating a one-disk RAID0 volume, and in that case it works like a charm, no corruption at all, so I can safely assume that this is not a defective backplane, expander or cabling.

So:

mfisyspd -> CORRUPT
da       -> CORRUPT
mfid     -> NOT CORRUPT

Any ideas? It could be a driver error or a firmware problem; I am clueless for now. Is there anything I can test? The machine is not in production, so I can try patches or whatever.

Thanks!!

Borja.
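
P.S. In case it helps to reproduce the da/passthrough case, this is roughly what I did (from memory; "da5" and the pool name are just examples, and the mfip line may be redundant if the module is already in the kernel):

  # /boot/loader.conf
  mfip_load="YES"                          # CAM passthrough module for mfi(4)
  hw.mfi.allow_cam_disk_passthrough=1      # expose the physical disks as da devices

  # after a reboot, build a throwaway one-disk pool and hammer it
  zpool create testpool da5
  bonnie++ -u root -d /testpool

  # read everything back; this is where the checksum errors show up
  zpool scrub testpool
  zpool status -v testpool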
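
P.P.S. The non-corrupting comparison was simply a single-drive RAID0 volume created with mfiutil, with the same test run on the resulting mfid device, along these lines (drive and volume numbers are examples):

  # one-disk RAID0 volume from physical drive 4; shows up as, say, mfid1
  mfiutil create raid0 4
  mfiutil show volumes

  zpool create testpool mfid1
  bonnie++ -u root -d /testpool
  zpool scrub testpool
  zpool status -v testpool      # no checksum errors here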