From: Alexander Nedotsukov <bland@bbnest.net>
To: FreeBSD Current <freebsd-current@freebsd.org>
Subject: ZFS + usb in trouble?
Date: Sat, 19 Jan 2013 23:26:39 +0900

Hi All,

Just a note that after catching up with -current, my zfs pool kissed goodbye.
I'll omit details about its last days and go straight to the final state.

Creating the pool from scratch:

# zpool create tank raidz da{1..3}
# zpool status
  pool: tank
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	tank        ONLINE       0     0     0
	  raidz1-0  ONLINE       0     0     0
	    da1     ONLINE       0     0     0
	    da2     ONLINE       0     0     0
	    da3     ONLINE       0     0     0

errors: No known data errors
# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
tank   140K  3,56T  40,0K  /tank

Let's use some space out of it.

# dd if=/dev/zero of=/tank/foo
^C250939+0 records in
250938+0 records out
128480256 bytes transferred in 30.402453 secs (4225983 bytes/sec)

Oops...

# zpool status
  pool: tank
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
	attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
	using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://illumos.org/msg/ZFS-8000-9P
  scan: scrub repaired 5K in 0h0m with 0 errors on Sat Jan 19 23:11:20 2013
config:

	NAME        STATE     READ WRITE CKSUM
	tank        ONLINE       0     0     0
	  raidz1-0  ONLINE       0     0     0
	    da1     ONLINE       0     0     1
	    da2     ONLINE       0     0     0
	    da3     ONLINE       0     0     1

At some point (once more data has been copied), another scrub run is enough to
trigger new cksum errors and unrecoverable file loss.

I see no error messages from the kernel, and the smartctl output shows zero
error counters. A full memtest cycle came back clean. A kernel built with gcc
suffers from the same symptoms.

I tried creating a raidz pool out of files and it worked fine (even with one
chunk placed on a UFS filesystem made on da0).

Any idea what it can be? The last kernel that worked was from October 2012.

Thanks,
Alexander.
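For anyone chasing intermittent checksum errors like the ones above, a small
helper that totals the CKSUM column of `zpool status` output can make a
scrub-and-recheck loop scriptable. This is only a sketch under assumptions
from the report (pool named "tank", leaf vdevs named daN); the function name
`cksum_total` is mine, not from the original mail.

```shell
#!/bin/sh
# Sketch: sum the CKSUM column across the pool and its vdevs, so a script
# can detect when a scrub has surfaced new checksum errors.
# Assumes device lines look like those in the report above (tank, raidz1-0,
# da1..da3); adjust the pattern for other pool layouts.
cksum_total() {
    awk '$1 ~ /^(da[0-9]+|raidz[0-9]+-[0-9]+|tank)$/ { sum += $NF }
         END { print sum + 0 }'
}
```

Intended use (requires a live pool, so not shown running here):
`zpool scrub tank && zpool status tank | cksum_total` — a non-zero result
after a scrub means new CKSUM errors, matching the failure mode described.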