From: Ajit Jain <ajit.jain@cloudbyte.com>
Date: Wed, 15 May 2013 13:59:55 +0530
To: Steven Hartland
Cc: freebsd-fs <freebsd-fs@freebsd.org>
Subject: Re: seeing data corruption with zfs trim functionality
References: <60316751643743738AB83DABC6A5934B@multiplay.co.uk> <20130429105143.GA1492@icarus.home.lan> <3AD1AB31003D49B2BF2EA7DD411B38A2@multiplay.co.uk> <9681E07546D348168052D4FC5365B4CD@multiplay.co.uk>
List-Id: Filesystems (freebsd-fs@freebsd.org)
Hi Steve,

One more thing: I am not seeing the data corruption with a SATA SSD (Kingston).
The issue was seen on a SAS SSD, i.e. a Seagate PULSAR ST100FM0002.

regards,
ajit

On Wed, May 15, 2013 at 1:49 PM, Ajit Jain wrote:
> Hi Steven,
>
> Please find the tar ball of the source code and the binary of the test
> utility attached to this mail.
>
> Steps of the test:
> 1. Enable zfs trim in /boot/loader.conf (vfs.zfs.trim_disable=0).
> 2. Set the delete method of the ssd device to UNMAP or WS16.
> 3. Create a pool (and optionally a dataset) on the device.
> 4. Run the iotest utility with a thread count of 10 (-t option), a file
>    size of at least 5GB, seconds to execute of at least 500 (-T option),
>    and writes as 100% (-W option).
>
> regards,
> ajit
>
>
> On Wed, May 15, 2013 at 12:56 PM, Steven Hartland wrote:
>
>> Could you provide us with details on the tests you're using so we can
>> run them here on current sources and see if we see any issues?
>>
>> Regards
>> Steve
>>
>> ----- Original Message ----- From: "Ajit Jain"
>> To: "Steven Hartland"
>> Cc: "freebsd-fs"
>> Sent: Wednesday, May 15, 2013 6:47 AM
>> Subject: Re: seeing data corruption with zfs trim functionality
>>
>>> Hi Steven,
>>>
>>> Thanks for the follow-up.
>>> The code into which I pulled the zfs trim patches is not updated to
>>> 9-stable, especially the cam directory. I pulled in many dependent
>>> patches in order to apply the patches that you gave. After that, all
>>> da devices were marked CAM_PERIPH_INVALID in dadone() because READ
>>> CAPACITY was returning a very big number (bigger than MAXPHYS) for
>>> the block size. I think this is because I have not updated the code
>>> to 9-stable (I only pulled in the required patches and missed some
>>> patches).
>>>
>>> So, I am planning to first update my code to 9-stable and then try
>>> the same test again. That might take some time.
>>>
>>> thanks again,
>>> ajit
>>>
>>>
>>> On Wed, May 15, 2013 at 2:40 AM, Steven Hartland
>>> <killing@multiplay.co.uk> wrote:
>>>
>>>> ----- Original Message ----- From: "Steven Hartland"
>>>>
>>>>> What version are you porting the changes to?
>>>>>
>>>>>>> What SSD are you using?
>>>>>>>
>>>>>>> What LSI controller are you using?
>>>>>>
>>>>>> I'd also like to see "zpool status" (for every pool that involves
>>>>>> this SSD) and "gpart show" against the disk itself.
>>>>>
>>>>> Also:
>>>>> 1. What FW version is your LSI? You can get this from dmesg.
>>>>> 2. The exact command line you're running iotest with?
>>>>
>>>> Any update on this? I'd like to try and replicate your test here so
>>>> would appreciate as much information as possible.
>>>>
>>>> Regards
>>>> Steve
>>>>
>>>> ================================================
>>>> This e.mail is private and confidential between Multiplay (UK) Ltd.
>>>> and the person or entity to whom it is addressed. In the event of
>>>> misdirection, the recipient is prohibited from using, copying,
>>>> printing or otherwise disseminating it or any information contained
>>>> in it. In the event of misdirection, illegible or incomplete
>>>> transmission please telephone +44 845 868 1337 or return the E.mail
>>>> to postmaster@multiplay.co.uk.
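
[Archive note] The write-then-verify procedure the iotest utility is described as performing (N writer threads, each writing a known pattern and reading it back to detect corruption) can be sketched roughly as below. This is a hypothetical, scaled-down Python analogue for illustration only, not the actual iotest tool; all names and sizes (THREADS, BLOCK_SIZE, etc.) are illustrative assumptions.

```python
# Hypothetical, scaled-down analogue of a multithreaded write/verify
# tester: each thread fills its own region of a shared file with a
# deterministic per-block pattern, then reads it back and compares.
# A mismatch is the kind of silent corruption reported in this thread.
import hashlib
import os
import tempfile
import threading

THREADS = 10            # mirrors "-t 10" in the described test
BLOCKS_PER_THREAD = 64  # scaled down from the 5GB file in the thread
BLOCK_SIZE = 4096

def pattern(thread_id: int, block_no: int) -> bytes:
    """Deterministic per-block payload so any corruption is detectable."""
    digest = hashlib.sha256(f"{thread_id}:{block_no}".encode()).digest()
    return (digest * (BLOCK_SIZE // len(digest) + 1))[:BLOCK_SIZE]

def writer(path: str, thread_id: int, errors: list) -> None:
    base = thread_id * BLOCKS_PER_THREAD * BLOCK_SIZE
    # Write phase: fill this thread's region with known data.
    with open(path, "r+b") as f:
        for b in range(BLOCKS_PER_THREAD):
            f.seek(base + b * BLOCK_SIZE)
            f.write(pattern(thread_id, b))
        f.flush()
        os.fsync(f.fileno())
    # Verify phase: read back and compare against the expected pattern.
    with open(path, "rb") as f:
        for b in range(BLOCKS_PER_THREAD):
            f.seek(base + b * BLOCK_SIZE)
            if f.read(BLOCK_SIZE) != pattern(thread_id, b):
                errors.append((thread_id, b))

def run_iotest() -> int:
    """Run all writer threads; return the number of corrupt blocks found."""
    size = THREADS * BLOCKS_PER_THREAD * BLOCK_SIZE
    fd, path = tempfile.mkstemp()
    os.ftruncate(fd, size)
    os.close(fd)
    errors: list = []
    threads = [threading.Thread(target=writer, args=(path, t, errors))
               for t in range(THREADS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    os.unlink(path)
    return len(errors)

if __name__ == "__main__":
    print("corrupt blocks:", run_iotest())
```

On a healthy filesystem this reports zero corrupt blocks; the thread's report is that an equivalent workload found mismatches on a SAS SSD with ZFS TRIM enabled.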