Date: Thu, 28 Nov 2013 11:55:51 +1100
From: Jan Mikkelsen <janm@transactionware.com>
To: krichy@cflinux.hu
Cc: freebsd-scsi@freebsd.org
Subject: Re: ssd for zfs
Message-ID: <EFA62FB6-5E33-44B6-9D3E-96E9CBF3DA9B@transactionware.com>
In-Reply-To: <0b12c19b8832c72369ff7244d7231846@cflinux.hu>
References: <0b12c19b8832c72369ff7244d7231846@cflinux.hu>
Hi,

Using the drive write cache seems like a really bad idea for a ZIL. The purpose of a ZIL is to keep a log of actions for recovery after a system crash. The drive write cache lets the drive lie to the operating system about whether or not a write has been made durable. If you have a power failure while you have 1400 writes outstanding to the drive, you might find that you have data loss on restart.

For a ZIL you are best off with a drive that has a supercapacitor to ensure all outstanding writes can be completed on power loss. For example, the Intel S3700 series.

Performance on your zpool is probably limited by the number of vdevs you have. More vdevs give more I/O parallelism. If you only have one vdev, you will be limited to single-drive throughput. Depending on the number of drives you have and what you need, you will want either a bunch of mirrored vdevs or a bunch of raidz2 vdevs (if you have enough drives).

I made this mistake early on, thinking a raidz2 vdev alone would give parallelism. You need multiple vdevs.

Regards,

Jan Mikkelsen
janm@transactionware.com

On 28 Nov 2013, at 1:14 am, krichy@cflinux.hu wrote:

> -------- Original message --------
> Subject: Re: ssd for zfs
> Date: 2013-11-27 14:07
> From: Richard Kojedzinszky <krichy@cflinux.hu>
> To: Tom Evans <tevans.uk@googlemail.com>
> Cc: FreeBSD FS <freebsd-fs@freebsd.org>
>
> Dear FS devs,
>
> After some investigation, it turned out that when I turn the write cache off under Linux, performance drops to 100 IOPS on that OS as well. But when it is enabled, 1400 synchronous IOPS can be achieved. So I would like to see the same on FreeBSD. camcontrol shows that the write cache is enabled, but I assume something in this area is causing the performance degradation. Unfortunately I cannot get any further right now.
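[Editor's note: the 100-vs-1400 IOPS figures in this thread come from a loop of the following shape: one small write followed by a flush to stable storage per operation, so each iteration waits on the drive's sync-write latency. A minimal sketch of such a benchmark, in Python for illustration only (the original tests were presumably done with a dedicated tool; the function name and parameters here are made up):]

```python
import os
import tempfile
import time


def sync_write_iops(path, writes=200, size=4096):
    """Measure synchronous write IOPS: one fsync per write, which is
    roughly the pattern a ZIL log device sees for sync workloads."""
    buf = b"\0" * size
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    try:
        start = time.monotonic()
        for _ in range(writes):
            os.write(fd, buf)
            os.fsync(fd)  # force the write through the drive cache
        elapsed = time.monotonic() - start
    finally:
        os.close(fd)
    return writes / elapsed


if __name__ == "__main__":
    with tempfile.NamedTemporaryFile(delete=False) as tf:
        path = tf.name
    try:
        print("%.0f sync write IOPS" % sync_write_iops(path))
    finally:
        os.unlink(path)
```

With the drive write cache enabled, the fsync can return before data is truly durable, which is exactly why the cache inflates the measured number while risking data loss on power failure.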
>
> Regards,
>
> Kojedzinszky Richard
>
> On Wed, 27 Nov 2013, Tom Evans wrote:
>
>> On Wed, Nov 27, 2013 at 8:51 AM, Richard Kojedzinszky <krichy@cflinux.hu> wrote:
>>> Dear fs developers,
>>>
>>> This is probably not the best list to report my issue, but please forward it to where it should go.
>>>
>>> I bought an SSD for my ZFS filesystem to use as a ZIL. I tested it under Linux and found that it can handle around 1400 random synchronous write IOPS. Then I placed it into my FreeBSD 9.2 box, and after attaching it as a ZIL, my zpool only performs 100 (!) write IOPS. I have attached it to an AHCI controller and to an LSI 1068 controller, and it behaves the same on both. So I suspect that something in the SCSI layer is different, and that FreeBSD is driving this device more slowly than it can go, since it handles the 1400 IOPS as tested under Linux.
>>>
>>> Please give some advice on where to go, how to debug, and how to improve FreeBSD's performance with this drive.
>>
>> The ZIL is only used for synchronous writes. The majority of writes are asynchronous, and for those the ZIL is not used at all. Also, a ZIL can only increase IOPS by bundling writes: if your underlying pool is already write-saturated, a ZIL can't help, since any data written to the ZIL still has to end up on the pool.
>>
>> Test the SSD by itself under FreeBSD to rule out FreeBSD not working correctly with the SSD (I doubt this, though).
>>
>> Cheers
>>
>> Tom
> _______________________________________________
> freebsd-scsi@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-scsi
> To unsubscribe, send any mail to "freebsd-scsi-unsubscribe@freebsd.org"
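[Editor's note: the advice in this thread (check the drive's write cache state, prefer multiple vdevs for parallelism, use the SSD as a dedicated log device) can be sketched with the usual FreeBSD/ZFS commands. The device names (ada0, da0..da5) and pool name are placeholders, not from the original messages:]

```sh
# Check whether the drive's write cache is enabled (look for the
# "write cache" line in the ATA feature list).
camcontrol identify ada0 | grep -i "write cache"

# A single raidz2 vdev: good capacity, but small-write IOPS are
# limited to roughly that of a single vdev.
zpool create tank raidz2 da0 da1 da2 da3 da4 da5

# Three mirrored vdevs instead: writes stripe across all three,
# so IOPS scale with the number of vdevs.
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5

# Attach the SSD as a separate ZIL log device.
zpool add tank log ada0
```

These are illustrative only; run them against real hardware with care, as `zpool create` destroys existing data on the named disks.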