Date: Mon, 18 Mar 2013 19:28:07 -0000
From: "Steven Hartland" <killing@multiplay.co.uk>
To: <davide.damico@contactlab.com>
Cc: freebsd-fs@freebsd.org
Subject: Re: FreeBSD 9.1 and ZFS v28 performances
Message-ID: <897DB64CEBAF4F04AE9C76B3F686E497@multiplay.co.uk>
References: <514729BD.2000608@contactlab.com> <810E5C08C2D149DBAC94E30678234995@multiplay.co.uk> <51473D1D.3050306@contactlab.com> <1DD6360145924BE0ABF2D0979287F5F4@multiplay.co.uk> <51474F2F.5040003@contactlab.com> <E106A7DB08744581A08C610BD8A86560@multiplay.co.uk> <51475267.1050204@contactlab.com> <514757DD.9030705@contactlab.com> <42B9D942BA134E16AFDDB564858CA007@multiplay.co.uk> <1bfdea0efb95a7e06554dadf703d58e7@sys.tomatointeractive.it>
----- Original Message -----
From: "Davide D'Amico" <davide.damico@contactlab.com>

>> How does ZFS compare if you do it on 1 SSD, as per your second
>> UFS test? I'm wondering if the mfi cache is kicking in.
>
> Well, it was a test :)
>
> The MFI cache is enabled because I am using mfid* as JBOD (mfiutil
> create jbod mfid3 mfid4 mfid5 mfid6):

Don't use mfiutil to do this: it doesn't work, as it creates
single-drive mirrors rather than real JBODs. Use MegaCli instead to
create real JBODs, e.g.:

MegaCli -AdpSetProp -EnableJBOD -1 -aALL

>> While running the tests, what sort of thing are you
>> seeing from gstat? Are any disks maxing out? If so, primarily
>> read or write?
>
> Here is the r/w pattern using zpool iostat 2:
>
> DATA  52.2G  1.03T    102      0  1.60M      0
> DATA  52.2G  1.03T      7    105   128K   674K
> ...
> DATA  52.2G  1.03T      0     97      0   402K
>
> And the result from sysbench:
>
> General statistics:
>     total time:                          82.9567s
>     total number of events:              1
>     total time taken by event execution: 82.9545s

That's hardly doing any disk access at all, so it's odd that it would
be doubling your benchmark time.

> Using a SSD:
>
> # iostat mfid2 -x 2
>        tty            mfid2             cpu
>  tin  tout  KB/t   tps   MB/s  us ni sy in id
>    0    32 125.21    31   3.84   0  0  0  0 99
>    0   170   0.00     0   0.00   1  0  0  0 99
>    0    22   0.00     0   0.00   3  0  2  0 96
>    0    22   0.00     0   0.00   3  0  1  0 96
>    0    22  32.00     2   0.08   3  0  1  0 96
>    0    22  32.00     0   0.02   3  0  1  0 96
>    0    22   4.00     0   0.00   3  0  1  0 96
>    0    22   0.00     0   0.00   3  0  1  0 96
>    0    22   0.00     0   0.00   3  0  2  0 96
>    0    22   0.00     0   0.00   3  0  1  0 96
>    0    22   0.00     0   0.00   3  0  1  0 96
>    0    22   0.00     0   0.00   3  0  1  0 96
>    0    22   0.00     0   0.00   3  0  1  0 96
>    0    22   0.00     0   0.00   3  0  1  0 96
>    0    22   0.00     0   0.00   3  0  1  0 96
>    0    22   0.00     0   0.00   3  0  2  0 96
>    0    22  44.80    67   2.95   3  0  1  0 96
>    0    22  87.58     9   0.81   3  0  2  0 96
>    0    22  32.00     3   0.09   2  0  2  0 96
>    0   585   0.00     0   0.00   3  0  1  0 96
>    0    22   4.00     0   0.00   0  0  0  0 100
>
> And the result from sysbench:
>
> General statistics:
>     total time:                          36.1146s
>     total number of events:              1
>     total time taken by event execution: 36.1123s
>
> These are the same results I get using SAS disks.
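[For reference, a sketch of the full MegaCli JBOD procedure being
suggested above. The adapter number (-a0) and the enclosure:slot IDs
(252:2 through 252:5) are placeholder examples, not values from this
thread; the real IDs come from the -PDList output.]

# Enable JBOD support on all adapters (the command quoted above)
MegaCli -AdpSetProp -EnableJBOD -1 -aALL

# Find each drive's enclosure and slot ID
MegaCli -PDList -aALL | egrep 'Enclosure Device ID|Slot Number'

# Mark the drives as JBOD; [252:2,...] are example enclosure:slot pairs
MegaCli -PDMakeJBOD -PhysDrv '[252:2,252:3,252:4,252:5]' -a0

After this, each drive should appear to FreeBSD as its own mfid device
with no mirror volume wrapped around it.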
So this is ZFS on the SSD, giving the same benchmark results as UFS?

    Regards
    Steve

================================================
This e-mail is private and confidential between Multiplay (UK) Ltd.
and the person or entity to whom it is addressed. In the event of
misdirection, the recipient is prohibited from using, copying, printing
or otherwise disseminating it or any information contained in it. In
the event of misdirection, illegible or incomplete transmission please
telephone +44 845 868 1337 or return the e-mail to
postmaster@multiplay.co.uk.