Date:      Tue, 12 Jan 2010 22:49:46 +0200
From:      Dan Naumov <dan.naumov@gmail.com>
To:        freebsd@o2.pl, FreeBSD-STABLE Mailing List <freebsd-stable@freebsd.org>,  freebsd-questions@freebsd.org, freebsd-fs@freebsd.org
Subject:   Re: ZFS on top of GELI
Message-ID:  <cf9b1ee01001121249k7a31ce9fufd1d499d4f08d691@mail.gmail.com>
In-Reply-To: <201001122127.56716.freebsd@o2.pl>
References:  <cf9b1ee01001100708m7851418cmbb77cc3580d0fab3@mail.gmail.com> <201001120100.16631.freebsd@o2.pl> <cf9b1ee01001111615m21ad462elc0d40a66e3913198@mail.gmail.com> <201001122127.56716.freebsd@o2.pl>

2010/1/12 Rafał Jackiewicz <freebsd@o2.pl>:
>>Thanks, could you do the same, but using 2 .eli vdevs mirrorred
>>together in a zfs mirror?
>>
>>- Sincerely,
>>Dan Naumov
>
> Hi,
>
> Proc: Intel Atom 330 (2x1.6GHz) - 1 package(s) x 2 core(s) x 2 HTT threads
> Chipset: Intel 82945G
> Sys: 8.0-RELEASE FreeBSD 8.0-RELEASE #0
> empty file: /boot/loader.conf
> Hdd:
>   ad4: 953869MB <Seagate ST31000533CS SC15> at ata2-master SATA150
>   ad6: 953869MB <Seagate ST31000533CS SC15> at ata3-master SATA150
> Geli:
>   geli init -s 4096 -K /etc/keys/ad4s2.key /dev/ad4s2
>   geli init -s 4096 -K /etc/keys/ad6s2.key /dev/ad6s2
>
>
> Results:
> ****************************************************
>
> *** single drive               write MB/s      read MB/s
> eli.journal.ufs2               23              14
> eli.zfs                        19              36
>
>
> *** mirror                     write MB/s      read MB/s
> mirror.eli.journal.ufs2        23              16
> eli.zfs                        31              40
> zfs                            83              79
>
>
> *** degraded mirror            write MB/s      read MB/s
> mirror.eli.journal.ufs2        16              9
> eli.zfs                        56              40
> zfs                            86              71
>
> ****************************************************

Thanks a lot for your numbers, the relevant part for me was this:

*** mirror                      write MB/s      read MB/s
eli.zfs                         31              40
zfs                             83              79

*** degraded mirror             write MB/s      read MB/s
eli.zfs                         56              40
zfs                             86              71

31 MB/s writes and 40 MB/s reads are numbers I guess I could
potentially live with. I am guessing the main cost of stacking ZFS
on top of GELI like this is that writing to a mirror requires
double the CPU work: all written data has to be encrypted twice
(once for each disk), instead of being encrypted once and then
written to both disks, as would be the case if the crypto were
sitting on top of ZFS instead of ZFS sitting on top of the crypto.
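For reference, the ZFS-on-GELI stacking benchmarked above would be built
roughly like this (key paths and device names follow Rafał's quoted setup;
the pool name "tank" is just a placeholder):

```shell
# Attach the two GELI providers created with "geli init" above;
# these become the decrypting layer underneath ZFS.
geli attach -k /etc/keys/ad4s2.key /dev/ad4s2
geli attach -k /etc/keys/ad6s2.key /dev/ad6s2

# Build the ZFS mirror on top of the .eli devices. Every write to the
# pool goes through GELI once per underlying provider, hence the
# double encryption cost described above.
zpool create tank mirror /dev/ad4s2.eli /dev/ad6s2.eli
```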

I now have to reevaluate my planned use of an SSD, though. I was
planning to use a 40GB partition on an 80GB Intel X25-M G2 as a
dedicated L2ARC device for a ZFS mirror of 2 x 2TB disks. However,
these numbers make it quite obvious that I would already be
CPU-starved at 40-50 MB/s throughput on the encrypted ZFS mirror, so
adding an L2ARC SSD, while improving latency, would do essentially
nothing for actual disk read speeds, considering the L2ARC itself
would also have to sit on top of a GELI device.
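If I went ahead with it anyway, the cache device would be layered the same
way (ad8s1 and the pool name "tank" are hypothetical here, following the
geli parameters from the quoted setup):

```shell
# Encrypt the SSD partition and add it to the pool as L2ARC;
# cache reads would pass through GELI just like the mirror does.
geli init -s 4096 -K /etc/keys/ad8s1.key /dev/ad8s1
geli attach -k /etc/keys/ad8s1.key /dev/ad8s1
zpool add tank cache /dev/ad8s1.eli
```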

- Sincerely,
Dan Naumov


