Date:      Thu, 18 Dec 2008 18:48:16 -0600 (CST)
From:      Wes Morgan <morganw@chemikals.org>
To:        Matt Simerson <matt@corp.spry.com>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: ZFS performance gains real or imaginary?
Message-ID:  <alpine.BSF.2.00.0812181732440.14585@ibyngvyr.purzvxnyf.bet>
In-Reply-To: <22C8092E-210F-4E91-AA09-CFD38966975C@spry.com>
References:  <22C8092E-210F-4E91-AA09-CFD38966975C@spry.com>

On Thu, 18 Dec 2008, Matt Simerson wrote:

> ZFS under FreeBSD 7 is horrendously slow. It took almost two days to copy 
> 600GB of data (a bunch of MP3s, movies, and UFS backups of my servers in data 
> centers) to the ZFS volume. Once completed, I removed the old disks. The file 
> system performance after switching to ZFS is quite underwhelming. I notice it 
> when doing any sort of writes to it.  This echoes my experience with ZFS on 
> my production backup servers at work. (all systems are multi-core Intel with 
> 4GB+ RAM).

That sounds completely contrary to my experience. I was able to migrate a 
1.3 TB 6-disk raidz to an 8-disk raidz2, so the data had to come off and go
back on. Took about 12-14 hours in total.
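
In case it helps, the send/receive dance for that kind of move looks
roughly like this (the pool and device names here are made up, and
scratch is a spare pool big enough to hold everything):

    zfs snapshot tank/data@migrate
    zfs send tank/data@migrate | zfs receive scratch/data
    zpool destroy tank
    zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7
    zfs send scratch/data@migrate | zfs receive tank/data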

My original setup included an SiS 2-port PCI SATA controller, which was a 
dog. Upgrading to a better setup improved the write performance 
drastically. But I don't think I load my systems down quite as much. I did 
have to upgrade to -current once I went to a board with higher throughput, 
as -stable would eventually deadlock each pool.

>
> On the two systems above (amd64 with 16GB of RAM and 24 1TB disks) I get 
> about 30 days of uptime before the system hangs with a ZFS error.  They write 
> backups to disk 24x7 and never stop. I could not get anything near that level of
> stability with back03 (below) which was much older hardware maxed out at 4GB 
> of RAM.  I finally resolved the stability issues on back03 by ditching ZFS 
> and using geom_stripe across the two hardware RAID arrays.

Were you doing a zfs mirror across two hardware raid arrays? The 
performance of that type of setup would probably be sub-optimal compared 
to a single zpool striped across two raidz vdevs, since ZFS does better 
when it manages the individual disks itself.
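
For illustration, the difference is between something like this (device
names hypothetical), where each hardware array shows up as one big "disk":

    zpool create tank mirror da0 da1

and handing ZFS the raw disks as two raidz vdevs it can stripe across:

    zpool create tank raidz da0 da1 da2 da3 raidz da4 da5 da6 da7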

> Yesterday I did a cvsup to 8-HEAD and built a new kernel and world. I 
> installed the new kernel, and then panicked slightly when I booted off the new
> kernel and the ZFS utilities proved completely worthless in attempts to get 
> /usr and /var mounted (which are both on ZFS). It took a quick Google search 
> to remember the solution:

*cough* ABI compatibility isn't always preserved across releases. The best 
way to go from 7 to 8 is usually to perform the buildworld and 
buildkernel, drop into single user mode and install them both, then 
reboot. However, you're likely to run into problems that would require you 
to export/import your pools.
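
Roughly, assuming a GENERIC kernel and a pool named tank:

    cd /usr/src
    make buildworld && make buildkernel
    # drop to single user mode (shutdown now), then:
    make installkernel && make installworld
    mergemaster                # merge the /etc changes
    reboot
    # if the pool acts up after the reboot:
    zpool export tank && zpool import tank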

> After installing world and rebooting, the system is positively snappy. File 
> system interaction, which is lethargic on every ZFS system I've installed,
> seems to be much faster. I haven't benchmarked the IO performance but 
> something definitely changed. It's almost like the latency has decreased. 
> Would changes committed since mid-August (when I built my last ZFS servers 
> from -HEAD + the patch) and now explain this?
>
> If so, then I really should be upgrading my production ZFS servers to the 
> latest -HEAD.
>
> Matt
>
> PS: I am using compression and getting the following results:
>
> [root@storage] ~ # zfs get compressratio
> NAME                 PROPERTY       VALUE                SOURCE
> tank                 compressratio  1.12x                -
> tank/usr             compressratio  1.12x                -
> tank/usr/.snapshots  compressratio  2.09x                -
> tank/var             compressratio  2.13x                -
>
> In retrospect, I wouldn't bother with compression on /usr. But, 
> /usr/.snapshots is my rsnapshot based backups of my servers sitting in remote 
> data centers. Since the majority of changes between snapshots are log files, 
> the data is quite compressible and ZFS compression is quite effective. It's 
> also quite effective on /var, as is shown. ZFS compression is effectively 
> getting me 1/3 more disk space off my 1.5TB 
> disks.
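
For reference, per-dataset compression like the above is enabled with
zfs set; using the dataset names from Matt's output:

    zfs set compression=on tank/var
    zfs set compression=on tank/usr/.snapshots
    zfs get -r compression,compressratio tank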


