Date:      Thu, 28 May 2009 14:37:53 -0700
From:      Alfred Perlstein <alfred@freebsd.org>
To:        Freddie Cash <fjwcash@gmail.com>
Cc:        FreeBSD Stable <freebsd-stable@freebsd.org>
Subject:   Re: A very big Thank You for the inclusion of ZFS
Message-ID:  <20090528213753.GZ67847@elvis.mu.org>
In-Reply-To: <b269bc570905261235i16bbba1bs2f105f6d2c87f5c6@mail.gmail.com>
References:  <b269bc570905261235i16bbba1bs2f105f6d2c87f5c6@mail.gmail.com>

I'm in no way responsible for ZFS, but I wanted to let you know
that emails like this are very awesome to get.  Thank you for the
kind words; it makes FreeBSD development worthwhile when someone
takes the time to send them in.

-Alfred

* Freddie Cash <fjwcash@gmail.com> [090526 12:35] wrote:
> I just wanted to send out a very big THANK YOU to all those who have
> had a hand in bringing ZFS to FreeBSD.  You've done a wonderful job.
> 
> With the release of FreeBSD 7.2, things have improved to the point
> where I can't crash our storage servers anymore (and I've tried all
> the things that would crash 7.0 and 7.1).  Bravo!
> 
> What impressed me even more, though, was just how performant a pool
> of multiple raidz2 vdevs could be.  During a normal backup run (rsync of
> 105 servers each night), we graph sustained reads of 80 MBytes/sec and
> writes of 50 MBytes/sec (via snmpd).  Nothing too spectacular, but
> still quite nice.  Didn't realise just how much of a bottleneck the
> remote network connections are, though.
> 
> Doing a local iozone benchmark, using a command line someone posted
> online as known to crash ZFS on FreeBSD 7.0, I was able to get just
> under 350 MBytes/sec sustained write throughput (as shown by snmpd)
> with over 15 MBytes/sec per drive (as shown by gstat).  Fiddling with
> the iozone options, I was able to push that to over 400 MBytes/sec
> sustained write with just shy of 20 MBytes/sec per drive.  And CPU
> utilisation never went above 40% per core.  System never crashed,
> hung, locked up, or even seemed slow while connected via SSH.
> 
> While those numbers may not seem all that high to some people, for us,
> those are amazing!!  :)  (We've never used SCSI, or RAID0, or RAID10,
> or FibreChannel, or any of the other fancy storage stuff that gives
> uber-high stats.)  This gives us hope for just how many remote sites
> we'll be able to back up to these storage servers (i.e. still lots of
> headroom on the storage side, just need to boost the network side of
> things).
> 
> For the curious, the hardware is:
>   Tyan h2000M motherboard
>   2x dual-core AMD Opteron 2220 CPUs at 2.8 GHz
>   8 GB ECC DDR2-667 SDRAM
>   3Ware 9650SE-12ML PCIe RAID controller
>   3Ware 9550SXU-12ML PCI-X RAID controller (64-bit/133 MHz slot)
> 24x 500 GB WD SATA2 hard drives (12 per controller, configured as
> Single Drives)
>   4-port Intel Pro/1000MT PCIe NIC
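> 
> (A note on the "Single Drives" configuration: each disk on the 3ware
> controllers is exported as its own unit.  With tw_cli that is done
> once per port, something like the following; the controller and port
> numbers here are just illustrative:
> 
>   tw_cli /c0 add type=single disk=0
> 
> That way ZFS sees 24 individual disks rather than one big hardware
> RAID volume.)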
> 
> The software is:
>   64-bit FreeBSD 7.2-RELEASE
>   no kmem tuning
>   ZFS ARC limited to 1 GB via /boot/loader.conf
>   test filesystem has no compression and no atime set
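> 
> Spelled out, those tunings look roughly like this (the pool and
> filesystem names are made up for illustration):
> 
>   # /boot/loader.conf: cap the ARC at 1 GB (value is in bytes)
>   vfs.zfs.arc_max="1073741824"
> 
>   # disable compression and atime on the test filesystem
>   zfs set compression=off tank/test
>   zfs set atime=off tank/test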
> 
> Pool configuration:
>   3 raidz2 vdevs of 8 drives each (1 vdev uses 4-drives from each RAID
> controller, the other 2 vdevs use 8 drives from 1 controller)
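> 
> Assuming the first controller's disks show up as da0-da11 and the
> second's as da12-da23 (device names are hypothetical), creating that
> layout would look something like:
> 
>   zpool create tank \
>     raidz2 da0 da1 da2 da3 da12 da13 da14 da15 \
>     raidz2 da4 da5 da6 da7 da8 da9 da10 da11 \
>     raidz2 da16 da17 da18 da19 da20 da21 da22 da23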
> 
> iozone commands:
>   iozone -M -e -+u -T -t 128 -S 4096 -L 64 -r 4k -s 40g -i 0 -i 1 -i 2
> -i 8 -+p 70 -C  (350 MBytes/sec writes)
>   iozone -M -e -+u -T -t 128 -r 128k -s 4g -i 0 -i 1 -i 2 -i 8 -+p 70
> -C  (400 MBytes/sec write)
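> 
> (A quick gloss on those options: -t sets the number of threads, -s
> the file size per thread, -r the record size, -i 0/1/2/8 selects the
> write, read, random-read/write, and mixed workloads, -+p 70 makes
> the mixed workload 70% reads, and -e includes flush times in the
> measurements.)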
> 
> -- 
> Freddie Cash
> fjwcash@gmail.com
> _______________________________________________
> freebsd-stable@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-stable
> To unsubscribe, send any mail to "freebsd-stable-unsubscribe@freebsd.org"

-- 
- Alfred Perlstein


