Date:      Fri, 7 Aug 2009 13:23:41 -0700
From:      Matt Simerson <matt@corp.spry.com>
To:        freebsd-fs@freebsd.org
Cc:        "Hearn, Trevor" <trevor.hearn@Vanderbilt.Edu>
Subject:   Re: UFS Filesystem issues, and the loss of my hair...
Message-ID:  <04A4A2CB-828B-46BF-A2B6-50B64F06E96E@spry.com>
In-Reply-To: <8E9591D8BCB72D4C8DE0884D9A2932DC35BD34C3@ITS-HCWNEM03.ds.Vanderbilt.edu>
References:  <8E9591D8BCB72D4C8DE0884D9A2932DC35BD34C3@ITS-HCWNEM03.ds.Vanderbilt.edu>


On Aug 6, 2009, at 6:51 AM, Hearn, Trevor wrote:

> First off, let me state that I love FreeBSD. I've used it for years,  
> and have not had any major problems with it... Until now.
>
> As you can tell, I work for a major university. I setup a large  
> storage array to hold data for a project they have here. No great  
> shakes, just some standard files and such.
> <snip>
> I'd buy a fella, or gal, a cup of coffee and a pop-tart if they  
> could help a brother out. I have checked out this link:
> http://phaq.phunsites.net/2007/07/01/ufs_dirbad-panic-with-mangled-entries-in-ufs/
> and decided that I need to give this a shot after hours, but being  
> the kinda guy I am, I need to make sure I am covering all of my bases.
>

> Anyone got any ideas?
>
> Thanks!

Have you given any consideration to ZFS?

With ZFS there's no reason to have all those slices. Just stripe the  
two RAID 6 arrays together and have a single 26TB zpool. No GPT or UFS  
to mess with. Just point ZFS at the raw disks and off you go. I'm  
doing that with Areca 1231ML controllers in boxes with 24 disks each.  
The two 12-channel RAID cards each present a single RAID volume to the  
OS, and zpool stripes the two volumes together.
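
A minimal sketch of that setup, assuming the two RAID volumes show up  
as da0 and da1 (your device names will differ):

# zpool create tank da0 da1
# zpool status tank

When you hand zpool create a plain list of devices like this, it  
stripes across them by default; no GPT labels or newfs needed.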

One of the more useful features of ZFS is file system compression. You  
may find that with file system compression, you can get by with 13TB  
of storage.  Then you have one RAID 6 array as the data store and the  
2nd array for backups on each machine. With ZFS, you can send  
snapshots of the data partition to the backup every hour, or even  
every minute without any appreciable impact.

back01# zfs get compression back01/var
NAME        PROPERTY     VALUE       SOURCE
back01/var  compression  gzip        local

back01# zfs get compressratio back01/var
NAME        PROPERTY       VALUE       SOURCE
back01/var  compressratio  2.16x       -

I'm using gzip compression and I fit over twice as much data on the  
filesystem as I'd otherwise be getting. You can get more aggressive  
with gzip-9 if you need.
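
Turning that on is a one-liner. The pool/filesystem name here is just  
a placeholder, and gzip-9 is the most aggressive (and most CPU-hungry)  
variant:

# zfs set compression=gzip-9 tank/data
# zfs get compression,compressratio tank/data

Keep in mind compression only applies to blocks written after the  
property is set; existing data stays as it was.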

You could use your backup server as a proof-of-concept. Install  
FreeBSD 8-BETA2 amd64 on it. Unmount the existing GPT partitions, wipe  
the MBR clean using dd, and create a zpool on just one of the RAID 6  
volumes. Set ZFS compression=gzip on your filesystem and use rsync to  
copy all the files from your 'primary' server. I suspect you'll find  
that you have ample storage. Then you can create another zpool on that  
same box using the other RAID 6 volume for backups. You can experiment  
there with zfs send/receive, or rsnapshot, or whatever you use.
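
Roughly, those steps look like this. The device names (da0/da1), mount  
point, and hostname are placeholders; triple-check which device you're  
pointing dd at before you run it:

# umount /storage                              # unmount the old GPT partitions
# dd if=/dev/zero of=/dev/da0 bs=512 count=34  # wipe the old MBR/GPT label
# zpool create tank da0                        # pool on the first RAID 6 volume
# zfs create tank/data
# zfs set compression=gzip tank/data
# rsync -aH primary:/storage/ /tank/data/      # pull files from the primary
# zpool create backup da1                      # second pool for backup tests
# zfs snapshot tank/data@first
# zfs send tank/data@first | zfs receive backup/data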

Then get a subset of your users to start testing on it and see how it  
fares. I suspect you'll be quite pleased. If it works out wonderfully,  
you can rebuild the other GPT/UFS system on ZFS as well. Set it up  
with both RAID 6 volumes in one ZFS pool and start pushing your  
backups from the primary server to it. Once everything is safely  
backed up, you can add the 2nd RAID 6 volume on the primary server  
into the storage pool to double its disk space.
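
Growing the pool later is a single command, again assuming da1 is the  
freed-up second RAID volume:

# zpool add tank da1

One caveat: zpool add is one-way. Once a vdev is in the pool you can't  
remove it, so make sure the backups are good before you commit.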

Matt
