Date:      Thu, 14 Apr 2005 11:44:40 -0700
From:      Benson Wong <tummytech@gmail.com>
To:        emartinez@crockettint.com
Cc:        freebsd-questions@freebsd.org
Subject:   Re: 5.8TB RAID5 SATA Array Questions
Message-ID:  <860807bf0504141144550c3072@mail.gmail.com>
In-Reply-To: <20050414104354.D30DC341FD@mxc1.crockettint.com>
References:  <20050414104354.D30DC341FD@mxc1.crockettint.com>

I'm halfway through a project using about the same amount of storage:
5.6TB on an attached Apple XServe RAID. After everything I have about
4.4TB of usable space, from 14 x 400GB HDDs in 2 RAID5 arrays.

> All,
>
> I have a project in which I have purchased the hardware to build a massive
> file server (specifically for video). The array from all estimates will come
> in at close to 5.8TB after overhead and formatting. Questions are:
>
> What Version of BSD (5.3, 5.4, 4.X)?
If all your hardware is compatible with 5.3-RELEASE, use that. It is
quite stable. I had to upgrade via buildworld to 5.4-STABLE because
the onboard NIC wasn't recognized. Don't use 4.X, since it doesn't
support UFS2 and can't see partitions larger than 1TB. I "sliced" up
my XRAID so it shows up as 4 x 1.1TB arrays, which look like this in
5.x:

/dev/da0c      1.1T     32M    996G     0%    /storage1
/dev/da2c      1.1T     27G    969G     3%    /storage3
/dev/da3c      1.1T    186M    996G     0%    /storage4
/dev/da1c      1.1T    156K    996G     0%    /storage2

These are NFS mounted, and in FBSD 4.9 they look like this:
server:/storage1               -965.4G    32M   996G     0%    /storage1
server:/storage2               -965.4G   156K   996G     0%    /storage2
server:/storage3               -965.4G    27G   969G     3%    /storage3
server:/storage4               -965.4G   186M   996G     0%    /storage4

I'm in the process of slowly migrating all the servers to 5.3.

Also, UFS2 allows lazy inode initialization: it won't allocate all the
inodes at once, only as it needs more. That is a large time savings,
because TB-size partitions will likely have hundreds of millions of
inodes. Each one of my 1.1TB arrays has about 146M inodes!
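To put a rough number on that: newfs's default inode density is one
inode per four fragments, which with the default 2K fragments is 8192
bytes per inode. A quick back-of-the-envelope check (the ~1.2TB
partition size is just an assumed round figure):

```shell
# Back-of-the-envelope inode count at newfs's default density:
# one inode per 4 fragments = 8192 bytes/inode with 2 KiB fragments.
bytes_per_inode=8192
partition_bytes=1200000000000    # assumed ~1.2 TB partition
echo "$partition_bytes $bytes_per_inode" | awk '{ printf "%d\n", $1 / $2 }'
# prints 146484375, i.e. about 146M inodes
```

which lines up with the ~146M figure above.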

>
> What should the stripe size be for the array for speed when laying down
> video streams?

This is more of a 3Ware RAID question. I'm not sure; use a larger
stripe size, since you'll likely be working with large files. For the
FBSD block/fragment size I stuck with the default 16K blocks / 2K
fragments, even though 8K blocks and 1K frags would be more efficient
for what I'm using it for (Maildir storage). I did some benchmarks and
16K/2K performed slightly better. Stick to the defaults.
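For reference, the newfs invocations involved look roughly like this
(the device name is illustrative, and on 5.x a plain newfs already
gives you the 16K/2K defaults):

```sh
# Defaults on 5.x: 16 KiB blocks, 2 KiB fragments (same as plain newfs).
newfs -b 16384 -f 2048 /dev/da0c

# The 8K/1K layout considered for small-file (Maildir) storage:
newfs -b 8192 -f 1024 /dev/da0c
```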

>
> What filesystem?
UFS2.

>
> Are there any limitations that would prevent a single volume that large? (if
> I remember there is a 2TB limit or something)
2TB is the largest for UFS2. 1TB is the largest for UFS1.

>
> The idea is to provide as much network storage as possible as fast as
> possible, any particular service? (SMB. NFS, ETC)

I share it all over NFS. I haven't done extensive testing yet, but NFS
is alright. I just made sure I have lots of NFS server processes and
tuned the clients a bit with nfsiod. I haven't tried SMB, but SMB is
usually quite slow. I'd recommend using whatever your client machines
support and tuning for that.
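For what it's worth, the tuning amounts to rc.conf entries along these
lines (the thread counts are illustrative, not gospel):

```sh
# /etc/rc.conf on the NFS server -- more nfsd threads than the default 4
nfs_server_enable="YES"
nfs_server_flags="-u -t -n 12"
rpcbind_enable="YES"
mountd_enable="YES"

# On the FBSD clients, extra async I/O daemons help parallel NFS traffic:
nfs_client_enable="YES"
nfs_client_flags="-n 8"    # run 8 nfsiod processes
```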

>
> Raid controller: 3Ware 9500S-12MI
I use a 9500S in my system as well. These are quite slow from the
benchmarks I've read.

--
This isn't one of your questions, but I'm going to share it anyway.
After building this new massive email storage system, I concluded that
FreeBSD's large-file-system support is sub-par. I love FreeBSD and I'm
running it on pretty much every server, but progress on multi-TB file
systems is not up to snuff yet, likely because the developers don't
have access to large, expensive disk arrays and equipment. Maybe the
FreeBSD Foundation can throw some $$ towards this.

If you haven't already purchased the equipment, I would recommend going
with an XServe + XRAID, mostly because it will probably be a breeze to
set up and use. The price is at a premium, but for a couple of extra
grand it is worth saving the headaches of configuration.

My network is predominantly FBSD, so I chose FBSD to keep things more
homogeneous and have FBSD NFS talking to FBSD NFS. If I didn't dislike
Linux distros so much, I probably would have used Linux and its
fantastic selection of stable, modern file systems with journaling
support.

Another thing you'll likely run into with FBSD is creating the
partitions. I didn't have much luck with sysinstall/fdisk for creating
the large file systems. My arrays are attached over Fibre Channel, so
you might have more luck. Basically, I had to use disklabel and newfs
from the shell prompt. It worked, but it took a few days of googling
and documentation-scanning to figure it all out.
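In case it saves someone the same googling, the shell-prompt procedure
boils down to something like this (device names and mount points are
illustrative; adapt them to your controller):

```sh
# Label the raw array, then build UFS2 on it by hand.
disklabel -w da0 auto      # write a default label onto the disk
disklabel -e da0           # edit the label; size the data partition
newfs -U /dev/da0c         # 5.x newfs builds UFS2; -U enables soft updates
mount /dev/da0c /storage1
echo "/dev/da0c /storage1 ufs rw 2 2" >> /etc/fstab
```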

Hope that helps. Let me know if you need any more info.

Ben.


