Date:      Tue, 2 Sep 2003 13:09:03 -0700
From:      Brooks Davis <brooks@one-eyed-alien.net>
To:        Max Clark <max.clark@media.net>
Cc:        freebsd-questions@freebsd.org
Subject:   Re: FW: 20TB Storage System
Message-ID:  <20030902200903.GA31697@Odin.AC.HMC.Edu>
In-Reply-To: <ILENIMHFIPIBHJLCDEHKIEMEDCAA.max.clark@media.net>
References:  <ILENIMHFIPIBHJLCDEHKIEMEDCAA.max.clark@media.net>

[This isn't really a performance issue so I trimmed it.]

On Tue, Sep 02, 2003 at 12:48:29PM -0700, Max Clark wrote:
> I need to attach 20TB of storage to a network (as low cost as possible), I
> need to sustain 250Mbit/s or 30MByte/s of sustained IO from the storage to
> the disk.
>
> I have found external Fibre Channel -> ATA 133 Raid enclosures. These
> enclosures will house 16 drives so with 250GB drives a total of 3.5TB each
> after a RAID 5 format. These enclosures have advertised sustained IO of
> 90-100MByte/s each.
>
> One solution we are thinking about is to use an Intel XEON server with 3x FC
> HBA controller cards in the server each attached to a separate storage
> enclosure. In any event we would be required to use ccd or vinum to stripe
> multiple storage enclosures together to form one logical volume.
>
> I can partition this system into two separate 10TB storage pools.
>
> Given the above:
> 1) What would my expected IO be using vinum to stripe the storage enclosures
> detailed above?
> 2) What is the maximum size of a filesystem that I can present to the host
> OS using vinum/ccd? Am I limited anywhere that I am not aware of?

Paul Saab recently demonstrated a 2.7TB ccd, so you shouldn't hit any
major limits there (I'm not sure where the next barrier is, but it
should be a ways off).  I'm not sure about UFS.
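
Back-of-the-envelope math on the sizes (a quick Python sketch using the
drive and enclosure numbers from your message, not measured figures):

    import math

    drive_gb = 250
    drives_per_enclosure = 16
    # RAID 5 costs roughly one drive's worth of parity per enclosure.
    usable_gb = (drives_per_enclosure - 1) * drive_gb  # 3750 GB, the ~3.5TB advertised
    enclosures = math.ceil(20000 / usable_gb)          # 6 enclosures for 20TB
    per_pool_gb = (enclosures // 2) * usable_gb        # 3 enclosures striped per pool
    print(enclosures, per_pool_gb)                     # 6 enclosures, ~11250 GB per pool

So each 10TB pool would be a single striped volume of roughly 11TB,
well past the 2.7TB that has actually been demonstrated.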

> 3) Could I put all 20TB on one system, or will I need two to sustain the IO
> required?

In theory you should be able to do 250Mbps on a single system, but I'm
not sure how well you will do in practice.  You'll need to make sure you
have sufficient PCI bus bandwidth.
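
Rough numbers (a Python sketch; the PCI figures are theoretical bus
maxima, so real-world throughput will be lower):

    required_mb_s = 250 / 8.0      # 250Mbit/s is roughly 31 MB/s
    pci_32_33 = 133                # 32-bit/33MHz PCI, theoretical MB/s
    pci_64_66 = 533                # 64-bit/66MHz PCI, theoretical MB/s
    # Data crosses the bus once coming in from the FC HBAs and again
    # going out the NIC, so budget about twice the required rate if
    # they share a bus.
    bus_load = 2 * required_mb_s   # ~62 MB/s
    print(bus_load / pci_32_33)    # ~0.47 of a plain 32-bit/33MHz bus
    print(bus_load / pci_64_66)    # ~0.12 of a 64-bit/66MHz bus

So a board with 64-bit PCI slots, or separate buses for the HBAs and
the NIC, should have plenty of headroom on paper.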

> 4) If you were building this system how would you do it? (The installed $/GB
> must be below $5.00).

If you are willing to accept the management overhead of multiple
volumes, you will have a hard time beating 5U 24-disk boxes with three
8-port 3ware arrays of 300GB disks.  That gets you 6TB per box (the
controller limits you to 2TB per array) for a bit under $15000, or
$2.5/GB.  The raw read speed of the arrays is around
85MBps, so each array easily meets your throughput requirements.  Since
you'd have 12 arrays across 4 machines, you'd easily meet your bandwidth
requirements.  If you can't accept multiple volumes, you may still be
able to use a configuration like this with either target mode drivers
or the disk over network GEOM module that was posted recently.
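
The arithmetic behind that, as a quick Python sketch (the $15000 is a
rough street price, not a quote):

    arrays_per_box = 3
    tb_per_array = 2                     # 2TB limit per 3ware controller
    tb_per_box = arrays_per_box * tb_per_array        # 6TB per box
    price_per_box = 15000.0              # "a bit under $15000"
    print(price_per_box / (tb_per_box * 1000))        # $2.50/GB
    boxes = 4                            # 4 boxes give 24TB, covering 20TB
    array_read_mb_s = 85                 # raw read speed per array
    print(boxes * arrays_per_box * array_read_mb_s)   # ~1020 MB/s aggregate

That aggregate is far beyond the 30MByte/s you need, so the arrays
themselves won't be the bottleneck.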

You will need to use 5.x to make this work.

-- Brooks
