Date:      Mon, 5 Feb 2001 09:26:59 -0600
From:      "Michael C . Wu" <keichii@iteration.net>
To:        hackers@freebsd.org
Cc:        fs@freebsd.org
Subject:   Extremely large (70TB) File system/server planning
Message-ID:  <20010205092658.A97400@peorth.iteration.net>

Hello Everyone,

While talking to a friend about what his company is planning to do,
I found out that he is planning a 70TB filesystem/servers/cluster/db.
(Yes, seventy t-e-r-a-b-y-t-e...)

Apparently, he has files that run up to 2GB each, and he really does
need a cluster of that horrible size.

If he built a PC cluster with 200GB on each PC, he would have
350 machines to maintain.  From past experience maintaining clusters,
I guarantee that he will have at least one box failing every other day.
And I really do not think his idea of using NFS is that good. ;-)
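That "one box every other day" guess can be sanity-checked with rough
numbers.  Everything here is my own assumption (the MTBF figure
especially), not anything from the original poster:

```python
# Back-of-envelope failure estimate for a 350-node cluster.
# Assumption (mine): each commodity box fails about once every 2 years.
machines = 350
mtbf_days = 2 * 365          # assumed per-box mean time between failures

failures_per_day = machines / mtbf_days
days_between_failures = 1 / failures_per_day

print(f"~{failures_per_day:.2f} failures/day, "
      f"i.e. one roughly every {days_between_failures:.1f} days")
```

With those assumptions you get about one failure every two days, which
is exactly the maintenance headache I mean.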

Now if he were to go the high-end route (probably the more cost-effective
one), he could pick a SAN, large Sun file servers, or some such.
I still cannot picture him being able to maintain file integrity.

I say he should split his filesystem into much smaller chunks, say
1TB each, and put each chunk on a RAID 5 array.  Mirroring or other
RAID configurations would prove too costly.
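The cost difference is easy to see in raw-disk terms.  A quick sketch,
with the set size (8 disks per RAID 5 array) being my own illustrative
assumption:

```python
# Raw capacity needed for 70TB usable under RAID 5 vs. mirroring.
# Assumption (mine): RAID 5 sets of 8 disks, so 1 disk per set is parity.
usable_tb = 70.0
raid5_set = 8

raw_raid5 = usable_tb * raid5_set / (raid5_set - 1)   # (N-1)/N efficiency
raw_mirror = usable_tb * 2                            # mirroring: 50% efficiency
chunks = usable_tb / 1.0                              # number of 1TB chunks

print(f"RAID 5 raw: {raw_raid5:.0f}TB, mirrored raw: {raw_mirror:.0f}TB, "
      f"{chunks:.0f} chunks of 1TB")
```

So mirroring needs 140TB raw against 80TB for 8-disk RAID 5 sets, a
60TB difference, which is why I call mirroring too costly here.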
What would you guys do in this case? :)
-- 
+------------------------------------------------------------------+
| keichii@peorth.iteration.net         | keichii@bsdconspiracy.net |
| http://peorth.iteration.net/~keichii | Yes, BSD is a conspiracy. |
+------------------------------------------------------------------+


To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-fs" in the body of the message
