From: Andre Oppermann <andre@freebsd.org>
Date: Thu, 02 Dec 2004 18:41:53 +0100
To: Sam
Cc: hackers@freebsd.org, Scott Long, "current@freebsd.org"
Subject: Re: My project wish-list for the next 12 months

Sam wrote:
> On Thu, 2 Dec 2004, Andre Oppermann wrote:
>
>> Scott Long wrote:
>>
>>> 5. Clustered FS support.  SANs are all the rage these days, and
>>> clustered filesystems that allow data to be distributed across many
>>> storage endpoints and accessed concurrently through the SAN are very
>>> powerful.  RedHat recently bought Sistina and re-opened the GFS
>>> source code, so exploring this would be very interesting.
>>
>> There are certain steps that can be taken one at a time.  For
>> example, it should be relatively easy to mount snapshots (ro) from
>> more than one machine.  The next step would be to mount a filesystem
>> that is 'rw' on one box as 'ro' on other boxes.  This would require
>> broadcasting cache and sector invalidations from the 'rw' box to the
>> 'ro' mounts.  The holy grail, of course, is to mount the same
>> filesystem 'rw' on more than one box, preferably more than two.
>> That requires more involved synchronization and locking on top of
>> the cache invalidation, plus making sure the multi-'rw' cluster
>> stays alive if one of the participants freezes and stops responding.
>>
>> Scrolling through the UFS/FFS code, I think the first one is 2-3
>> days of work, the second 2-4 weeks, and the third 2-3 months to get
>> right.  If someone would put up the money...
>
> You might also design in consideration for data redundancy.  Right
> now GFS largely relies on the SAN box to export already-redundant
> RAID disks.  GFS sits on a "cluster-aware" LVM layer that is
> supposed to be able to do mirroring and striping, but I'm told it's
> not stable enough for production use.

Data redundancy would require a UFS/FFS redesign.  I'm 'only' talking
about enhancing UFS/FFS while keeping the on-disk format the same
(plus some additional elements).

--
Andre
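
P.S.  To make the second step a bit more concrete, here is a purely
illustrative sketch of the kind of message the 'rw' box might
broadcast to its 'ro' peers.  Nothing like this exists in UFS/FFS
today; every name below is made up.

    /*
     * Hypothetical invalidation broadcast for the 'rw' -> 'ro'
     * scenario.  The 'rw' node would send one of these to every
     * 'ro' peer whenever it dirties on-disk state, so the peers
     * can drop any stale cached copies of those sectors.
     */
    #include <stdint.h>

    struct ffs_inval_msg {
            uint32_t  im_fsid;    /* which shared filesystem */
            uint64_t  im_blkno;   /* first invalidated sector */
            uint32_t  im_blkcnt;  /* number of sectors */
            uint64_t  im_seq;     /* per-sender sequence number */
    };

The sequence number is the interesting part: if an 'ro' peer sees a
gap in im_seq it knows it missed a broadcast and must fall back to
flushing its whole cache for that filesystem, which keeps a lost
message from silently serving stale data.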