From: secmgr <security@jim-liesl.org>
To: miha@ghuug.org
Cc: freebsd-stable@freebsd.org
Date: 26 Oct 2004 23:33:43 -0600
Subject: Re: question on vinum

On Tue, 2004-10-26 at 18:27, Mikhail P. wrote:
> I haven't worked with Vinum previously, but hear a lot about it. My question
> is how to implement the above (unite four drives into single volume) using
> Vinum, and what will happen if let's say one drive fails in volume? Am I
> loosing the whole data, or I can just unplug the drive, tell vinum to use
> remaining drives with the data each drive holds? I'm not looking for fault
> tolerance solution.

Since you don't care about fault tolerance, you probably want to do
striping, also known as raid0.

> From my understanding, in above scenario, Vinum will first fill up the first
> drive, then second, etc.

That's called concatenation, which is different from striping.  Striping
balances the load across all the spindles.

> I have read the handbook articles, and I got general understanding of Vinum.
> I'm particularly interested to know if I will still be able to use volume in
> case of failed drive.

If you want to do that, then you want raid5.  If either a concat or
stripe set loses a drive, the data will need to be restored.

> Some minimal configuration examples would be greatly appreciated!

Read the following.  Really!

http://www.vinumvm.org/vinum/vinum.ps
http://www.vinumvm.org/cfbsd/vinum.txt

Both of these have examples and will clear up your confusion about
concat vs stripe vs raid5.  Concat is the easiest to add to, stripe has
the best performance, and raid5 trades write speed and n+1 drives for
resilience.  Raid10 gets the performance back at the cost of 2*n drives.

Broken down:

volume  - top level; what the filesystem talks to.  Mirroring is defined
          at the volume level, as is raid10 (mirrored striped plexes).
plex    - a virtual storage area made up of 1 or more subdisks for concat,
          2 or more for stripe, or 3 or more for raid5.
subdisk - an area delegated from a bsd partition.
drive   - the actual bsd partition (as in /dev/da1s1h).

Generally, the order is as follows:

-fdisk the drives to be used so they have at least one bsd slice each.
-use disklabel to edit the slice label so you have at least one
 partition of type vinum (that isn't the c partition).
-in an editor, create the configuration (a minimal sketch is at the end
 of this message):
    drives
    volume
     plex
      sd
 When you define the subdisks, don't use the whole drive; leave at
 least 64 blocks unused.
-use the file you created as input to vinum:
    vinum create -v -f config

Or you can cheat and just say
"vinum stripe -n volname /dev/ad0s1h /dev/ad1s1h /dev/ad2s1h /dev/ad3s1h"
(all on one line).

Raid5 plexes have to be initialized (vinum init) before use.

Then:

    newfs -v /dev/vinum/volname
    mount /dev/vinum/volname /mnt

Hopefully I haven't made your understanding worse.
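P.S. Here's the minimal striped-config sketch I mentioned, in case it
helps.  It's just an illustration, not something I've tested on your
hardware: the device names (/dev/ad0s1h through /dev/ad3s1h), the
volume name "bigvol", the stripe size, and the subdisk lengths are all
placeholders you'd adjust for your own disks (the docs above explain
how to pick them):

    drive d0 device /dev/ad0s1h
    drive d1 device /dev/ad1s1h
    drive d2 device /dev/ad2s1h
    drive d3 device /dev/ad3s1h
    volume bigvol
      plex org striped 279k
        sd length 2000m drive d0
        sd length 2000m drive d1
        sd length 2000m drive d2
        sd length 2000m drive d3

Then feed it to vinum and make a filesystem on the result:

    vinum create -v -f config
    newfs -v /dev/vinum/bigvol
    mount /dev/vinum/bigvol /mnt

If you decide you do want the fault tolerance after all, the same
layout with "plex org raid5 <stripesize>" and at least three subdisks
gives you a raid5 plex instead (remember to init it before newfs).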