Date:      Mon, 09 Apr 2007 10:41:58 -0500
From:      Eric Anderson <anderson@freebsd.org>
To:        Gergely CZUCZY <phoemix@harmless.hu>
Cc:        Pawel Jakub Dawidek <pjd@freebsd.org>, freebsd-geom@freebsd.org
Subject:   Re: volume management
Message-ID:  <461A5EC6.8010000@freebsd.org>
In-Reply-To: <20070409153203.GA88082@harmless.hu>
References:  <20070408140215.GA54201@harmless.hu> <86k5wmu420.fsf@dwp.des.no>	<20070408181916.GA59715@harmless.hu>	<86bqhyu225.fsf@dwp.des.no> <461A4D93.3010200@freebsd.org>	<20070409143818.GA86722@harmless.hu>	<20070409152401.GG76673@garage.freebsd.pl> <20070409153203.GA88082@harmless.hu>

On 04/09/07 10:32, Gergely CZUCZY wrote:
> On Mon, Apr 09, 2007 at 05:24:01PM +0200, Pawel Jakub Dawidek wrote:
>> On Mon, Apr 09, 2007 at 04:38:18PM +0200, Gergely CZUCZY wrote:
>>> On Mon, Apr 09, 2007 at 09:28:35AM -0500, Eric Anderson wrote:
>>>>> On 04/08/07 13:57, Dag-Erling Smørgrav wrote:
>>>>> Gergely CZUCZY <phoemix@harmless.hu> writes:
>>>>>> yeap, i know about ZFS. i assume it will need around 1.5-2 years
>>>>>> from now, when 7.0-RELEASE will be ready.
>>>>> No, it's expected this fall.
>>>>>> and i'm looking for a solution for a production environment within
>>>>>> a year.
>>>>> There is no other solution.
>>>> How about gconcat?  You could create a mirror, then gconcat another
>>>> mirror onto it, and so on, extending the GEOM.  Then run growfs on the
>>>> extended volume.  Wouldn't that work?
>>> why gmirror? gconcat could perhaps be used for this,
>>> but
>>> 1) i see no attach operation for gconcat to add
>>> providers on the fly.
>>> 2) this would require always creating subpartitions/bsdlabels
>>> on the disk, and adding a bit more as needed.
>> Slow down :)  Implementing an off-line 'attach' operation is trivial,
>> and an on-line 'attach' is also easy, but since you need to unmount the
>> file system anyway, off-line attach is fine.
>>
>> Let's assume you have currently two disks: da0 and da1.
>>
>> 	# gconcat label foo da0 da1
>> 	# newfs /dev/concat/foo
>> 	# mount /dev/concat/foo /foo
>>
>> and you want to extend your storage by adding two disks: da2 and da3:
>>
>> 	# umount /foo
>> 	# gconcat stop foo
>> 	# gconcat label foo da0 da1 da2 da3
>> 	# growfs /dev/concat/foo
>> 	# mount /dev/concat/foo /foo
>>
>> That's all.
>>
>> You can operate on mirrors too:
>>
>> 	# gmirror label foo0 da0 da1
>> 	# gconcat label foo mirror/foo0
>> 	# newfs /dev/concat/foo
>> 	# mount /dev/concat/foo /foo
>>
>> And extending:
>>
>> 	# gmirror label foo1 da2 da3
>> 	# umount /foo
>> 	# gconcat stop foo
>> 	# gconcat label foo mirror/foo0 mirror/foo1
>> 	# growfs /dev/concat/foo
>> 	# mount /dev/concat/foo /foo
> yes, this was the trivial part, but:
> 
> 1) to grow the volume, i need a device (disk/slice/label/etc).
> if i grow it often, i need a lot of devices.
> 2) these increment devices (the ones i grow by) have to be
> created, and each of them has to be chopped out of the
> storage pool.
> 
> please also look at the bsdlabel issue i mentioned.
> gconcating is the easy part of it. recursive
> bsdlabeling is what i have mostly referred to as the
> real issue. i really don't think this is the way to
> do it...
> 
> to get down to the details: we are running our systems
> on 3ware cards. the end of the disk (usually total minus 20G) is
> the storage pool. under linux's LVM2 we use this as a pool
> to allocate space for our services. at startup only
> a minimal part of the pool is used, and as a service needs
> more space, we enlarge its available space in little increments.
> so we are not adding new disks or anything, as you assumed
> in your examples above. we just give it a bit more space, nothing
> special.
> 
> new disks are not being added, that's why i had said "storage pool",
> to reflect this situation. it wasn't just a term for an abstraction
> level :)
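For what it's worth, the gconcat + growfs procedure Pawel quoted above can
be tried out without touching real disks by using md(4) memory disks.  A
sketch only -- the sizes, md unit numbers, and mount point are arbitrary
examples, and everything needs root on a FreeBSD box:

```shell
# Fake two "disks" with 64MB swap-backed memory disks:
mdconfig -a -t swap -s 64m -u 0    # creates /dev/md0
mdconfig -a -t swap -s 64m -u 1    # creates /dev/md1

# Same steps as in the quoted example:
gconcat label foo md0 md1
newfs /dev/concat/foo
mount /dev/concat/foo /mnt

# Later, "buy" two more disks and extend:
mdconfig -a -t swap -s 64m -u 2
mdconfig -a -t swap -s 64m -u 3
umount /mnt
gconcat stop foo
gconcat label foo md0 md1 md2 md3
growfs /dev/concat/foo
mount /dev/concat/foo /mnt
```

Handy for convincing yourself that growfs really does pick up the extra
space before doing it on production hardware.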


I really think gvirstor is a good fit for you.  Search this list for 
some info on it, or just play with it a bit.  The author is active on 
this list and so he'll probably pipe up.
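To give a flavor of why it fits: gvirstor exposes a device with a large
*virtual* size up front, so the file system never needs growing -- you just
add physical components as the pool actually fills.  A rough sketch from
memory (check gvirstor(8) for the exact flags; device names and sizes here
are only examples):

```shell
# Create a volume that advertises 1TB of virtual space, initially
# backed by a single (much smaller) physical provider:
gvirstor label -s 1t pool /dev/da0s1e
newfs /dev/virstor/pool
mount /dev/virstor/pool /pool

# When the physical backing runs low, attach another component on
# the fly -- no umount, no growfs, since the file system already
# sees the full 1TB:
gvirstor add pool /dev/da1s1e
```

That maps pretty directly onto the LVM2-style "allocate from a pool in
little increments" workflow you described.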

Eric


