Date:      Thu, 16 Mar 2006 11:58:07 -0600
From:      "Jaime Bozza" <jbozza@qlinksmedia.com>
To:        <freebsd-stable@freebsd.org>
Cc:        freebsd-stable@mlists.thewrittenword.com
Subject:   RE: well-supported SATA RAID card?
Message-ID:  <E5797C35DEFA014A96C2171380F0EEE4E97877@bacchus.ThinkBurstMedia.local>

>>>*Rebuild times?

>>Can't give you an exact figure since it's been a while since I tested the
>>original rebuild, but we've migrated the RAID set (and volume) twice
>>since getting the system and the migrations happened within hours.  I
>>was able to expand the RAID Set (adding drives) and expand the
>>corresponding volume set to fill the drives all while the system was
>>running without a hitch.

>So you increased the size of a file-system on-the-fly?

Not a file-system but a volume.  I'm partitioning the volume into 800GB
chunks for this particular situation.  We just did it for the last time,
so I have some numbers.

Previous Configuration:
  11 WD4000YR 400GB drives
  RAID 6
  3600GB volume
  4 800GB partitions (using gpt)
  Remaining 400GB unused

Added: 5 WD4000YR 400GB drives
Time to Expand RAID set: 12 hours
Time to Expand Volume: 56 minutes

New Volume:
  RAID 6
  5600GB
  7 800GB partitions
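
The capacity figures above follow directly from RAID 6 arithmetic (two drives' worth of capacity go to parity). A quick sanity check in sh, using the drive counts and the 400GB WD4000YR size from the configurations above:

```shell
# RAID 6 usable capacity: (number of drives - 2) * drive size.
# Drive counts and 400GB size are taken from the message above.
size_gb=400
echo "before: $(( (11 - 2) * size_gb ))GB"   # 3600GB volume on 11 drives
echo "after:  $(( (16 - 2) * size_gb ))GB"   # 5600GB volume on 16 drives
```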

During the RAID Set expansion, the Areca redistributes the volume from
the original 11 drives across all 16, so it's a lot of writing.  It
basically rewrote all 3600GB of existing data, which accounts for the 12
hours.  Expanding the Volume "initializes" the extra space, and once
it's done FreeBSD sees the "new" larger volume.  Areca doesn't touch the
first part of the volume when expanding it, so existing data isn't
destroyed.
Of course, if you modified a volume set to make it smaller, you're
mostly out of luck.

I didn't have to reboot at any point during the process.  The most I had
to do was unmount the 4 existing partitions so that I had write access
to the volume (gpt doesn't allow write access while partitions are
mounted), then run gpt recover to rebuild the secondary partition table
at the end of the volume.

After that, it was just a simple matter of adding the 3 new partitions
and mounting them.
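
Sketched out, the gpt steps described above might look roughly like this. The device name (da0) and the sector count are illustrative assumptions, not from the original message:

```shell
# Hedged sketch of the post-expansion gpt workflow on FreeBSD,
# assuming the Areca volume appears as da0 with the 4 existing
# partitions da0p1-da0p4.  Sizes are in 512-byte sectors;
# 1562500000 sectors is roughly 800GB.
umount /dev/da0p1 /dev/da0p2 /dev/da0p3 /dev/da0p4  # need write access to da0
gpt recover da0          # rebuild the backup GPT at the new end of the volume
gpt show da0             # confirm the new free space is visible
gpt add -s 1562500000 da0    # add the 3 new ~800GB partitions
gpt add -s 1562500000 da0
gpt add -s 1562500000 da0
newfs /dev/da0p5         # then newfs and mount each new partition
```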

The above "Time to Expand Volume" was actually spent generating RAID 6
parity data for the additional 2 terabytes, so that should give a good
idea of the speed of the XOR engine.  This was at the maximum 80%
utilization setting for the background process.  I suspect it would have
been a little quicker if I had restarted and used the BIOS menu to
expand (since it would have run as a foreground process), but it's nice
to be able to keep the system in use while the processes were running.
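
A back-of-the-envelope conversion of those numbers (2000GB of parity in 56 minutes, both from the figures above; the MB/s conversion assumes 1GB = 1024MB):

```shell
# Rough XOR-engine throughput from the expansion figures quoted above.
gb=2000; minutes=56
echo "$(( gb / minutes )) GB/min"               # about 35 GB/min
echo "$(( gb * 1024 / (minutes * 60) )) MB/s"   # about 609 MB/s sustained
```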

Jaime Bozza



