Date:      Wed, 2 Oct 1996 09:51:54 +0200 (IST)
From:      Nadav Eiron <nadav@barcode.co.il>
To:        Fabio Cesar Gozzo <fabio@thomson.iqm.unicamp.br>
Cc:        questions@freebsd.org
Subject:   Re: Interleave size in CCD
Message-ID:  <Pine.BSF.3.91.961002093856.24687A-100000@gatekeeper.barcode.co.il>
In-Reply-To: <199610011800.PAA01299@thomson.iqm.unicamp.br>



On Tue, 1 Oct 1996, Fabio Cesar Gozzo wrote:

> Hello everybody,
>         I'm trying to concatenate 2 disks in my system (PPro,
> AHA 2940, 2 SCSI disks of 2GB each). The concatenated disk ccd0 will be
> used for large (2GB) scratch files, i.e., an intensive read/write process.
>         My question is: what would be a good value for the interleave?
>         Small values are good for reads and bigger ones for writes. But
> in this case, I have both kinds of access.
>         Any hint would be much appreciated.
>
>
>                                         Fabio Gozzo
>                                         fabio@iqm.unicamp.br
>
>

Well, here is my hint:
I don't have any specific experience with ccd, but I've configured many
RAID systems (all sorts of hardware/software). The interleave (sometimes
referred to as the stripe size) in a RAID 0 array (striping) has nothing
to do with the balance of read/write operations. Reads and writes only
behave differently when parity is used, and then the two layouts are
considered separate RAID classes (RAID 3 vs. RAID 4/5); even that
distinction is mostly irrelevant now that RAID controllers implement
write-back caches.
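
To make the mapping concrete: with an interleave of I sectors across N
member disks, logical sector S of the array lands on member (S / I) mod N.
A minimal sketch in sh, with made-up numbers (ccd's real layout may differ
in detail, e.g. near the end of unequally sized members):

    # which member disk serves logical sector S, for interleave I over N disks?
    S=1000 I=16 N=2
    echo "member disk:    $(( (S / I) % N ))"                 # (S/I) mod N
    echo "sector on disk: $(( ((S / I) / N) * I + S % I ))"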

The tradeoff with stripe size (and largely the RAID 3 vs. 5 decision as
well) is whether you have a more-or-less single stream of requests to the
disks or many users on it concurrently, and the size of the requests they
make. Small stripe sizes work best if you have a single process generating
relatively long I/O requests (as most DBMSs do): a small stripe size makes
the array behave like a normal disk, but with double the transfer rate.
If, on the other hand, you have many users doing relatively short, random
I/Os, you'd be better off using a large stripe size, larger than the
longest request that will be made to the array. That halves the access
time on average, since the two arms can seek independently and concurrent
requests wait in half as long a queue, while the transfer rate for each
individual request stays the same as that of a single disk; total
throughput will, of course, be doubled anyway. With many users, large
stripe sizes provide the best throughput.
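
With ccd the interleave is given when you configure the array (in 512-byte
sectors, if I read ccdconfig(8) right), so trying different values just
means rebuilding it. Note that changing the interleave rearranges the
on-disk layout, so you must newfs and reload your data each time. Device
names and numbers below are only examples:

    # small interleave (16 sectors = 8KB), good for one sequential stream:
    ccdconfig ccd0 16 none /dev/sd0e /dev/sd1e
    # tear it down and retry with a large interleave (128 sectors = 64KB):
    ccdconfig -u ccd0
    ccdconfig ccd0 128 none /dev/sd0e /dev/sd1e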

All in all, if you have one process accessing the disk, you'll probably
be better off with small stripe sizes, but there is always a tradeoff. If
you care about performance and don't know the application well enough,
try it both ways!
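
A crude way to compare the two setups is to time a big sequential write
and read on each (paths and sizes below are just placeholders; a run of
your real scratch job would be an even better test):

    # after newfs'ing and mounting the array:
    dd if=/dev/zero of=/mnt/scratch/test bs=64k count=16384   # ~1GB write
    dd if=/mnt/scratch/test of=/dev/null bs=64k               # read it back
    # unconfigure, rebuild with the other interleave, and repeat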


All this is said without any specific reference to the ccd driver. For
example, for my reasoning to hold, it would have to issue two concurrent
I/Os to the member disks when two requests are each shorter than the
stripe size and happen to fall on different member disks.

Hope this helps,
Nadav


