Date:      Sun, 2 Apr 1995 00:25:43 -0800 (PST)
From:      "Rodney W. Grimes" <rgrimes@gndrsh.aac.dev.com>
To:        peter@bonkers.taronga.com (Peter da Silva)
Cc:        terry@cs.weber.edu, PVinci@ix.netcom.com, hackers@FreeBSD.org
Subject:   Re: large filesystems/multiple disks [RAID]
Message-ID:  <199504020825.AAA01169@gndrsh.aac.dev.com>
In-Reply-To: <199504020609.AAA23597@bonkers.taronga.com> from "Peter da Silva" at Apr 2, 95 00:09:45 am

> 
> > It's fragile because you could for instance have four file systems
> > with blocks in the same 16M area of a disk.
> 
> Um, why would you do that? Doesn't that sort of counter the whole reason
> for running file systems over multiple disks?

I would think so.  The way Auspex handles this is that the blocking factor
can be tuned when the logical volume is created.  We found that for
striped volumes it was best to have this close to either the cylinder
size or the size of the write-behind buffer in the drive to maximize
data transfer rate.  ZBR drives kinda throw out any attempt at
the cylinder-size optimization, so we most often use the cache size.

I have seen file system performance in excess of 8MB/sec using drives
that have raw transfer rates around 2.4MB/sec each (4-drive-wide stripe,
256Kbyte blocking factor).  The drive subsystem could probably have
performed better than this, but I had to access the drives over FDDI,
as the local-host-to-drive bandwidth in an Auspex is severely limited
by the way the local host gets to the disk controllers vs the way
that the network boards get to them (can you say dedicated NFS engine :-)).

The numbers scaled well as I changed the width of the stripe from 2
to 5 drives (above 4 there was no improvement, again attributable to
FDDI bandwidth limitations).  Performance actually decreased
drastically any time the blocking factor was raised above 256K
(which also happens to match the size of the cache in the drives);
it dropped marginally when lowered below 256K until about 64K, then
fell off sharply again.

I would really like to see the ability to do this type of drive
striping in FreeBSD.  With current CPU technology pushing 200+MB/sec,
and memory systems rapidly approaching that speed (yes, the new EDO SIMM
technology should push main memory speeds up into the 200MB/sec range
by year end), disk drive bandwidth is going to become a big issue
once again.

-- 
Rod Grimes                                      rgrimes@gndrsh.aac.dev.com
Accurate Automation Company                   Custom computers for FreeBSD
