Date:      Sat, 15 Sep 2001 23:30:25 +0200
From:      Bernd Walter <ticso@mail.cicely.de>
To:        sthaug@nethelp.no
Cc:        ticso@mail.cicely.de, john_wilson100@excite.com, freebsd-hackers@FreeBSD.ORG, freebsd-stable@FreeBSD.ORG
Subject:   Re: VINUM PANIC ON -STABLE
Message-ID:  <20010915233025.F17960@cicely20.cicely.de>
In-Reply-To: <48209.1000586148@verdi.nethelp.no>; from sthaug@nethelp.no on Sat, Sep 15, 2001 at 10:35:48PM +0200
References:  <20010915215206.E17960@cicely20.cicely.de> <48209.1000586148@verdi.nethelp.no>

On Sat, Sep 15, 2001 at 10:35:48PM +0200, sthaug@nethelp.no wrote:
> > I saw two unusual points in your config.
> > 1. why use more than one vinum partition per physical drive?
> >    vinum itself handles it very well - it's a volume manager.
> > 2. I don't think that a stripe size not matching n * page size is a
> >    good choice.
> 
> There are excellent reasons why you want a stripe size which is *not* a
> multiple of the page size: To distribute the inodes across all disks.

It's unusual to have less than 4k (or 8k) of inode area per cylinder group.
An inode is 128 bytes, so 4k holds only 32 inodes.
At best you would use not the page size but the cluster size, which is
64k on FFS.
Breaking a single access into 2 or more physical accesses means additional
load and more wait time: on a write the filesystem has to wait for 2 disks
to acknowledge the write-back, which gives you bigger delays, and on a read
you also have to wait for both disks.

What you usually really want is to break up the cylinder groups so that
their management area gets distributed over the physical disks.
FFS itself does a good job of distributing data over the cylinder groups.

Here's a random example:
ticso@cicely6# newfs -N /dev/da0b
Warning: 1408 sector(s) in last cylinder unallocated
/dev/da0b:      400000 sectors in 98 cylinders of 1 tracks, 4096 sectors
        195.3MB in 5 cyl groups (22 c/g, 44.00MB/g, 9728 i/g)
super-block backups (for fsck -b #) at:
 32, 90144, 180256, 270368, 360480

We have 9728 inodes per group, which means 1216k of inode area for each group.
That is much more than 4k.
If you have two disks and use a 256k stripe size, even the inode area
gets broken up.
But we have a 44M cylinder group size, and that is what we want to break up.
With 2 disks, 192k is a sensible stripe size:
44 * 1024 / 2 / 192 = 117.33...
The result is not an integer, so consecutive cylinder groups do not all
start on the same disk and their management areas get spread out.
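A quick sanity check of those numbers, again with the same toy
round-robin layout (disk_of is my own helper, not anything vinum
actually exposes; disk numbering is made up):

    # Numbers from the newfs output above, in KB.
    inode_area_kb = 9728 * 128 / 1024      # 1216.0 KB of inodes per cg
    cg_kb = 44 * 1024                      # 45056 KB per cylinder group

    def disk_of(offset_kb, stripe_kb, ndisks=2):
        # Round-robin: stripe n of the volume lives on disk n % ndisks.
        return (offset_kb // stripe_kb) % ndisks

    for stripe_kb in (256, 192):
        starts = [disk_of(cg * cg_kb, stripe_kb) for cg in range(5)]
        print(stripe_kb, cg_kb / 2 / stripe_kb, starts)
    # 256:  88.0    [0, 0, 0, 0, 0]  every cg (and its management area)
    #                                begins on the same disk
    # 192: 117.33.. [0, 0, 1, 0, 0]  the cg starts drift across the disks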

-- 
B.Walter              COSMO-Project         http://www.cosmo-project.de
ticso@cicely.de         Usergroup           info@cosmo-project.de


To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-stable" in the body of the message



