Date:      Thu, 17 Apr 2008 21:01:01 +0100
From:      Pete French <petefrench@ticketswitch.com>
To:        zbeeble@gmail.com
Cc:        stable@freebsd.org
Subject:   Re: Dreadful gmirror performance, though each half works fine
Message-ID:  <E1JmaI1-000GTm-He@dilbert.ticketswitch.com>
In-Reply-To: <5f67a8c40804171155o72b2ab1ctbc116510c39025f3@mail.gmail.com>

> In the end we found that ggate was crashy after a week or two of heavy use,
> too... despite its performance problems (which can be somewhat fixed by
> telling gmirror to only read from the local disk)

That last part interests me - how did you manage to make it do that?
I read the man page, and the 'prefer' balancing algorithm should
let you tell it which disc to read from - but there is no way to
change the priority on a disc in a mirror that I can see. It can only
be set when inserting new drives. The default is '0' and hence it's
not possible to attach a new drive with a priority below that of
the existing local drive. I tried using '-1' as a priority to fix
this, but it came up as 255.
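
For the record, this is roughly what I tried - a sketch only, assuming
a mirror called gm0 whose remote half is ggate0 (substitute your own
provider names):

    # put the mirror on the 'prefer' algorithm, which reads only
    # from the component with the highest priority
    gmirror configure -b prefer gm0

    # priority can only be given when a component is inserted; the
    # local disc already sits at the default of 0, and '-1' here
    # gets read as an unsigned byte, so the remote half comes up
    # at 255 and is preferred rather than avoided
    gmirror insert -p -1 gm0 ggate0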

> Certainly ZFS needs lots of memory --- most of my systems running ZFS have
> 4G of RAM and are running in 64 bit mode.  With the wiki's recommendation of
> a large number of kernel pages, I haven't had a problem with crashing.  I am
> using ZFS RAIDZ as a large data store and ZFS mirroring (separately) on my
> workstation as /usr, /var, and home directories.

All our machines are 64 bit with between 4 and 16 gig of RAM too, so I could
try that. So you trust it then? I'd be interested to know exactly which
options from the wiki page you ended up using for both kernel pages and ZFS
itself. That would be my ideal solution if it is stable enough.
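
For anyone following along, the sort of thing the wiki suggests is a
few lines in /boot/loader.conf - the values below are only an
illustration for a machine with around 4G of RAM, not something anyone
here has confirmed running:

    # /boot/loader.conf - enlarge the kernel address space so the
    # ZFS ARC has room to grow (illustrative values only)
    vm.kmem_size="1536M"
    vm.kmem_size_max="1536M"

    # cap the ARC so it cannot starve the rest of the kernel
    vfs.zfs.arc_max="512M"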

> efficient.  Removing the read load from the ggated drive seems to help quite
> a bit in overall performance.  But even with this change, I still found that
> ggate would crash after several days to a week of heavy use.

Well, I upped the networking buffers and queue sizes to what I would
normally consider 'stupid' values, and now it seems to have settled down
and is performing well (am using the 'load' balancing algorithm). I shall
see if it stays that way for the next few weeks, given what you have just
said. I should probably try ZFS on it too, just for my own curiosity.
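
In case it is useful to anyone, the settings I mean are along these
lines (in /etc/sysctl.conf - the numbers are deliberately oversized
and should be taken as an example, not a recommendation):

    # /etc/sysctl.conf - oversized socket buffers for ggate traffic
    kern.ipc.maxsockbuf=16777216
    net.inet.tcp.sendspace=1048576
    net.inet.tcp.recvspace=1048576

    # deeper IP input queue so bursts from the mirror are not dropped
    net.inet.ip.intr_queue_maxlen=4096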

cheers,

-pete.


