Date:      Sun, 19 May 2013 16:28:06 -0400
From:      Paul Kraus <paul@kraus-haus.org>
To:        Dennis Glatting <freebsd@pki2.com>
Cc:        Tijl Coosemans <tijl@coosemans.org>, freebsd-questions@freebsd.org
Subject:   Re: More than 32 CPUs under 8.4-P
Message-ID:  <B06924FB-141E-421B-96E0-CEFE37C277A5@kraus-haus.org>
In-Reply-To: <1368978686.16472.25.camel@btw.pki2.com>
References:  <1368897188.16472.19.camel@btw.pki2.com> <51989FDA.5070302@coosemans.org> <1368978686.16472.25.camel@btw.pki2.com>

On May 19, 2013, at 11:51 AM, Dennis Glatting <freebsd@pki2.com> wrote:

> ZFS hangs on multi-socket systems (Tyan, Supermicro) under 9.1. ZFS does
> not hang under 8.4. This (and one other 4-socket machine) is a production
> system.

	Can you be more specific? I have been running 9.0 and 9.1
systems with multiple CPUs, all on ZFS, with no (CPU-related*) issues.

* I say no CPU-related issues because I have run into SATA timeout
issues with an external SATA enclosure holding 4 drives (I know, SATA port
expanders are evil, but they are my best option here). Sometimes the zpool
hangs hard; sometimes it just becomes unresponsive for a while. My "fix",
such as it is, is to tune the ZFS per-vdev queue depth as follows:

vfs.zfs.vdev.min_pending="3"
vfs.zfs.vdev.max_pending="5"

The defaults are 5 and 10, respectively; when I run with those I hit the
timeout issues, but only under very heavy I/O load. I only generate such
load when migrating large amounts of data, which thankfully does not
happen all that often.
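For anyone who wants to try this, here is a quick sketch of how I apply
the settings (assuming /boot/loader.conf; whether these are also writable
at runtime as sysctls may depend on your kernel/build, so treat the live
commands as an experiment, not gospel):

    # /boot/loader.conf -- loader tunables, read at boot
    vfs.zfs.vdev.min_pending="3"
    vfs.zfs.vdev.max_pending="5"

    # Check the values currently in effect:
    sysctl vfs.zfs.vdev.min_pending vfs.zfs.vdev.max_pending

    # If your kernel exposes them as writable sysctls, you can
    # experiment without rebooting:
    sysctl vfs.zfs.vdev.min_pending=3
    sysctl vfs.zfs.vdev.max_pending=5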

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company



