From owner-freebsd-scsi@FreeBSD.ORG Mon Jan 16 05:32:50 2012
From: "Desai, Kashyap" <Kashyap.Desai@lsi.com>
To: "Kenneth D. Merry", John
Cc: "freebsd-scsi@freebsd.org"
Date: Mon, 16 Jan 2012 10:51:42 +0530
Subject: RE: mps driver chain_alloc_fail / performance ?
In-Reply-To: <20120114232245.GA57880@nargothrond.kdm.org>
References: <20120114051618.GA41288@FreeBSD.org> <20120114232245.GA57880@nargothrond.kdm.org>

Which driver version is this?  Our 09.00.00.00 driver (which is in the
pipeline to be committed) has a chain buffer count of 2048, and our test
team has verified it with around 150 drives.

As Ken suggested, can you try increasing MPS_CHAIN_FRAMES to 4096 or 2048?

~ Kashyap

> -----Original Message-----
> From: owner-freebsd-scsi@freebsd.org [mailto:owner-freebsd-scsi@freebsd.org]
> On Behalf Of Kenneth D. Merry
> Sent: Sunday, January 15, 2012 4:53 AM
> To: John
> Cc: freebsd-scsi@freebsd.org
> Subject: Re: mps driver chain_alloc_fail / performance ?
>
> On Sat, Jan 14, 2012 at 05:16:18 +0000, John wrote:
> > Hi Folks,
> >
> > I've started poking through the source for this, but thought I'd
> > go ahead and post to ask others for their opinion.
> >
> > I have a system with three LSI SAS HBA cards installed:
> >
> > mps0: port 0x5000-0x50ff mem 0xf5ff0000-0xf5ff3fff,0xf5f80000-0xf5fbffff irq 30 at device 0.0 on pci13
> > mps0: Firmware: 05.00.13.00
> > mps0: IOCCapabilities: 285c
> > mps1: port 0x7000-0x70ff mem 0xfbef0000-0xfbef3fff,0xfbe80000-0xfbebffff irq 48 at device 0.0 on pci33
> > mps1: Firmware: 07.00.00.00
> > mps1: IOCCapabilities: 1285c
> > mps2: port 0x6000-0x60ff mem 0xfbcf0000-0xfbcf3fff,0xfbc80000-0xfbcbffff irq 56 at device 0.0 on pci27
> > mps2: Firmware: 07.00.00.00
> > mps2: IOCCapabilities: 1285c
>
> The firmware on those boards is a little old.  You might consider
> upgrading.
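For reference, the MPS_CHAIN_FRAMES bump suggested above is a compile-time
change; a rough sketch of the edit, assuming the define still lives in
sys/dev/mps/mpsvar.h as in the driver sources I have handy, is:

    /* sys/dev/mps/mpsvar.h -- exact location assumed, check your source tree */
    #define MPS_CHAIN_FRAMES        4096    /* was 2048; more headroom for ~200 drives */

followed by rebuilding and reloading the mps module (or rebuilding the
kernel if mps is compiled in).  At idle, hw.mps.<unit>.chain_free should
then report the new pool size, and chain_alloc_fail should stay at 0 if
the pool is now large enough.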
>
> > Basically, one for internal and two for external drives, for a total
> > of about 200 drives, i.e.:
> >
> > # camcontrol inquiry da10
> > pass21: Fixed Direct Access SCSI-5 device
> > pass21: Serial Number 6XR14KYV0000B148LDKM
> > pass21: 600.000MB/s transfers, Command Queueing Enabled
>
> That's a lot of drives!  I've only run up to 60 drives.
>
> > When running the system under load, I see the following reported:
> >
> > hw.mps.0.allow_multiple_tm_cmds: 0
> > hw.mps.0.io_cmds_active: 0
> > hw.mps.0.io_cmds_highwater: 772
> > hw.mps.0.chain_free: 2048
> > hw.mps.0.chain_free_lowwater: 1832
> > hw.mps.0.chain_alloc_fail: 0     <--- Ok
> >
> > hw.mps.1.allow_multiple_tm_cmds: 0
> > hw.mps.1.io_cmds_active: 0
> > hw.mps.1.io_cmds_highwater: 1019
> > hw.mps.1.chain_free: 2048
> > hw.mps.1.chain_free_lowwater: 0
> > hw.mps.1.chain_alloc_fail: 14369 <---- ??
> >
> > hw.mps.2.allow_multiple_tm_cmds: 0
> > hw.mps.2.io_cmds_active: 0
> > hw.mps.2.io_cmds_highwater: 1019
> > hw.mps.2.chain_free: 2048
> > hw.mps.2.chain_free_lowwater: 0
> > hw.mps.2.chain_alloc_fail: 13307 <---- ??
> >
> > So finally my question (sorry, I'm long winded): what is the
> > correct way to increase the number of elements in sc->chain_list
> > so mps_alloc_chain() won't run out?
>
> Bump MPS_CHAIN_FRAMES to something larger.  You can try 4096 and see
> what happens.
>
> > static __inline struct mps_chain *
> > mps_alloc_chain(struct mps_softc *sc)
> > {
> >     struct mps_chain *chain;
> >
> >     if ((chain = TAILQ_FIRST(&sc->chain_list)) != NULL) {
> >         TAILQ_REMOVE(&sc->chain_list, chain, chain_link);
> >         sc->chain_free--;
> >         if (sc->chain_free < sc->chain_free_lowwater)
> >             sc->chain_free_lowwater = sc->chain_free;
> >     } else
> >         sc->chain_alloc_fail++;
> >     return (chain);
> > }
> >
> > A few layers up, it seems like it would be nice if buffer exhaustion
> > were reported even when debugging isn't enabled... at least maybe
> > the first time.
>
> It used to report being out of chain frames every time it happened,
> which wound up being too much.  You're right, doing it once might be
> good.
>
> > It looks like changing the related #define is the only way.
>
> Yes, that is currently the only way.  Yours is by far the largest setup
> I've seen so far.  I've run the driver with 60 drives attached.
>
> > Does anyone have any experience with tuning this driver for high
> > throughput/large disk arrays?  The shelves are all dual-pathed, and
> > with the new gmultipath active/active support, I've still only been
> > able to achieve about 500 MBytes per second across the
> > controllers/drives.
>
> Once you bump up the number of chain frames to the point where you
> aren't running out, I doubt the driver will be the big bottleneck.
> It'll probably be other things higher up the stack.
>
> > ps: I currently have a ccd on top of these drives, which seems to
> > perform more consistently than ZFS.  But that's an email for a
> > different day :-)
>
> What sort of ZFS topology did you try?
>
> I know for raidz2, and perhaps for raidz, ZFS is faster if your number
> of data disks is a power of 2.
>
> If you want raidz2 protection, try creating arrays in groups of 10, so
> you wind up having 8 data disks.
>
> Ken
> --
> Kenneth Merry
> ken@FreeBSD.ORG
> _______________________________________________
> freebsd-scsi@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-scsi
> To unsubscribe, send any mail to "freebsd-scsi-unsubscribe@freebsd.org"
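
Following up on the reporting point above: a rough sketch of what warning
only on the first chain-frame allocation failure could look like, based on
the mps_alloc_chain() quoted earlier.  The device_printf() call, the use of
chain_alloc_fail as the "already warned" test, and the sc->mps_dev field
name are my additions and assumptions, not committed driver code:

    static __inline struct mps_chain *
    mps_alloc_chain(struct mps_softc *sc)
    {
        struct mps_chain *chain;

        if ((chain = TAILQ_FIRST(&sc->chain_list)) != NULL) {
            TAILQ_REMOVE(&sc->chain_list, chain, chain_link);
            sc->chain_free--;
            if (sc->chain_free < sc->chain_free_lowwater)
                sc->chain_free_lowwater = sc->chain_free;
        } else {
            /*
             * Complain once rather than on every failure (too noisy)
             * or only with debugging enabled (too quiet).  The existing
             * failure counter doubles as the "already warned" flag.
             */
            if (sc->chain_alloc_fail == 0)
                device_printf(sc->mps_dev,
                    "out of chain frames, consider increasing "
                    "MPS_CHAIN_FRAMES\n");
            sc->chain_alloc_fail++;
        }
        return (chain);
    }

That way a setup like John's would see one line in dmesg the first time a
controller runs dry, while the sysctl counter still records how often it
happened.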