Date:      Sun, 10 Jun 2012 21:22:50 +0200
From:      Kees Jan Koster <kjkoster@gmail.com>
To:        freebsd-stable@freebsd.org
Subject:   Re: FreeBSD 9.0 hangs on heavy I/O
Message-ID:  <065831B2-A996-4400-968B-494B474784F6@gmail.com>
In-Reply-To: <20120530002415.GC92444@in-addr.com>
References:  <BD5D6BB6-8CFF-456A-B03E-05454EB03AB6@gmail.com> <20120529203913.GB92444@in-addr.com> <43F6FDD2-3D31-44D7-82C7-4466D609ECF2@gmail.com> <20120530002415.GC92444@in-addr.com>

Dear All,

It has been a while since I worked on this, so I thought I'd send out an
update. It turns out I had two related issues: seemingly random hangs that
appear to be rooted in disk I/O, and, as a consequence of those, network
connections that are not being served quickly enough.

For the latter issue, I learned that by raising kern.ipc.somaxconn I could
make the system buffer the connections long enough so that the application
could accept all of them.

The effect is that my application now runs smoothly again, although there
is still a lot about this system's I/O load that I cannot explain.
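
For anyone who wants to watch the same thing on their own machine, the
per-disk load is visible with, for example (the device name is just my
SSD from the device list quoted below):

# gstat -a
# iostat -x ada3 1

gstat shows the %busy figure per GEOM provider, and iostat -x adds the
queue length and average transaction times per device.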

My next step is to move the code around a bit and change the way my
application uses the disk. There is still some buffering I can do before
writing, and I can move a small part of the I/O off to another spindle.
So while I am still not sure what is going on, I will focus on my own
code for a bit before I return to tuning FreeBSD for this workload.

Thanks to all who contributed to this thread.

Kees Jan


On 30 May 2012, at 02:24, Gary Palmer wrote:

> On Tue, May 29, 2012 at 10:59:58PM +0200, Kees Jan Koster wrote:
>> Dear Gary,
>>
>>>> # camcontrol devlist
>>>> <WDC WD740ADFD-00NLR1 20.07P20>    at scbus1 target 0 lun 0 (pass0,ada0)
>>>> <WDC WD740GD-00FLC0 33.08F33>      at scbus2 target 0 lun 0 (pass1,ada1)
>>>> <WDC WD740GD-00FLC0 33.08F33>      at scbus3 target 0 lun 0 (pass2,ada2)
>>>> <OCZ SUMMIT VBM1801Q>              at scbus4 target 0 lun 0 (pass3,ada3)
>>>> <PepperC Virtual Disc 1 0.01>      at scbus7 target 0 lun 0 (pass4,cd0)
>>>> <PepperC Virtual Disc 2 0.01>      at scbus8 target 0 lun 0 (pass5,cd1)
>>>
>>> Check the SSD for its internal block size and make sure your filesystem
>>> and partitions are aligned with the disk block size.  Unless there
>>> is something wrong with your SATA controller I'd expect a lot more than
>>> 273 IOPS/sec and ~30MByte/sec from a SSD.
>>
>>
>> Thank you for suggesting this. However, I recently went through my
>> file systems to fix disk alignment. I ended up aligning them to 1M
>> blocks, which raised the throughput from 6MB/s to about 60-80MB/s,
>> which is what I am seeing today.
>>
>> # gpart show
>> ...
>> =>       34  250069613  ada3  GPT  (119G)
>>         34       2014        - free -  (1M)
>>       2048  250067599     1  freebsd-ufs  (119G)
>>
>> Do you think I need to revisit alignment?
>
> I don't have the specific device you have, but looking at the test
> results from a random site for the same drive and firmware, they got
> 465 random IOPS for a 0.5KB block size and a lot more than 60-80MB/sec.
> I get 60-80MB/sec from a WD green drive in a pure write situation
> (admittedly using ZFS), so I'm a bit surprised you're seeing similar
> performance from your SSD, although now I look at it, the drive appears
> to be an older model.  It could be that you're running into issues
> where the drive is working hard as all the flash blocks need to be
> erased before reuse.  You may get some improvement if you tweak the
> filesystem block size to the SSD block size.  TRIM may also help if
> the drive supports it.
>
> Regards,
>
> Gary


--
Kees Jan

http://java-monitor.com/
kjkoster@kjkoster.org
+31651838192

Change is good. Granted, it is good in retrospect, but change is good.



