Date:      Fri, 25 Sep 2009 15:21:56 -0400
From:      Nathaniel W Filardo <nwf@cs.jhu.edu>
To:        freebsd-fs@freebsd.org
Subject:   Re: kern/139039: [zfs] zpool scrub makes system unbearably slow
Message-ID:  <20090925192156.GF22220@gradx.cs.jhu.edu>
In-Reply-To: <200909251828.n8PISu6Z031842@freefall.freebsd.org>



On Fri, Sep 25, 2009 at 06:28:56PM +0000, pjd@freebsd.org wrote:
> Synopsis: [zfs] zpool scrub makes system unbearably slow
>
> State-Changed-From-To: open->feedback
> State-Changed-By: pjd
> State-Changed-When: Fri 25 Sep 2009 18:27:48 UTC
> State-Changed-Why:
> Could you tell which threads are consuming most CPU time?
> Pasting first few lines from 'top -SH' should be enough.
>
>
> Responsible-Changed-From-To: freebsd-fs->pjd
> Responsible-Changed-By: pjd
> Responsible-Changed-When: Fri 25 Sep 2009 18:27:48 UTC
> Responsible-Changed-Why:
> I'll take this one.
>
> http://www.freebsd.org/cgi/query-pr.cgi?pr=139039

Thanks for looking at this.

The system here is building OpenLDAP in a jail, but that build rarely
appears near the top of the list.  Typical output is...

hydra# top -jSHP
267 processes: 15 running, 236 sleeping, 16 waiting
CPU 0:  0.3% user,  0.0% nice, 97.4% system,  2.3% interrupt,  0.0% idle
CPU 1: 10.7% user,  0.0% nice, 44.4% system,  1.8% interrupt, 43.0% idle
Mem: 147M Active, 242M Inact, 926M Wired, 4008K Cache, 213M Buf, 672M Free
Swap: 4096M Total, 4096M Free

  PID JID USERNAME PRI NICE   SIZE    RES STATE   C   TIME   WCPU COMMAND
   11   0 root     171 ki31     0K    64K RUN     1 376:41 57.18% {idle: cpu1}
    0   0 root     -16    0     0K  3520K CPU0    0   6:04 13.96% {spa_zio_7}
    0   0 root     -16    0     0K  3520K -       0   5:58 13.33% {spa_zio_0}
    0   0 root     -16    0     0K  3520K -       0   5:58 13.13% {spa_zio_3}
    0   0 root     -16    0     0K  3520K -       0   6:01 13.09% {spa_zio_5}
    0   0 root     -16    0     0K  3520K RUN     0   6:01 13.04% {spa_zio_2}
    0   0 root     -16    0     0K  3520K RUN     0   6:00 13.04% {spa_zio_6}
    0   0 root     -16    0     0K  3520K -       0   5:59 12.65% {spa_zio_1}
    0   0 root     -16    0     0K  3520K -       1   6:00 12.11% {spa_zio_4}
   42   0 root      -8    -     0K   480K spa->s  0   4:50  8.54% {txg_thread_enter}
    4   0 root      -8    -     0K    32K -       0   2:13  1.95% g_down
   12   0 root     -40    -     0K   544K WAIT    0   1:25  0.98% {swi2: cambio}
    0   0 root     -16    0     0K  3520K -       0   0:24  0.20% {spa_zio_7}
    0   0 root     -16    0     0K  3520K -       1   0:23  0.20% {spa_zio_3}
   12   0 root     -64    -     0K   544K RUN     0   0:45  0.15% {vec1860: mpt0}
    0   0 root     -16    0     0K  3520K -       1   0:58  0.10% {spa_zio}
   12   0 root     -32    -     0K   544K WAIT    0   1:58  0.05% {swi4: clock}
   42   0 root      -8    -     0K   480K tx->tx  1   0:31  0.05% {txg_thread_enter}
   11   0 root     171 ki31     0K    64K RUN     0 774:48  0.00% {idle: cpu0}

The only thing that seems odd to me is that CPU1 is sitting essentially
idle (I have never seen CPU0 be idle when the system is scrubbing).  The
spa_zio_* threads do in fact run on CPU1, but seemingly rarely.
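[Editor's note: for anyone reproducing this diagnosis, a minimal sketch of
how the output above could be captured non-interactively on FreeBSD, using
only flags documented in top(1) and zpool(8); the pool name is a placeholder,
and the FreeBSD guard is an assumption so the snippet is a no-op elsewhere.]

```shell
#!/bin/sh
# Gather the same per-CPU / per-thread snapshot pjd asked for.
# top flags: -j jail IDs, -S system processes, -H threads,
#            -P per-CPU stats, -b batch mode, -d1 one display.
if [ "$(uname -s)" = "FreeBSD" ]; then
    top -jSHPb -d1 | head -n 30

    # The "scan:" line of zpool status reports scrub progress and rate.
    # Replace "tank" with the actual pool name.
    zpool status tank
fi
```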

--nwf;



