Date:      Tue, 14 Sep 2010 18:32:08 +0200
From:      Wiktor Niesiobedzki <bsd@vink.pl>
To:        grarpamp <grarpamp@gmail.com>
Cc:        freebsd-performance@freebsd.org
Subject:   Re: Sequential disk IO saturates system
Message-ID:  <AANLkTi=1qjnM4On4+9AiwC4ZMg1foezDD=2fuhc04Z+V@mail.gmail.com>
In-Reply-To: <AANLkTikLbP-UGkzg9R4dzTJUF-36gnJzk3NyUz0ea+6_@mail.gmail.com>
References:  <AANLkTikLbP-UGkzg9R4dzTJUF-36gnJzk3NyUz0ea+6_@mail.gmail.com>

Hi,

You may try playing with the kern.sched.preempt_thresh setting (as per
http://docs.freebsd.org/cgi/getmsg.cgi?fetch=665455+0+archive/2010/freebsd-stable/20100905.freebsd-stable).
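If you want to experiment with it, the sysctl can be changed at runtime
and persisted in /etc/sysctl.conf. A rough sketch (the value 224 is only
an illustration of the kind of value people try, not a recommendation;
pick your own and measure):

```sh
# Check the current preemption threshold (SCHED_ULE):
sysctl kern.sched.preempt_thresh

# Try a different threshold at runtime, e.g.:
sysctl kern.sched.preempt_thresh=224

# To keep it across reboots, add a line like this to /etc/sysctl.conf:
# kern.sched.preempt_thresh=224
```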

Renicing the process doesn't give any improvement, because it is the
g_eli* kernel thread that is consuming your CPU, and that thread runs
at a fairly high priority.
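For reference, you can see those kernel threads and their priorities
with ps (a sketch; the exact columns vary a bit between releases):

```sh
# -H lists kernel threads as well; the "pri" column shows the
# scheduling priority (lower number = higher priority):
ps -axH -o pid,pri,comm | grep g_eli
```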

Since my last update I don't see that much of the problem, but
previously "dd if=/dev/gzero.eli of=/dev/null bs=1M" could cause CPU
starvation of all other processes. That doesn't happen anymore
(though I still see some performance drops during txg commits, e.g. in
network throughput).

I've also changed vfs.zfs.txg.synctime from the default of 5 seconds
to 1 second, so txg commits are shorter, though more frequent. This
helps alleviate my problems. YMMV.
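In case it helps, this is how I set it (a runtime sysctl on my 8.x ZFS
box; the tunable name may differ on other versions):

```sh
# Shorten the txg commit target time from the 5 s default to 1 s:
sysctl vfs.zfs.txg.synctime=1

# Persist it across reboots via /etc/sysctl.conf:
# vfs.zfs.txg.synctime=1
```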


Cheers,

Wiktor Niesiobedzki


2010/9/14 grarpamp <grarpamp@gmail.com>:
> We have [re]nice to deal with user processes.
>
> Is there no way to effectively rate limit the disk pipe? As it is
> now, this machine can't do any userland work because it's completely
> buried by the simple degenerate case of:
>   cp /fs_a/.../giga_size_files /fs_b/...
>
> Geli and zfs are in use, yet that doesn't seem to be an excuse for
> this behavior.
>
> I can read 60MB/s off the raw spindles without much issue.
>
> Yet add geli and I get like 15MB/s, which is completely fine as
> well, except the box gets swamped in system time when doing that.
> And around 11MB/s off geli+zfs, caveat above swamping of course.
>
> And although they perform at about the same MB/s rates, it's the
> bulk writes that seem to thoroughly dispatch the system, far more
> than the reads do. This one really hurts and removes all usability.
>
> Sure, maybe one could set some ancient PIO mode on the [s]ata/scsi
> channels [untested here]. But it seems far less than ideal as users
> commonly mix raw and geli+zfs partitions on the same set of spindles.
>
> Is there a description of the underlying issue available?
>
> And unless I'm missing[?] something like an already existing insertable
> geom rate limit, or a way to renice kernel processes... is it right
> to say that FreeBSD needs these options and/or some equivalent work
> in this area?
>
> As I'm without an empty raw disk right now, I can only write to zfs
> and thus still have yet to test with writes to spindle and geli.
> Regardless, perhaps the proper solution lies with the right sort
> of future knob as in the previous paragraph?
> _______________________________________________
> freebsd-performance@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-performance
> To unsubscribe, send any mail to "freebsd-performance-unsubscribe@freebsd.org"
>


