Date:      Thu, 31 Oct 2013 11:10:09 +0200
From:      Vitalij Satanivskij <satan@ukr.net>
To:        Andriy Gapon <avg@FreeBSD.org>
Cc:        Vitalij Satanivskij <satan@ukr.net>, freebsd-hackers@FreeBSD.org
Subject:   Re: FreeBSD 10.0-BETA1 #8 r256765M spend too  much time in locks
Message-ID:  <20131031091008.GA15005@hell.ukr.net>
In-Reply-To: <526A4306.2060500@FreeBSD.org>
References:  <20131024074826.GA50853@hell.ukr.net> <20131024075023.GA52443@hell.ukr.net> <20131024115519.GA72359@hell.ukr.net> <20131024165218.GA82686@hell.ukr.net> <526A11B2.6090008@FreeBSD.org> <20131025072343.GA31310@hell.ukr.net> <526A4306.2060500@FreeBSD.org>

Andriy Gapon wrote:
AG> on 25/10/2013 10:23 Vitalij Satanivskij said the following:
AG> > 
AG> > 
AG> > http://quad.org.ua/profiling.tgz
AG> > 
AG> > results of both methods
AG> > 
AG> > but pmcstat had too few buffers configured by default, so not all statistics made it into the summary :(
AG> 
AG> From these profiling results alone I do not see pathologies.
AG> It looks like you have a lot of I/O going on[*].
AG> My guess is that the I/O requests are sufficiently small and contiguous, so ZFS
AG> performs a lot of I/O aggregation.  For that it allocates and then frees a lot
AG> of temporary buffers.
AG> And it seems that that's where the locks are greatly contended and CPU is
AG> burned.  Specifically in KVA allocation in vmem_xalloc/vmem_xfree.
AG> 
AG> You can try at least two approaches.
AG> 
AG> 1. Disable I/O aggregation.
AG> See the following knobs:
AG> vfs.zfs.vdev.aggregation_limit: I/O requests are aggregated up to this size
AG> vfs.zfs.vdev.read_gap_limit: Acceptable gap between two reads being aggregated
AG> vfs.zfs.vdev.write_gap_limit: Acceptable gap between two writes being aggregated
AG> 
AG> 2. Try to improve buffer allocation performance by using uma(9) for that.
AG> vfs.zfs.zio.use_uma=1
AG> This is a boot time tunable.
AG> 
AG> Footnotes:
AG> [*] But perhaps there is some pathology that causes all that I/O to happen.  I
AG> can't tell that from the profiling data.  So this could be another thing to try
AG> to check.
AG> 


OK, some new information.

Trying to disable I/O aggregation by setting the sysctls first to smaller values and then to "0" causes very high load.
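
For reference, this was done with runtime sysctls roughly like the following (a minimal sketch; the intermediate value shown is only an assumption, not the exact one used):

    # shrink the aggregation window step by step, then disable it completely
    sysctl vfs.zfs.vdev.aggregation_limit=65536
    sysctl vfs.zfs.vdev.aggregation_limit=0
    # gaps between adjacent reads/writes that may still be bridged by aggregation
    sysctl vfs.zfs.vdev.read_gap_limit=0
    sysctl vfs.zfs.vdev.write_gap_limit=0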

Setting vfs.zfs.zio.use_uma=1 causes a system panic and reboot at startup on FreeBSD 10 BETA1 and BETA2 (no debugging enabled in the kernel).

On FreeBSD 11-CURRENT r257395 with a GENERIC kernel there is no panic, but I have no idea how long the setup and data merge will take on that system.
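
For completeness, the UMA knob was enabled the usual way for a boot-time tunable, in /boot/loader.conf (a sketch; assuming the stock loader configuration):

    # /boot/loader.conf
    vfs.zfs.zio.use_uma=1    # allocate ZIO buffers from uma(9) zones; read at boot only

If the machine panics on boot with this set, the tunable can be overridden from the loader prompt with something like "set vfs.zfs.zio.use_uma=0" before booting.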





