Date:      Wed, 13 Jan 2010 15:39:16 -0600
From:      "Doug Poland" <doug@polands.org>
To:        "Ivan Voras" <ivoras@freebsd.org>
Cc:        freebsd-questions@freebsd.org
Subject:   Re: 8.0-R-p2 ZFS: unixbench causing kmem exhaustion panic
Message-ID:  <3aa09fd8723749d1fa65f1b9a6faac60.squirrel@email.polands.org>
In-Reply-To: <9bbcef731001131157h256c4d14mbb241bc4326405f8@mail.gmail.com>
References:  <8418112cdfada93d83ca0cb5307c1d21.squirrel@email.polands.org> <hil1e8$ebs$1@ger.gmane.org> <b78f9b16683331ad0f574ecfc1b7f995.squirrel@email.polands.org> <9bbcef731001131035x604cdea1t81b14589cb10ad25@mail.gmail.com> <b41ca31fbeacf104143509e8cba2fe66.squirrel@email.polands.org> <9bbcef731001131157h256c4d14mbb241bc4326405f8@mail.gmail.com>


On Wed, January 13, 2010 13:57, Ivan Voras wrote:
> 2010/1/13 Doug Poland <doug@polands.org>:
>>
>
> Can you monitor and record kstat.zfs.misc.arcstats.size sysctl while
> the test is running (and crashing)?
>
> This looks curious - your kmem_max is ~1.2 GB, arc_max is 0.5 GB, and
> you are still having panics. Is there anything unusual about your
> system? Like an unusually slow CPU, or unusually fast or slow drives?
>
> I don't have any ideas smarter than reducing arc_max by half, then
> trying again, and continuing to reduce it until it works. It would be
> very helpful if you could monitor the kstat.zfs.misc.arcstats.size
> sysctl while you are doing the tests, to document what is happening
> to the system. If it by any chance stays the same, you should
> probably monitor "vmstat -m".
>
>
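For reference, a sketch of the sort of polling loop that captures those counters while the benchmark runs (the helper name, interval, and log path are illustrative, not from the thread):

```shell
#!/bin/sh
# Sketch: sample the ZFS ARC size and the opensolaris malloc totals
# once per second while unixbench runs, so the last values before a
# panic survive in the log. Names and interval are illustrative.

sample() {
    # ARC size in bytes, then the solaris zone line(s) from vmstat -m
    printf '%s arc=%s ' "$(date +%T)" \
        "$(sysctl -n kstat.zfs.misc.arcstats.size)"
    vmstat -m | grep solaris
}

# Run with "run" as the first argument, e.g.:
#   ./arcmon.sh run >> /var/tmp/arcstats.log &
if [ "${1:-}" = "run" ]; then
    while :; do
        sample
        sleep 1
    done
fi
```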
OK, I re-ran with the same config, but this time monitoring the sysctls
you requested* (and the rest I was watching):

panic: kmem_malloc(131072): kmem_map too small: 1292869632 total allocated
cpuid = 0

* kstat.zfs.misc.arcstats.size: 166228176
  vfs.numvnodes: 2848
  vfs.zfs.arc_max: 536870912
  vfs.zfs.arc_meta_limit: 134217728
  vfs.zfs.arc_meta_used: 132890832
  vfs.zfs.arc_min: 67108864
  vfs.zfs.cache_flush_disable: 0
  vfs.zfs.debug: 0
  vfs.zfs.mdcomp_disable: 0
  vfs.zfs.prefetch_disable: 1
  vfs.zfs.recover: 0
  vfs.zfs.scrub_limit: 10
  vfs.zfs.super_owner: 0
  vfs.zfs.txg.synctime: 5
  vfs.zfs.txg.timeout: 30
  vfs.zfs.vdev.aggregation_limit: 131072
  vfs.zfs.vdev.cache.bshift: 16
  vfs.zfs.vdev.cache.max: 16384
  vfs.zfs.vdev.cache.size: 10485760
  vfs.zfs.vdev.max_pending: 35
  vfs.zfs.vdev.min_pending: 4
  vfs.zfs.vdev.ramp_rate: 2
  vfs.zfs.vdev.time_shift: 6
  vfs.zfs.version.acl: 1
  vfs.zfs.version.dmu_backup_header: 2
  vfs.zfs.version.dmu_backup_stream: 1
  vfs.zfs.version.spa: 13
  vfs.zfs.version.vdev_boot: 1
  vfs.zfs.version.zpl: 3
  vfs.zfs.zfetch.array_rd_sz: 1048576
  vfs.zfs.zfetch.block_cap: 256
  vfs.zfs.zfetch.max_streams: 8
  vfs.zfs.zfetch.min_sec_reap: 2
  vfs.zfs.zil_disable: 0
  vm.kmem_size: 1327202304
  vm.kmem_size_max: 329853485875
  vm.kmem_size_min: 0
  vm.kmem_size_scale: 3
* vmstat -m | grep solaris: 1496232960
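
For the next run, halving arc_max as suggested is a boot-time tunable on
8.0, so it goes in /boot/loader.conf and takes effect after a reboot (the
256M value below is just half of the current 536870912-byte setting):

```
# /boot/loader.conf -- illustrative value: half the current arc_max
vfs.zfs.arc_max="256M"
```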



-- 
Regards,
Doug



