Date:      Sat, 17 Sep 2011 16:02:21 +0400
From:      Lytochkin Boris <lytboris@gmail.com>
To:        freebsd-fs@freebsd.org
Subject:   [ZFS] starving reads while idle disks
Message-ID:  <CAEJYa-Si+4Tj5sj8fuxWfqjZgMX1cB8y=JWqJqe_F+R4Er9g_A@mail.gmail.com>

Hi.

My ZFS RAID10 pool shows surprisingly poor read performance. Although it
is built on 18 15k SAS drives, it delivers 1-2 MB/s or less on
sequential reads (tar, rsync).

While running `tar cf - . >/dev/null', `vmstat -i' shows ~200
interrupts per second for mpt. If I start a scrub on the pool, the
interrupt rate jumps to ~5k per second, the scrub runs at 12 MB/s, and
the disks' busy percentage rises to 100%. No warnings appear in the
logs in either case.
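
The interrupt rate can be watched with something like the one-liner
below (assuming the interrupt lines are labelled "mpt"; `vmstat -i'
prints cumulative totals plus an average-since-boot rate, so diff the
totals between samples):

>while :; do vmstat -i | grep mpt; sleep 1; done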

This is 8.2-STABLE built on Jun 9, 2011; a system built on Sep 6, 2011
behaves the same way.

tar sits in the `zio->io_cv' wait channel the whole time.

This read speed is far too low for such hardware. I have spent some
time trying to dig into and fix this, but so far with no luck, so
debugging and tuning suggestions are very welcome.
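
One detail that may matter: prefetch is currently disabled here
(vfs.zfs.prefetch_disable=1 in the sysctl dump below), which can hurt
sequential reads. The minimal experiment would be to turn it back on,
assuming the sysctl is writable at runtime on 8.2 (otherwise it is a
loader tunable and needs /boot/loader.conf plus a reboot):

>sysctl vfs.zfs.prefetch_disable=0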

>zpool status
  pool: tank
 state: ONLINE
 scan: scrub canceled on Sat Sep 17 13:04:53 2011
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            da0     ONLINE       0     0     0
            da1     ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            da2     ONLINE       0     0     0
            da3     ONLINE       0     0     0
          mirror-2  ONLINE       0     0     0
            da4     ONLINE       0     0     0
            da12    ONLINE       0     0     0
          mirror-3  ONLINE       0     0     0
            da5     ONLINE       0     0     0
            da13    ONLINE       0     0     0
          mirror-4  ONLINE       0     0     0
            da6     ONLINE       0     0     0
            da14    ONLINE       0     0     0
          mirror-5  ONLINE       0     0     0
            da7     ONLINE       0     0     0
            da15    ONLINE       0     0     0
          mirror-6  ONLINE       0     0     0
            da8     ONLINE       0     0     0
            da16    ONLINE       0     0     0
          mirror-7  ONLINE       0     0     0
            da9     ONLINE       0     0     0
            da17    ONLINE       0     0     0
          mirror-8  ONLINE       0     0     0
            da10    ONLINE       0     0     0
            da18    ONLINE       0     0     0


Running:
>tar cf - . >/dev/null
load: 0.16  cmd: bsdtar 6360 [zio->io_cv)] 0.88r 0.00u 0.05s 0% 2492k
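
For completeness, a kernel stack of the stuck process should show
exactly where it is waiting; on 8.x `procstat' can grab one (PID taken
from the load line above):

>procstat -kk 6360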


iostat:
device     r/s   w/s    kr/s    kw/s wait svc_t  %b  us ni sy in id
da0       19.9   0.0   201.5     0.0    0   3.5   6   0  0  0  0 100
da1       17.9   0.0   185.6     0.0    0   7.5   9
da2       20.9   0.0   202.5     0.0    0   3.9   5
da3       34.8   0.0   289.5     0.0    0   4.2   7
da4        1.0   0.0    13.9     0.0    0   8.4   1
da5       17.9   0.0     9.0     0.0    0   0.8   1
da6       59.7   0.0   576.6     0.0    0   4.8  11
da7       99.5   0.0  1235.8     0.0    0   7.2  32
da8       38.8   0.0   404.5     0.0    0   4.3   7
da9       12.9   0.0   157.2     0.0    3   5.6   6
da10      32.8   0.0   425.4     0.0    0   4.8   8
da12      20.9   0.0    37.8     0.0    0   2.1   4
da13       2.0   0.0    27.9     0.0    0   1.8   0
da14      55.7   0.0   463.7     0.0    0   4.2  11
da15     114.4   0.0  1439.7     0.0    0   7.9  34
da16      38.8   0.0   344.3     0.0    0   5.0   9
da17      51.7   0.0   395.0     0.0    2   5.0  12
da18      62.7   0.0   632.3     0.0    0   5.0  14


ZFS-related sysctls:
vfs.zfs.l2c_only_size: 0
vfs.zfs.mfu_ghost_data_lsize: 26866176
vfs.zfs.mfu_ghost_metadata_lsize: 523702272
vfs.zfs.mfu_ghost_size: 550568448
vfs.zfs.mfu_data_lsize: 2825181184
vfs.zfs.mfu_metadata_lsize: 158866432
vfs.zfs.mfu_size: 3115283456
vfs.zfs.mru_ghost_data_lsize: 91758592
vfs.zfs.mru_ghost_metadata_lsize: 51310592
vfs.zfs.mru_ghost_size: 143069184
vfs.zfs.mru_data_lsize: 10066080768
vfs.zfs.mru_metadata_lsize: 915701760
vfs.zfs.mru_size: 23198790656
vfs.zfs.anon_data_lsize: 0
vfs.zfs.anon_metadata_lsize: 0
vfs.zfs.anon_size: 7094272
vfs.zfs.l2arc_norw: 1
vfs.zfs.l2arc_feed_again: 1
vfs.zfs.l2arc_noprefetch: 1
vfs.zfs.l2arc_feed_min_ms: 200
vfs.zfs.l2arc_feed_secs: 1
vfs.zfs.l2arc_headroom: 2
vfs.zfs.l2arc_write_boost: 8388608
vfs.zfs.l2arc_write_max: 8388608
vfs.zfs.arc_meta_limit: 24730226688
vfs.zfs.arc_meta_used: 15541743536
vfs.zfs.arc_min: 12365113344
vfs.zfs.arc_max: 98920906752
vfs.zfs.dedup.prefetch: 1
vfs.zfs.mdcomp_disable: 0
vfs.zfs.write_limit_override: 0
vfs.zfs.write_limit_inflated: 309157871616
vfs.zfs.write_limit_max: 12881577984
vfs.zfs.write_limit_min: 33554432
vfs.zfs.write_limit_shift: 3
vfs.zfs.no_write_throttle: 1
vfs.zfs.zfetch.array_rd_sz: 1048576
vfs.zfs.zfetch.block_cap: 256
vfs.zfs.zfetch.min_sec_reap: 2
vfs.zfs.zfetch.max_streams: 8
vfs.zfs.prefetch_disable: 1
vfs.zfs.check_hostid: 1
vfs.zfs.recover: 0
vfs.zfs.txg.synctime_ms: 1000
vfs.zfs.txg.timeout: 5
vfs.zfs.scrub_limit: 10
vfs.zfs.vdev.cache.bshift: 16
vfs.zfs.vdev.cache.size: 10485760
vfs.zfs.vdev.cache.max: 16384
vfs.zfs.vdev.write_gap_limit: 4096
vfs.zfs.vdev.read_gap_limit: 32768
vfs.zfs.vdev.aggregation_limit: 131072
vfs.zfs.vdev.ramp_rate: 2
vfs.zfs.vdev.time_shift: 6
vfs.zfs.vdev.min_pending: 4
vfs.zfs.vdev.max_pending: 96
vfs.zfs.vdev.bio_flush_disable: 0
vfs.zfs.cache_flush_disable: 0
vfs.zfs.zil_replay_disable: 0
vfs.zfs.zio.use_uma: 0
vfs.zfs.version.zpl: 5
vfs.zfs.version.spa: 28
vfs.zfs.version.acl: 1
vfs.zfs.debug: 0
vfs.zfs.super_owner: 0
kstat.zfs.misc.xuio_stats.onloan_read_buf: 0
kstat.zfs.misc.xuio_stats.onloan_write_buf: 0
kstat.zfs.misc.xuio_stats.read_buf_copied: 0
kstat.zfs.misc.xuio_stats.read_buf_nocopy: 0
kstat.zfs.misc.xuio_stats.write_buf_copied: 0
kstat.zfs.misc.xuio_stats.write_buf_nocopy: 28044
kstat.zfs.misc.zfetchstats.hits: 17756
kstat.zfs.misc.zfetchstats.misses: 950
kstat.zfs.misc.zfetchstats.colinear_hits: 0
kstat.zfs.misc.zfetchstats.colinear_misses: 950
kstat.zfs.misc.zfetchstats.stride_hits: 17329
kstat.zfs.misc.zfetchstats.stride_misses: 0
kstat.zfs.misc.zfetchstats.reclaim_successes: 0
kstat.zfs.misc.zfetchstats.reclaim_failures: 950
kstat.zfs.misc.zfetchstats.streams_resets: 1
kstat.zfs.misc.zfetchstats.streams_noresets: 427
kstat.zfs.misc.zfetchstats.bogus_streams: 0
kstat.zfs.misc.arcstats.hits: 152508694
kstat.zfs.misc.arcstats.misses: 2951255
kstat.zfs.misc.arcstats.demand_data_hits: 25279174
kstat.zfs.misc.arcstats.demand_data_misses: 2001333
kstat.zfs.misc.arcstats.demand_metadata_hits: 127229278
kstat.zfs.misc.arcstats.demand_metadata_misses: 946992
kstat.zfs.misc.arcstats.prefetch_data_hits: 0
kstat.zfs.misc.arcstats.prefetch_data_misses: 14
kstat.zfs.misc.arcstats.prefetch_metadata_hits: 242
kstat.zfs.misc.arcstats.prefetch_metadata_misses: 2916
kstat.zfs.misc.arcstats.mru_hits: 13748947
kstat.zfs.misc.arcstats.mru_ghost_hits: 0
kstat.zfs.misc.arcstats.mfu_hits: 138759522
kstat.zfs.misc.arcstats.mfu_ghost_hits: 0
kstat.zfs.misc.arcstats.allocated: 9532012
kstat.zfs.misc.arcstats.deleted: 10
kstat.zfs.misc.arcstats.stolen: 0
kstat.zfs.misc.arcstats.recycle_miss: 0
kstat.zfs.misc.arcstats.mutex_miss: 0
kstat.zfs.misc.arcstats.evict_skip: 0
kstat.zfs.misc.arcstats.evict_l2_cached: 0
kstat.zfs.misc.arcstats.evict_l2_eligible: 0
kstat.zfs.misc.arcstats.evict_l2_ineligible: 2048
kstat.zfs.misc.arcstats.hash_elements: 2959306
kstat.zfs.misc.arcstats.hash_elements_max: 2960693
kstat.zfs.misc.arcstats.hash_collisions: 16094630
kstat.zfs.misc.arcstats.hash_chains: 864066
kstat.zfs.misc.arcstats.hash_chain_max: 11
kstat.zfs.misc.arcstats.p: 49460453376
kstat.zfs.misc.arcstats.c: 98920906752
kstat.zfs.misc.arcstats.c_min: 12365113344
kstat.zfs.misc.arcstats.c_max: 98920906752
kstat.zfs.misc.arcstats.size: 28440168752
kstat.zfs.misc.arcstats.hdr_size: 967688072
kstat.zfs.misc.arcstats.data_size: 26321250304
kstat.zfs.misc.arcstats.other_size: 1151230376
kstat.zfs.misc.arcstats.l2_hits: 0
kstat.zfs.misc.arcstats.l2_misses: 0
kstat.zfs.misc.arcstats.l2_feeds: 0
kstat.zfs.misc.arcstats.l2_rw_clash: 0
kstat.zfs.misc.arcstats.l2_read_bytes: 0
kstat.zfs.misc.arcstats.l2_write_bytes: 0
kstat.zfs.misc.arcstats.l2_writes_sent: 0
kstat.zfs.misc.arcstats.l2_writes_done: 0
kstat.zfs.misc.arcstats.l2_writes_error: 0
kstat.zfs.misc.arcstats.l2_writes_hdr_miss: 0
kstat.zfs.misc.arcstats.l2_evict_lock_retry: 0
kstat.zfs.misc.arcstats.l2_evict_reading: 0
kstat.zfs.misc.arcstats.l2_free_on_write: 0
kstat.zfs.misc.arcstats.l2_abort_lowmem: 0
kstat.zfs.misc.arcstats.l2_cksum_bad: 0
kstat.zfs.misc.arcstats.l2_io_error: 0
kstat.zfs.misc.arcstats.l2_size: 0
kstat.zfs.misc.arcstats.l2_hdr_size: 0
kstat.zfs.misc.arcstats.memory_throttle_count: 0
kstat.zfs.misc.arcstats.l2_write_trylock_fail: 0
kstat.zfs.misc.arcstats.l2_write_passed_headroom: 0
kstat.zfs.misc.arcstats.l2_write_spa_mismatch: 0
kstat.zfs.misc.arcstats.l2_write_in_l2: 0
kstat.zfs.misc.arcstats.l2_write_io_in_progress: 0
kstat.zfs.misc.arcstats.l2_write_not_cacheable: 1
kstat.zfs.misc.arcstats.l2_write_full: 0
kstat.zfs.misc.arcstats.l2_write_buffer_iter: 0
kstat.zfs.misc.arcstats.l2_write_pios: 0
kstat.zfs.misc.arcstats.l2_write_buffer_bytes_scanned: 0
kstat.zfs.misc.arcstats.l2_write_buffer_list_iter: 0
kstat.zfs.misc.arcstats.l2_write_buffer_list_null_iter: 0
kstat.zfs.misc.vdev_cache_stats.delegations: 32536
kstat.zfs.misc.vdev_cache_stats.hits: 508797
kstat.zfs.misc.vdev_cache_stats.misses: 267216


