Date:      Wed, 3 Jul 2019 13:34:58 +0200
From:      David Demelier <markand@malikania.fr>
To:        freebsd-questions@freebsd.org
Subject:   extremely slow disk I/O after updating to 12.0
Message-ID:  <b76b814f-e440-67a9-5424-4e7c5d03d5ca@malikania.fr>

Hello folks,

I've upgraded one of my servers to 12.0-RELEASE. Since then I'm seeing 
extremely slow I/O write performance.

I have a cron job that creates a tar archive during the night; it 
usually takes something like 2-3 hours to complete. Now it still hasn't 
finished after 12 hours, and the server feels much slower overall, for 
example when editing files with vim.

The bsdtar process spends most of its time in the zio->io_cv state 
according to top, but it is not deadlocked, since the archive keeps 
growing (at a ridiculous rate though, something like 4 MB per hour!).

Procstat on the process shows this:

# procstat -kk 46385
   PID    TID COMM                TDNAME              KSTACK
46385 101443 bsdtar              -                   mi_switch+0xe1
        sleepq_wait+0x2c _cv_wait+0x152 zio_wait+0x9b
        dmu_buf_hold_array_by_dnode+0x2ec dmu_read_uio_dnode+0x37
        dmu_read_uio_dbuf+0x3b zfs_freebsd_read+0x2d3 VOP_READ_APV+0x78
        vn_read+0x195 vn_io_fault_doio+0x43 vn_io_fault1+0x161 vn_io_fault+0x195
        dofileread+0x95 sys_read+0xc3 amd64_syscall+0x369 fast_syscall_common+0x101
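
To get a better idea of where the time goes, I suppose I could also 
watch per-device activity while the job runs, something like this 
(using the pool name "tank" as shown below):

# zpool iostat -v tank 1
# gstat -p

to check whether one of the disks is noticeably slower than the other.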

zpool status indicates that the block size is non-native and that I 
should expect reduced performance. But a degradation this severe is 
surprising. Can someone confirm that the block size mismatch alone can 
explain it?

# zpool status
   pool: tank
  state: ONLINE
status: One or more devices are configured to use a non-native block size.
         Expect reduced performance.
action: Replace affected devices with devices that support the
         configured block size, or migrate data to a properly configured
         pool.
   scan: none requested
config:

         NAME          STATE     READ WRITE CKSUM
         tank          ONLINE       0     0     0
           raidz1-0    ONLINE       0     0     0
             gpt/zfs0  ONLINE       0     0     0  block size: 512B configured, 4096B native
             gpt/zfs1  ONLINE       0     0     0  block size: 512B configured, 4096B native

errors: No known data errors
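
If it is useful, my understanding is that zdb can confirm the ashift 
the vdev was created with (pool name "tank" as above; I have not 
double-checked this command on 12.0):

# zdb -C tank | grep ashift

I would expect it to report ashift: 9, i.e. 512-byte blocks, which 
matches what zpool status says.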



According to some googling, I have to rebuild the pool to change the 
block size. However, there are not many articles about this, so I'm a 
bit afraid of doing it. The zfs0 and zfs1 devices are in a raidz.
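
From what I understand (please correct me if I'm wrong), the block 
size (ashift) is fixed when a vdev is created and cannot be changed in 
place, so the fix would be to back everything up, destroy the pool, 
and recreate it after telling ZFS to use 4K blocks, roughly like this 
(device names taken from the gpt labels above):

# sysctl vfs.zfs.min_auto_ashift=12
# zpool destroy tank
# zpool create tank raidz gpt/zfs0 gpt/zfs1

... and then restore the data from backup.

Is that the right approach, or is there a gentler way?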

Any help is very welcome.

Kind regards,

-- 
David


