From: David Demelier <markand@malikania.fr>
To: freebsd-questions@freebsd.org
Subject: extremely slow disk I/O after updating to 12.0
Date: Wed, 3 Jul 2019 13:34:58 +0200

Hello folks,

I've upgraded one of my servers to 12.0-RELEASE, and I'm now seeing
extremely slow I/O write performance. A cron job that creates a tar
archive during the night usually takes something like 2-3 hours to
complete; now it still has not finished 12 hours later, and the server
feels much slower overall, for example when editing files with vim.
According to top, the bsdtar process spends most of its time in the
zio->io_cv state, but it is not stuck, as the archive continues to grow
(at a ridiculous speed though, something like 4 MB per hour!).
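In case it is useful, the mismatch can also be checked directly; the
commands below should print the drives' sector/stripe sizes and the
pool's ashift (ada0 here is only a placeholder for whichever device
actually sits behind gpt/zfs0):

# diskinfo -v ada0 | egrep 'sectorsize|stripesize'
# zdb -C tank | grep ashift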
Procstat on the process shows this:

# procstat -kk 46385
  PID    TID COMM    TDNAME  KSTACK
46385 101443 bsdtar  -       mi_switch+0xe1 sleepq_wait+0x2c _cv_wait+0x152
                             zio_wait+0x9b dmu_buf_hold_array_by_dnode+0x2ec
                             dmu_read_uio_dnode+0x37 dmu_read_uio_dbuf+0x3b
                             zfs_freebsd_read+0x2d3 VOP_READ_APV+0x78
                             vn_read+0x195 vn_io_fault_doio+0x43
                             vn_io_fault1+0x161 vn_io_fault+0x195
                             dofileread+0x95 sys_read+0xc3
                             amd64_syscall+0x369 fast_syscall_common+0x101

zpool status indicates that the block size is wrong and that I should
expect reduced performance, but this much degradation is impressive.
Can someone confirm?

# zpool status
  pool: tank
 state: ONLINE
status: One or more devices are configured to use a non-native block size.
        Expect reduced performance.
action: Replace affected devices with devices that support the
        configured block size, or migrate data to a properly configured
        pool.
  scan: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        tank          ONLINE       0     0     0
          raidz1-0    ONLINE       0     0     0
            gpt/zfs0  ONLINE       0     0     0  block size: 512B configured, 4096B native
            gpt/zfs1  ONLINE       0     0     0  block size: 512B configured, 4096B native

errors: No known data errors

According to some googling, I need to recreate the pool to change the
block size, but there are not many articles on the subject, so I'm a
bit afraid of doing it. gpt/zfs0 and gpt/zfs1 are in a raidz; I have
sketched my understanding of the procedure after my signature.

Any help is very welcome.

Kind regards,

-- 
David
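PS: from what I've read, the block size (ashift) of an existing raidz
vdev cannot be changed in place, so the fix seems to be a full rebuild
of the pool. Below is only a sketch of my current understanding,
untested; "backup" is a placeholder for a second pool (or machine)
with enough space to hold everything:

# zfs snapshot -r tank@migrate
# zfs send -R tank@migrate | zfs receive backup/tank
# zpool destroy tank
(make sure new vdevs are created with 4096-byte blocks)
# sysctl vfs.zfs.min_auto_ashift=12
# zpool create tank raidz gpt/zfs0 gpt/zfs1
# zfs send -R backup/tank@migrate | zfs receive -F tank

The vfs.zfs.min_auto_ashift sysctl is what I understand FreeBSD 12's
ZFS uses to pick the block size at pool creation time; please correct
me if that part is wrong.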