Date: Thu, 20 Sep 2018 15:15:39 +0900
From: KIRIYAMA Kazuhiko <kiri@kx.openedu.org>
To: Rick Macklem <rmacklem@uoguelph.ca>
Cc: KIRIYAMA Kazuhiko <kiri@kx.openedu.org>,
    "Andrey V. Elsukov" <bu7cher@yandex.ru>,
    "freebsd-net@freebsd.org" <freebsd-net@freebsd.org>
Subject: Re: NFS poor performance in ipfw_nat
Message-ID: <201809200615.w8K6FdpU082226@kx.openedu.org>
In-Reply-To: <YTOPR0101MB1820C225901C5716B4265725DD1C0@YTOPR0101MB1820.CANPRD01.PROD.OUTLOOK.COM>
References: <201809172253.w8HMrXSS025987@kx.openedu.org>
    <8315728b-afe9-7631-d2ad-2d9b06c3d72d@yandex.ru>
    <201809190033.w8J0X0J5051781@kx.openedu.org>
At Wed, 19 Sep 2018 13:57:04 +0000,
Rick Macklem wrote:
>
> KIRIYAMA Kazuhiko wrote:
> [good stuff snipped]
> >
> > Thanks for your advice. Adding '-lro' and '-tso' to ifconfig brought
> > the transfer rate up to almost native NIC speed:
> >
> > # dd if=/dev/zero of=/.dake/tmp/foo.img bs=1k count=1m
> > 1048576+0 records in
> > 1048576+0 records out
> > 1073741824 bytes transferred in 10.688162 secs (100460852 bytes/sec)
> > #
> >
> > BTW, in a VM on bhyve, the transfer rate to an NFS mount of the VM
> > server (the bhyve host) is appreciably lower:
> >
> > # dd if=/dev/zero of=/.dake/tmp/foo.img bs=1k count=1m
> > 1048576+0 records in
> > 1048576+0 records out
> > 1073741824 bytes transferred in 32.094448 secs (33455687 bytes/sec)
> >
> > This was limited by disk transfer speed:
> >
> > # dd if=/dev/zero of=/var/tmp/foo.img bs=1k count=1m
> > 1048576+0 records in
> > 1048576+0 records out
> > 1073741824 bytes transferred in 21.692358 secs (49498623 bytes/sec)
> > #
> It sounds like this is resolved, thanks to Andrey.

I'm surprised that the disk transfer speed is slower than the network
transfer speed. Incidentally, for my laptop PC's eMMC:

# dd if=/dev/zero of=/var/tmp/foo.img bs=1k count=1m
1048576+0 records in
1048576+0 records out
1073741824 bytes transferred in 30.276720 secs (35464271 bytes/sec)
#

and for my bhyve hypervisor's RAID-Z3 pool:

# dd if=/dev/zero of=/var/tmp/foo.img bs=1k count=1m
1048576+0 records in
1048576+0 records out
1073741824 bytes transferred in 24.832563 secs (43239267 bytes/sec)
#

The HDDs slightly prevailed over the eMMC ;-p

> If you have more problems like this, another thing to try is reducing
> the I/O size with mount options at the client.
> For example, you might try adding "rsize=4096,wsize=4096" to your mount
> and then increase the size by powers of 2 (8192, 16384, 32768) and see
> which size works best. (This is another way to work around TSO
> problems. It also helps when a net interface or packet filter can't
> keep up with a burst of 40+ ethernet packets, which is what is
> generated when 64K I/O is used.)
>
> Btw, doing "nfsstat -m" on the client will show you what mount options
> are actually being used. This can be useful information.
>
> Good to hear it has been resolved, rick
> [more stuff snipped]

---
KIRIYAMA Kazuhiko
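
For the record, the LRO/TSO workaround discussed above amounts to
something like the following on the client; em0 is only a placeholder
for whatever interface carries the NFS traffic:

# ifconfig em0 -lro -tso

To keep the flags across reboots, they can be appended to the
interface's line in /etc/rc.conf (the address below is a placeholder):

ifconfig_em0="inet 192.0.2.10/24 -lro -tso"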
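
And a sketch of Rick's rsize/wsize suggestion; the server name and
export path (nfssrv:/export) are made-up examples, not from this
thread:

# mount -t nfs -o rsize=4096,wsize=4096 nfssrv:/export /mnt

or, as an /etc/fstab entry:

nfssrv:/export  /mnt  nfs  rw,rsize=4096,wsize=4096  0  0

Afterwards, "nfsstat -m" on the client reports the sizes actually in
use, which may differ from what was requested if the server caps them.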