From owner-freebsd-hackers Fri Nov 17 21:41:54 1995
Return-Path: owner-hackers
Received: (from root@localhost) by freefall.freebsd.org (8.6.12/8.6.6)
          id VAA25082 for hackers-outgoing; Fri, 17 Nov 1995 21:41:54 -0800
Received: from hq.icb.chel.su (icb-rich-gw.icb.chel.su [193.125.10.34])
          by freefall.freebsd.org (8.6.12/8.6.6) with ESMTP id VAA25077
          for ; Fri, 17 Nov 1995 21:41:47 -0800
Received: from localhost (babkin@localhost) by hq.icb.chel.su (8.6.5/8.6.5)
          id KAA09756; Sat, 18 Nov 1995 10:45:08 +0500
From: "Serge A. Babkin"
Message-Id: <199511180545.KAA09756@hq.icb.chel.su>
Subject: Re: NFS benchmarking: iosize granularity?
To: terry@lambert.org (Terry Lambert)
Date: Sat, 18 Nov 1995 10:45:07 +0500 (GMT+0500)
Cc: terry@lambert.org, hackers@freebsd.org
In-Reply-To: <199511171729.KAA05706@phaeton.artisoft.com> from "Terry Lambert"
          at Nov 17, 95 10:29:26 am
X-Mailer: ELM [version 2.4 PL23]
Content-Type: text
Content-Length: 3154
Sender: owner-hackers@freebsd.org
Precedence: bulk

[...]

> > Maybe. Actually I have experimented with a SCO client mounting FreeBSD
> > disks with different [rw]size and executing dd with different block
> > sizes. The numbers I got are:
> >
> > [rw]size=8192  dd bs=100k:  150K/s write   705K/s read
> > [rw]size=8192  dd bs=1k:    122K/s write   705K/s read
> > [rw]size=8192  dd bs=512:    28K/s write   688K/s read
> >
> > [rw]size=2048  dd bs=100k:   53K/s write   316K/s read
> > [rw]size=2048  dd bs=1536:   40K/s write
> > [rw]size=2048  dd bs=1234:   33K/s write
> > [rw]size=2048  dd bs=1025:   28K/s write
> > [rw]size=2048  dd bs=1k:     52K/s write   307K/s read
> > [rw]size=2048  dd bs=512:    28K/s write
> >
> > [rw]size=1024  dd bs=100k:   27K/s write   691K/s read
> > [rw]size=1024  dd bs=1k:     27K/s write   690K/s read
> > [rw]size=1024  dd bs=512:    28K/s write   651K/s read
> >
> > With TSoft's DOS client I got 12K/s write and 200K/s read regardless
> > of [rw]size when testing with sysinfo (I'm not sure whether it was
> > Norton's or PC-Tools).
>
> I think you will find that the optimal write size for a NetBIOS-based
> NFS client will be 512 bytes because of the way DOS does file I/O.
>
> For Win95, the optimal size will be 512b, followed closely by 32k
> (32k is highly typical of VFAT).
>
> I would be extremely interested in your SCO/FreeBSD numbers for 4096,
> as opposed to 8192 or 2048.  4096 is the natural page size for both
> systems and would seem the correct and logical choice, barring use of
> NE2000 or similar network cards with an imbalance in read/write
> buffering in the FreeBSD driver itself.

OK:

[rw]size=4096  dd bs=100k:   90K/s write   415K/s read
[rw]size=4096  dd bs=1k:     89K/s write   415K/s read
[rw]size=4096  dd bs=512:    27K/s write   403K/s read

[rw]size=512   dd bs=100k:   14K/s write   143K/s read
[rw]size=512   dd bs=1k:     14K/s write   142K/s read
[rw]size=512   dd bs=512:    14K/s write   139K/s read

These numbers look very much like the DOS ones, huh?

> > The network was Ethernet; all the network cards are 3c509Bs except
> > the one in the SCO server, which is a 3c579. The DOS client was
> > connected through a 3COM TP hub; FreeBSD and SCO are on thin Ethernet.
> >
> > The results look like there is some problem with write requests of
> > 512 bytes size.
>
> What is the FS block and frag size on the boxes?

FreeBSD has bsize=8192, fsize=1024. The SCO box has EAFS and HTFS
filesystems; I cannot get the exact values for them, but it looks like
either fsize=512 and bsize=8192, or fsize=1024 and bsize=16384.
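On the FreeBSD side the values can be read straight out of the
superblock with dumpfs(8); something like the following one-liner does
it (the device name here is only an example, substitute the one your
filesystem lives on):

    # print the block and fragment sizes of a FreeBSD filesystem
    dumpfs /dev/rwd0a | egrep 'bsize|fsize'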
For the write test I have done:

[On FreeBSD]  rm /tmp/xxx
[On SCO]      time dd if=/unix of=/mnt/xxx bs=NNN
     [or]     time dd if=/usr/bin/perl of=/mnt/xxx bs=NNN

I have copied /unix (about 2.6M) for the fast transfers and
/usr/bin/perl (about 500K) for the slow ones. I tried to copy /unix for
a slow transfer too, but the throughput was the same.

For the read test I have done:

[On FreeBSD]  dd if=/dev/zero of=/tmp/xxx bs=100k count=100   [once]
[On SCO]      time dd if=/mnt/xxx of=/dev/null bs=NNN

> It could be that the FS can only do I/O in units of 1k?

Maybe...

Serge Babkin

! (babkin@hq.icb.chel.su)
! Headquarter of Joint Stock Commercial Bank "Chelindbank"
! Chelyabinsk, Russia
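P.S. In case someone wants to rerun the whole matrix, the script below
is roughly the procedure described above wrapped in a loop, to be run
on the SCO client. The server name, export path and mount options are
examples only; the exact NFS mount syntax differs between systems
(e.g. "mount -f NFS -o ..." on SCO, "mount -t nfs -o ..." elsewhere).

    #!/bin/sh
    SERVER=freebsd-box                  # example server name

    for size in 8192 4096 2048 1024 512
    do
        mount -o rsize=$size,wsize=$size $SERVER:/tmp /mnt

        for bs in 100k 1k 512
        do
            rm -f /mnt/xxx              # start each write from scratch
            echo "[rw]size=$size write bs=$bs:"
            time dd if=/unix of=/mnt/xxx bs=$bs
        done

        # For the read test the file is created once on the server
        # (dd if=/dev/zero of=/tmp/xxx bs=100k count=100) so that the
        # data does not come back out of the client's own cache.
        echo "create /tmp/xxx on the server, then press return"
        read junk

        for bs in 100k 1k 512
        do
            echo "[rw]size=$size read bs=$bs:"
            time dd if=/mnt/xxx of=/dev/null bs=$bs
        done

        umount /mnt
    done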