Date: Mon, 3 Aug 1998 17:26:02 +1000
From: Bruce Evans
Message-Id: <199808030726.RAA16337@godzilla.zeta.org.au>
To: current@FreeBSD.ORG, rock@cs.uni-sb.de
Subject: Re: IO performance (UFS read clustering), bad ZIP drive performance

>I get unacceptable performance from my SCSI ZIP drive (compared to other
>operating systems). While UFS writing is OK (up to the capabilities of the
>drive: 600-1000 kB/s), everything else (UFS read, MSDOSFS r/w, mtools)
>is extremely slow (down to 80 kB/s).

ufs clustered reads have been broken (except on wd drives) since rev.1.18
(1998/01/24) of ufs_bmap.c.  SCSI ZIPs have a huge command overhead (20 msec
here on an ncr 53c810) and don't seem to support tags, so performance
without clustering is poor.  Fix:

diff -c2 ufs_bmap.c~ ufs_bmap.c
*** ufs_bmap.c~	Mon Jul 6 14:07:01 1998
--- ufs_bmap.c	Mon Aug 3 16:08:26 1998
***************
*** 164,168 ****
  	}
  
! 	if (maxrun == 0) {
  		vp->v_maxio = DFLTPHYS;
  		maxrun = DFLTPHYS / blksize;
--- 164,168 ----
  	}
  
! 	if (maxrun <= 0) {
  		vp->v_maxio = DFLTPHYS;
  		maxrun = DFLTPHYS / blksize;

Performance is poor for other normal disk file systems because clustering
is only implemented for ufs (including ffs and ext2fs).  Performance is
poor for specfs since it uses a too-small block size and doesn't implement
clustering.

Since ufs normally uses clustered reads instead of plain read-ahead, even
read-ahead has been broken.  This is probably unimportant on modern drives,
since the drive does the read-ahead.  The breakage may even be an
optimization - O/S read-ahead takes more CPU and may confuse the drive.
Similarly for O/S read clustering except on drives with a large command
overhead - it takes even more CPU and is even more likely to confuse the
drive.  This is probably why the breakage wasn't noticed before.

>Why does the (now simulated?) block device divide I/O into chunks of only 2k?

It has to use a fixed block size for various reasons.  The size seems to be
2K for historical reasons.  2K was a lot of memory 20 years ago.

Bruce
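
A rough back-of-the-envelope model is enough to see why the figures above
come out the way they do.  The sketch below only assumes a fixed per-command
overhead of ~20 msec (the ncr 53c810 figure quoted above), a raw media rate
of about 1000 kB/s (the top of the quoted write range), and 64K clustered
transfers (DFLTPHYS); everything else about the drive and driver is ignored.
Under those assumptions unclustered 2K reads work out to roughly 90 kB/s,
close to the observed 80 kB/s, while 64K clustered reads land in the quoted
600-1000 kB/s range:

/*
 * Back-of-the-envelope throughput model.  Assumptions: a fixed
 * per-command overhead, a fixed media transfer rate, and no other
 * costs; real drives and drivers are messier, so treat this only
 * as a sanity check on the figures quoted above.
 */
#include <stdio.h>

static double
throughput_kbs(double overhead_ms, double xfer_kb, double media_kbs)
{
	/* Time to service one request: command overhead + data transfer. */
	double ms_per_req = overhead_ms + xfer_kb / media_kbs * 1000.0;

	return (xfer_kb / (ms_per_req / 1000.0));
}

int
main(void)
{
	double overhead_ms = 20.0;	/* SCSI ZIP command overhead (above) */
	double media_kbs = 1000.0;	/* assumed raw media rate */

	/* Unclustered 2K reads vs. DFLTPHYS-sized (64K) clustered reads. */
	printf("2K reads:  %.0f kB/s\n",
	    throughput_kbs(overhead_ms, 2.0, media_kbs));
	printf("64K reads: %.0f kB/s\n",
	    throughput_kbs(overhead_ms, 64.0, media_kbs));
	return (0);
}

With the maxrun fix above restoring clustered reads, ufs read throughput
should move from the unclustered figure toward the clustered one.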