From owner-cvs-all Wed Jan 28 08:36:37 1998
Return-Path:
Received: (from majordom@localhost) by hub.freebsd.org (8.8.8/8.8.8) id IAA23813 for cvs-all-outgoing; Wed, 28 Jan 1998 08:36:37 -0800 (PST) (envelope-from owner-cvs-all@FreeBSD.ORG)
Received: from godzilla.zeta.org.au (godzilla.zeta.org.au [203.2.228.19]) by hub.freebsd.org (8.8.8/8.8.8) with ESMTP id IAA23766; Wed, 28 Jan 1998 08:36:17 -0800 (PST) (envelope-from bde@godzilla.zeta.org.au)
Received: (from bde@localhost) by godzilla.zeta.org.au (8.8.7/8.8.7) id DAA13107; Thu, 29 Jan 1998 03:35:00 +1100
Date: Thu, 29 Jan 1998 03:35:00 +1100
From: Bruce Evans
Message-Id: <199801281635.DAA13107@godzilla.zeta.org.au>
To: bde@zeta.org.au, mike@smith.net.au
Subject: Re: cvs commit: src/sys/i386/isa wfd.c
Cc: cvs-all@FreeBSD.ORG, cvs-committers@FreeBSD.ORG, cvs-sys@FreeBSD.ORG, msmith@FreeBSD.ORG
Sender: owner-cvs-all@FreeBSD.ORG
Precedence: bulk

>> >  Modified files:
>> >    sys/i386/isa         wfd.c
>> >  Log:
>> >  Fix operation with the Iomega Zip 100 ATAPI.
>> >  All known versions of this drive (firmware 21.* and 23.*) will lock up
>> >  if presented with a read/write request of > 64 blocks.  In the presence
>> >  of such a unit, I/O requests of > 64 blocks are fragmented to avoid
>> >  this.
>> >
>> >  Revision  Changes    Path
>> >  1.3       +92 -13    src/sys/i386/isa/wfd.c
>>
>> You could simply reject such transfers.
>
>And then what happens to them?  I spent some time trying to understand

Nothing good.  The correct behaviour seems to be to reduce the count
(b_bcount) to whatever can be handled and continue, like all (?) SCSI
drivers do.

I doubt that this actually works except for raw i/o.  Filesystem
blocksizes and cluster sizes have been limited to 64K until recently,
and all (?) SCSI adaptors can handle that much, so the reduction has
probably only been tested for raw i/o.

It works as follows: first physio() calls a function called minphys()
which limits the size to 64K.
Then it loops, calling the driver strategy function until all i/o has
been handled.  The driver strategy function may reduce the count to
whatever it wants.  There are a couple more layers of minphys()-like
functions in the SCSI drivers.

>> The new d_maxio element in
>> struct bdevsw should limit clustering and allow physio() to do the deblocking
>> for raw i/o.
>
>Nobody seems to call physio.  And d_maxio isn't present in 2.2.*, while

All character devices by default call it indirectly via rawread() and
rawwrite().  This is mostly automagic, but doesn't handle per-device
limits well.  The new d_maxio field is per-device-driver, so it is not
as flexible as the per-adaptor SCSI minphys().  OTOH, reducing the
count after a request has been built is not good.  Higher layers should
know about the limit so that they can build a smaller request.  This
could be handled using a function for d_maxio.

>it was important that the fix be backportable.  If d_maxio actually
>works (I recall John's commit implying that it wasn't completely done
>yet), then it would make sense to shift to using that for -current once
>we know that the fragmentation approach works.

I think it works for wd, but many drivers are missing support for it.
It only takes one line per driver:

	xx_bdevsw.d_maxio = ;

immediately after xx_bdevsw is initialized.

>I actually thought I was on a winner setting D_NOCLUSTERRW in the
>bdevsw, but the msdosfs code reads in MAXPHYS slabs (I assume for its
>in-core copy of the FAT).  I can assure you that if I thought there was
>a portable way to restrict the I/O size at a higher level I would take
>it.

msdosfs actually uses MAXBSIZE, which is now different from MAXPHYS.
It should use DFLTBSIZE.  MAXBSIZE is abused in several places that
just need a "large" size.  This was good when MAXBSIZE was 16K.  When
MAXBSIZE became 64K, many buffers became oversized.  The FAT buffer is
one.  The main wastage is probably in stdio buffers for slow cdevs.
st_blksize is MAXBSIZE for most (all?) cdevs, and stdio naively
believes that this is a good size for i/o.

Bruce