Date:      Mon, 6 Jan 1997 13:48:59 -0700 (MST)
From:      Terry Lambert <terry@lambert.org>
To:        andreas@klemm.gtn.com (Andreas Klemm)
Cc:        jfieber@indiana.edu, hackers@freebsd.org
Subject:   Re: New motherboard breaks tape drive
Message-ID:  <199701062048.NAA12457@phaeton.artisoft.com>
In-Reply-To: <Mutt.19970102040636.andreas@klemm.gtn.com> from "Andreas Klemm" at Jan 2, 97 04:06:36 am

> > options	BOUNCE_BUFFERS		#include support for DMA bounce buffers
> 
> You only need this option when using SCSI DMA controllers like the
> AHA 1542B, which can only address 16 MB of address space, and only
> when you have more than 16 MB of system memory.
> 
> It might be the case that this bounce buffering brings your system's
> performance down.

No.

The BOUNCE_BUFFERS option forces the use of bounce buffers even in
cases where you don't need them.  For ISA bus master controllers, the
use of bounce buffers is automatic and *can not be turned off* if you
have more than 16M of memory.
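
As an illustration only (this is not the actual driver code; the names
and the helper are my own), the automatic decision boils down to a
check against the 16M limit that a 24-bit ISA bus master can address:

	#define ISA_DMA_LIMIT	0x1000000UL	/* 16M: top of a 24-bit address */

	/*
	 * Hypothetical sketch: decide whether a transfer must be bounced.
	 * force_bounce corresponds to building with "options BOUNCE_BUFFERS".
	 */
	static int
	must_bounce(unsigned long phys_addr, unsigned long len, int force_bounce)
	{
		if (force_bounce)
			return (1);		/* bounce even when not needed */
		if (phys_addr + len > ISA_DMA_LIMIT)
			return (1);		/* buffer lies (partly) above 16M */
		return (0);
	}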

The case where you need bounce buffers but the machine does not
detect the fact is limited to NiCE EISA motherboards with the HiNT
chipset, which (in violation of the EISA standard) do not decode
the upper address lines for EISA bus master controllers, and then
only if you have more than 16M.
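
To make the failure concrete (the addresses are my own example, not
from the original report): ignoring the upper address lines amounts to
masking the physical address to 24 bits, so a transfer aimed above 16M
silently lands below it:

	/* Illustrative only: effect of a 24-bit-only address decode. */
	unsigned long intended = 0x1200000UL;		/* 18M, above the 16M line */
	unsigned long decoded  = intended & 0xFFFFFFUL;	/* wraps to 0x200000 (2M)  */
	/* The DMA scribbles on memory at 2M instead of 18M. */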

Because the HiNT chipset was so terrible, NiCE went under and was
sold off several times; as a result, there are several vendors with
HiNT chipset systems.  As far as I know, the HiNT chipset is only
present in a few older EISA or EISA/ISA systems.


Typically, you will not need to enable the option, ever.


FreeBSD should really auto-detect this problem by the following
procedure (a rough code sketch follows the list):

1)	allocate a buffer in physical memory below 16M, at an offset
	chosen so that a second buffer can be allocated exactly 16M
	above it; a decode wrap will then map the high buffer onto
	the low buffer.  Ie: if you have 24M, allocate a buffer in
	the first 8M and its twin at the same offset in the 8M above
	16M.
2)	allocate the other buffer above 16M.
3)	Do a read into the lower buffer from a given disk sector
	(sector 0 is good)
4)	XOR the lower buffer with some value
5)	copy the lower buffer to the higher buffer
6)	Do a read into the higher buffer from the same disk sector
7)	Note whether the upper or lower buffer changes; if the
	lower buffer changes, then the chipset is broken (ie: it
	is a HiNT chipset, or a relabelled one from a vendor who
	bought out NiCE)
8)	Turn on bouncing unconditionally if you have a bogus chipset
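
Here is a rough sketch of that test in C, purely illustrative: the
read_sector() callback stands in for whatever bus master DMA entry
point the real detection code would use, and the two buffers are
assumed to be allocated exactly 16M apart, as described in step 1:

	#include <string.h>

	#define SECTOR_SIZE	512

	/* Returns non-zero if the chipset wraps DMA addresses above 16M. */
	static int
	chipset_wraps_dma(unsigned char *low, unsigned char *high,
	    void (*read_sector)(unsigned char *))
	{
		unsigned char saved[SECTOR_SIZE];
		int i;

		read_sector(low);			/* step 3 */
		for (i = 0; i < SECTOR_SIZE; i++)
			low[i] ^= 0xA5;			/* step 4 */
		memcpy(high, low, SECTOR_SIZE);		/* step 5 */
		memcpy(saved, low, SECTOR_SIZE);	/* remember the XORed image */

		read_sector(high);			/* step 6: DMA aimed above 16M */

		/* step 7: a change in the low buffer means the decode wrapped */
		return (memcmp(low, saved, SECTOR_SIZE) != 0);
	}

	/* step 8: if it returns non-zero, turn on bouncing unconditionally. */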


Note: There are similar, but vastly more complicated, algorithms
needed to detect a bad Cyrix (TI mask) processor, which does not
do a cache invalidate/update when bus master DMA modifies data that
is cached.  There are similar problems with VLB machines and with
machines using the Saturn I/Mercury I Intel chipsets, and with 3 or
more PCI bus masters for all Mercury/Saturn/Neptune chipsets of any
vintage...

Another problem is that some VLB cards are detected as if they were
EISA cards.  EISA will always do the cache update (unless the chipset
is bogus... see above), but VLB slots will not generate the cache
update request unless the VLB card is in a master slot.  So a real
detect would separate these drivers into distinct "VLB" and "EISA"
cases (the case in point is the Adaptec VLB controllers, which export
an EISA identification because they use the same chipset on the card),
and would also need to WBINVD the crap out of everything after turning
off processor specific caches, in order to be sure that what's seen is
not just a good cache image of bad data (step #7 may show no buffer
change at all in that case).
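
Purely as a sketch of that last point (the CR0 handling and inline
assembly are mine, not from this thread): on the i386 the test would
disable and flush the caches first, so the comparison sees memory and
not a stale cache line:

	#define CR0_CD	0x40000000UL	/* cache disable bit in CR0 */

	/* Illustrative only: turn the cache off and flush it before the test. */
	static void
	caches_off_and_flush(void)
	{
		unsigned long cr0;

		__asm __volatile("movl %%cr0, %0" : "=r" (cr0));
		cr0 |= CR0_CD;
		__asm __volatile("movl %0, %%cr0" : : "r" (cr0));
		__asm __volatile("wbinvd");	/* write back and invalidate caches */
	}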

Fixing everything in software is possible, but would need a lot of
work to detect the difference between a DMA range wrap error, a cache
bug, and a cache error.  For instance, some processors, like the Cyrix,
do not honor the non-cacheable bit like they should, so you have to
turn caching off, or do a WBINVD after a DMA has completed.  For others,
simply marking all bounce buffers as non-cacheable would suffice.
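
For the second alternative, a minimal sketch (the macro names are mine;
a real kernel does this through its pmap code): the i386 page table
entry has "cache disable" and "write-through" bits that keep the
processor from caching the bounce buffer at all:

	#define PG_PWT	0x008UL		/* PTE bit 3: write-through */
	#define PG_PCD	0x010UL		/* PTE bit 4: page cache disable */

	/* Illustrative only: mark one PTE non-cacheable. */
	static void
	pte_set_noncacheable(unsigned long *pte)
	{
		*pte |= PG_PCD | PG_PWT;
		/* A TLB flush for the affected page would follow here. */
	}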

To do any of these detects, however, would require some hard entry
points into all bus master DMA drivers for use by the detection
routines.
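
Something like the following (entirely hypothetical; no such interface
exists) is what I mean by hard entry points: each bus master DMA driver
would export a small set of operations the detection code could call
before normal attach:

	/* Hypothetical interface; invented here for illustration. */
	struct dma_probe_ops {
		void	*dp_softc;				/* driver instance    */
		int	(*dp_reset)(void *softc);		/* quiesce controller */
		int	(*dp_read_sector)(void *softc,		/* DMA one sector to  */
			    unsigned long phys);		/* a physical address */
	};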

[ ... ]

> But I think BOUNCE_BUFFERS might be the real culprit...

BOUNCE_BUFFERS is probably *not* the culprit; it *could* result in a
slowdown in *certain* very special circumstances, but the ones he
described are not among them.


					Regards,
					Terry Lambert
					terry@lambert.org
---
Any opinions in this posting are my own and not those of my present
or previous employers.


