Date:      Sat, 23 Aug 1997 11:21:18 +0200
From:      Stefan Esser <se@FreeBSD.ORG>
To:        Simon Shapiro <Shimon@i-connect.net>
Cc:        Terry Lambert <terry@lambert.org>, tom@sdf.com, hackers@FreeBSD.ORG, mrcpu@cdsnet.net, nate@mt.sri.com, Satoshi Asami <asami@cs.berkeley.edu>
Subject:   Re: Final request for help with release.  (DPT boot floppy)
Message-ID:  <19970823112118.12676@mi.uni-koeln.de>
In-Reply-To: <XFMail.970822114746.Shimon@i-Connect.Net>; from Simon Shapiro on Fri, Aug 22, 1997 at 11:47:46AM -0700
References:  <199708221807.LAA26407@phaeton.artisoft.com> <XFMail.970822114746.Shimon@i-Connect.Net>

On Aug 22, Simon Shapiro <Shimon@i-connect.net> wrote:
> You are right.  We had here a terrible time with PCI bridges.  Some will
> deliver interrupts before completion of DMA, some will lose interrupts,
> some will corrupt DMA transfers.  Real party.  Not only Intel but (for
> sure) the older DEC bridge (something with 50 in the name, vs. 51).

Well, the complications caused by PCI bridges
had not really been understood in all detail
when the first bridges appeared. Any PPB that
does not conform to PCI 2.1 can be expected to
be sensitive to certain situations and to
cause a lockup or data corruption. (Have a
look at pages 99, 243 and 448 of "PCI System
Architecture", published by Addison-Wesley,
for three deadlock scenarios. I don't remember
where the data corruption problem is explained,
but it is caused by the fact that a PCI device
may issue a "retry" if it can't deliver the
requested data immediately; if there is then
another request from a different bus-master,
that second request may, in a very specific
situation, receive the data meant for the
first one.)
Revision 2.1 of the PCI specification contains
that information, too ...

Back to your points about IRQs being delivered
before the end of a DMA:

PCI had certain design objectives and met them.
One of them was the low power consumption of
the bus drivers that is required for single
chip solutions. But the price paid is a limit
of only a few slots per bus segment. The
solution is a tree structure of buses connected
by PCI-to-PCI bridges.

But PCI-to-PCI bridges add latency, which you
can easily measure by doing single memory
accesses in a loop (e.g. reading the same
address over and over), as in the sketch below.
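
Here is a minimal sketch of such a probe. The
physical address is a placeholder, and mapping
it through /dev/mem is just one way to get at
an uncached device register; point it at a
register behind a PPB and then at one on the
host bus, and compare the averages:

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <time.h>
#include <unistd.h>

#define PHYS_ADDR 0xE0000000UL  /* placeholder: physical address of
                                   some device register */
#define NREADS    1000000

int
main(void)
{
	volatile unsigned int *reg;
	clock_t t0, t1;
	int fd, i;

	fd = open("/dev/mem", O_RDONLY);
	if (fd < 0) {
		perror("open /dev/mem");
		return (1);
	}
	reg = mmap(NULL, getpagesize(), PROT_READ, MAP_SHARED,
	    fd, (off_t)PHYS_ADDR);
	if (reg == MAP_FAILED) {
		perror("mmap");
		return (1);
	}

	t0 = clock();
	for (i = 0; i < NREADS; i++)
		(void)*reg;	/* each read crosses every bridge
				   on the path */
	t1 = clock();

	printf("%.1f ns per read (average)\n",
	    (double)(t1 - t0) / CLOCKS_PER_SEC * 1e9 / NREADS);
	return (0);
}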

In order to hide those latencies, write buffers
have been put in all PCI bridges. What you see,
if an IRQ is delivered before the end of the DMA,
is that the transfer has really ended as seen by
the bus-master device: It has sent out all data.

But that data may still be in some FIFO on its 
way to system memory, and there is no coherency
check between those FIFOs and memory or the CPU
cache.

The PCI 2.1 spec requires a device driver to
enforce coherency by reading from any address
mapped into the device that performed the DMA.
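
As a minimal sketch of that rule in driver
code (the register layout and names are made
up for illustration):

#include <stdint.h>

/* Hypothetical register block of the bus-master
   device, mapped uncached into kernel virtual
   memory. */
struct dev_regs {
	uint32_t status;
	uint32_t control;
};

/*
 * Called once the driver believes a DMA transfer
 * has completed. The dummy read cannot finish
 * before the posted writes in all intervening
 * PPBs have been flushed, so afterwards dma_buf
 * really is coherent with what the device wrote.
 */
static void
dma_done(volatile struct dev_regs *regs, const void *dma_buf)
{
	(void)regs->status;	/* flush PPB write buffers */
	(void)dma_buf;		/* now safe to inspect */
}

The value returned by the read is irrelevant;
only the ordering it enforces matters.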

This works because the PPB(s) between the CPU
and the device are required to deliver data in
order. But it only works if the CPU and the
system memory controller are tightly coupled.
Otherwise there might be a different data path
between the device and system memory, with the
ordering rules applied to each path separately.

There is another option that is open to device
designers: the device may read back the last
memory location written before it issues an
interrupt. That read will not complete before
the write buffers of all intervening PPBs have
been flushed.

But a PCI device interrupt handler has to take
shared interrupts into account, and will read
some interrupt status register of its device
as its first action anyway (and will back out
if the interrupt was caused by some other
device). That read effectively forces the DMA
data to have made it into system RAM ...
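
A sketch of such a handler (the status bit and
register names are invented):

#include <stdint.h>

#define INTR_PENDING	0x01U	/* hypothetical "caused by us" bit */

struct dev_regs {
	uint32_t status;
};

/*
 * Returns nonzero if the interrupt was ours. The
 * very first read of the status register both
 * answers that question and, as a side effect,
 * drains the write buffers of every PPB between
 * the device and system memory.
 */
static int
dev_intr(void *arg)
{
	volatile struct dev_regs *regs = arg;
	uint32_t st;

	st = regs->status;		/* claim check + flush */
	if ((st & INTR_PENDING) == 0)
		return (0);		/* not ours, back out */

	/* DMA buffers are coherent from here on */
	return (1);
}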

The DEC 21050 PCI to PCI bridge was one of the
first PPBs in wide use, and it was the device
that made the shortcomings of the PCI 2.0 spec
obvious. Later DEC parts (and recent PPBs from
other manufacturers) are built according to 
revision 2.1 of the PCI specification, and will
not suffer from the defects of the early designs.

There is no (software) way to work around the
design flaws of PCI 2.0 with regard to PCI-to-
PCI bridges. They won't hurt the typical
workstation user, but will become obvious on
servers with multiple bus-master controllers
on separate buses ...

Regards, Stefan


