Date:      Wed, 10 Jan 1996 13:58:55 +0000 ()
From:      Neil Bradley <neil@synthcom.com>
To:        Terry Lambert <terry@lambert.org>
Cc:        Neil Bradley <root@synthcom.com>, terry@lambert.org, hasty@rah.star-gate.com, freebsd-hackers@FreeBSD.org
Subject:   Re: PnP problem...
Message-ID:  <Pine.BSD.3.91.960110133748.3072B-100000@beacon.synthcom.com>
In-Reply-To: <199601101942.MAA15208@phaeton.artisoft.com>


> > On Tue, 9 Jan 1996, Terry Lambert wrote:
> > ISA Interrupts are not shareable - they're edge triggered.
> Except on multiport serial boards, which have additional interrupt
> decode hardware with latches that can be interrogated to determine
> who has interrupt conditions pending.

What I meant was "not shareable between cards" - ;-)
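
For illustration, here's roughly how such a board's handler can fan one
edge-triggered IRQ out over several ports. The status-latch offset and the
helper routines below are invented for the sketch, not taken from any real
board:

/*
 * Minimal sketch of a multiport-board handler sharing one edge-triggered
 * ISA IRQ between several UARTs.  The latch offset, inb() wrapper and
 * sio_service() are hypothetical.
 */
#define NPORTS     4
#define PORT_SPAN  8            /* 16550-style UART: 8 registers per port */
#define MUX_STATUS 0x1f         /* hypothetical board interrupt latch     */

extern unsigned char inb(unsigned short port);          /* assumed I/O accessor */
extern void sio_service(unsigned short uart_base);      /* assumed per-port ISR */

void
mux_intr(unsigned short base)
{
        unsigned char pending;
        int port;

        /*
         * Drain until the latch reads clear: on an edge-triggered line a
         * second event arriving while the first is being serviced produces
         * no new edge, so it would otherwise be lost.
         */
        while ((pending = inb(base + MUX_STATUS)) != 0) {
                for (port = 0; port < NPORTS; port++)
                        if (pending & (1 << port))
                                sio_service(base + port * PORT_SPAN);
        }
}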

> > > It makes sense that you would have one interrupt per card so you don't
> > > run out between card slots and onboard devices... it's stupid that the
> > > GUS doesn't have an interrupt multiplex on board.  You'll have to live

You mean to say that the GUS uses two interrupts for one ISA card? Now 
that is stupid.

> The Intel bridge chips give the ISA priority.  In reality, you want
> to map the ISA bus as a PCI device so you can relocate the "unrelocatable"
> cards on the ISA bus as far as the processor's view of their resource
> apertures is concerned, and demux ISA interrupts onto PCI interrupts.

Unfortunately we don't live in a PC world where Unix is king. Microsoft 
is. And there are a lot of vendors who haven't the foggiest clue how to 
properly coexist with other cards. Doing what you said above would break 
all DOS/Windoze drivers/apps very quickly. Of course, we live in the 
great world of FreeBSD where this isn't a problem for us programmers: we 
could remap our standard COM ports wherever we wanted in I/O space and 
to whatever interrupt we decided to use, and our drivers are built to 
handle it. Or they would be, if we could remap those darned ISA devices 
out into oblivion.

> If what you said were true, on an ISA bus on a PCI machine, each slot
> would be permitted to consume the full gamut of available ISA interrupts
> and generate a single (potentially shared between slots) interrupt per
> ISA slot.

There's nothing preventing an ISA card from sitting on all IRQ lines on 
the ISA bus, or toggling them for that matter. In the case of the Triton 
and Neptune chipsets, the ISA bus is nothing but a bunch of connected 
address/IRQ lines, so there's no differentiation between one ISA slot 
and the next.

> Then the PCI/ISA bridge logic would let me determine which of the PCI
> interrupts was triggered by what mapping, and then ask which ISA
> interrupts were pending service in the ISA mapped slot as a PCI device.
> Finally, I could have as many ISA S3 based boards as I wanted, all of
> them thinking they were at d8000 with port 2e8, and map them to a
> different location in real space using the PCI.

This would be quite cool. What we would ideally like to see is each ISA 
slot treated as a completely separate slot from the next, i.e. not 
electrically connected to it. The logic in the bridge chip would be able 
to convert edge-triggered interrupts into level-triggered ones, and do 
the mapping you mention above.
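
A rough sketch of the demux such a bridge would make possible; every
register offset and accessor here is invented, since no such bridge
exists yet:

/*
 * Hypothetical demux for an ISA-behind-PCI bridge: take one shareable,
 * level-triggered PCI interrupt, ask the bridge which slot/IRQ pairs have
 * latched an edge, and dispatch to whoever claimed that slot.  All names
 * are made up for illustration.
 */
#define BRIDGE_SLOTS         8
#define BRIDGE_SLOT_IRQ_PEND 0x40       /* hypothetical per-slot pending mask */

extern unsigned short bridge_read(int slot, int reg);   /* assumed accessor   */
extern void isa_slot_dispatch(int slot, int irq);        /* assumed dispatcher */

void
bridge_pci_intr(void)
{
        unsigned short pend;
        int slot, irq;

        for (slot = 0; slot < BRIDGE_SLOTS; slot++) {
                pend = bridge_read(slot, BRIDGE_SLOT_IRQ_PEND);
                for (irq = 0; irq < 16; irq++)
                        if (pend & (1 << irq))
                                isa_slot_dispatch(slot, irq);
        }
}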

The Intel INCA chip takes care of a lot of this. It has the ability to 
convert edge-triggered interrupts to level-triggered ones. We had a 
problem with on-board devices screwing up the IRQ lines on the ISA bus 
(the parallel port, to be exact) because its output drive was set to 
totem pole. Once we tri-stated it from within INCA, things were fine.

> > Back when I designed BIOSes for P5 motherboards, we'd initialize ISA 
> > devices first. We'd start by shutting off all on-board capabilities in 
> > case someone plugged in an off-board IDE, Serial, Video, etc... card. 
> > After it did that, we'd take the existing user's setup and set up 
> > on-board devices. Then EISA. Then, from the pool of I/O, memory, and 
> > interrupts, we'd allocate space for PCI devices. PCI Devices always went 
> > to the end of the heap, because, by PCI's definition, it was a 
> > requirement that they not be fixed in BIOS.
> Sounds like you are the guy for the job.  8-).

I'll be glad to help in any way I can.

> I'd put PCI at the end of the heap from a software perspective because
> they are the most relocatable.

I'd do this (a rough C skeleton of the ordering follows the list):

1) Disable all PnP devices
2) Probe for ISA devices
3) Obtain EISA information - report conflicts with ISA devices
4) Initialize EISA devices
5) Initialize PnP devices
6) Initialize PCI devices
7) Boot system ;-)
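
As a skeleton (every routine here is a placeholder; only the ordering
matters):

/* Skeleton of the ordering above; all routines are placeholders. */
extern void pnp_disable_all(void);      /* 1) quiesce PnP cards before probing */
extern void isa_probe(void);            /* 2) record the fixed ISA resources   */
extern void eisa_read_config(void);     /* 3) read slot config, flag conflicts */
extern void eisa_init(void);            /* 4) program the EISA boards          */
extern void pnp_init(void);             /* 5) place PnP cards in the gaps      */
extern void pci_init(void);             /* 6) PCI last: fully relocatable      */

void
bus_bringup(void)
{
        pnp_disable_all();
        isa_probe();
        eisa_read_config();
        eisa_init();
        pnp_init();
        pci_init();
        /* 7) boot the system ;-) */
}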

> I'd prefer to ignore motherboards that weren't totally PnP, either by
> slot by intent, or by PCI bridge by mapping order.  Obviously, that's
> not very realistic.  8-).

No, but we'll probably have ISA around for 5 or so years. Unfortunately.

I guess it all boils down to the Gibraltar-ish nature of each bus (a 
rough allocation sketch follows the list):

1) Non-PnP ISA can't be moved by device drivers - ISA first
2) EISA can sort of be moved by device drivers - EISA second
3) PnP can be moved more readily than EISA - PnP third
4) PCI can be moved freely - PCI last
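
Sketched in C (the resource-map helpers are hypothetical):

/*
 * Allocation by how stuck each device is: reserve what non-PnP ISA already
 * owns, then fit EISA, PnP and finally PCI into whatever I/O ranges remain.
 * iomap_reserve() and iomap_alloc() are invented helpers.
 */
enum bus_kind { BUS_ISA = 0, BUS_EISA, BUS_PNP, BUS_PCI };  /* fixed -> movable */

struct dev_req {
        enum bus_kind kind;
        unsigned short io_size;
        int fixed_io;           /* fixed port base, or -1 if relocatable */
};

extern int iomap_reserve(int base, int size);   /* claim a fixed range, <0 on conflict */
extern int iomap_alloc(int size);               /* carve a range from the free pool    */

int
assign_io(struct dev_req *dev, int ndev)
{
        int pass, i;

        for (pass = BUS_ISA; pass <= BUS_PCI; pass++)
                for (i = 0; i < ndev; i++) {
                        if (dev[i].kind != pass)
                                continue;
                        if (dev[i].fixed_io >= 0) {
                                if (iomap_reserve(dev[i].fixed_io, dev[i].io_size) < 0)
                                        return (-1);    /* fixed vs. fixed conflict */
                        } else if (iomap_alloc(dev[i].io_size) < 0)
                                return (-1);            /* out of I/O space */
                }
        return (0);
}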

I really wish #1-3 would go away. I can remember how much of a pain it 
was trying to implement all of this in BIOS. EISA was by far the worst. 
Let me know if I can help.

-->Neil

-------------------------------------------------------------------------------
Synthcom System's homepage:                         http://www.synthcom.com/
Europa Upgrade, Synth patches (D-50, Xpander/Matrix 12), used gear pricelist
