Date:      Sat, 05 Oct 2002 15:27:53 +0100
From:      Antony T Curtis <antony.t.curtis@ntlworld.com>
To:        Terry Lambert <tlambert2@mindspring.com>
Cc:        Nate Lawson <nate@root.org>, David Francheski <davidf@caymas.com>, freebsd-arch@FreeBSD.ORG, freebsd-smp@FreeBSD.org
Subject:   Re: Running independent kernel instances on dual-Xeon/E7500 system
Message-ID:  <3D9EF6E9.9040700@ntlworld.com>
References:  <Pine.BSF.4.21.0210041721250.96201-100000@root.org> <3D9EB0A4.4CD09E20@mindspring.com>


I'm interested in pursuing the idea of creating some form of 
partitioning within one machine: kind of like wrapping up as many 
global variables as possible and sharing the memory between them.

Something like netgraph could be used to give each 'partition' its 
own network interface and to allow communication between them. 
Admittedly, I'm no expert on operating systems, but I have been 
studying the FreeBSD sources to see whether I can do some crude 
implementation, partly to satisfy my own curiosity.

Terry Lambert wrote:
> Nate Lawson wrote:
> 
>>On Fri, 4 Oct 2002, David Francheski wrote:
>>
>>>I have a dual-Xeon processor (with E7500 chipset) motherboard.
>>>Can anybody tell me what the development effort would be to
>>>boot and run two independent copies of the FreeBSD kernel,
>>>one on each Xeon processor?   By this I mean that an SMP
>>>enabled kernel would not be utilized, each kernel would be UP.
>>>
>>>Regards,
>>>David L. Francheski
>>
>>Not possible without another BIOS, PCI bus, and separate memory --
>>i.e. another PC.
> 
> 
> IPL'ing is not the same as "running".  So long as you crafted the
> memory image of the second OS and its page tables, etc., using the
> first processor, there should be no problem running a second copy
> of an OS on an AP, as a result of a START IPI from the BP, after
> the code is crafted.  Thus there is no need for a separate BIOS.
> 
> For running, there are two types of devices one cares about:
> devices which can be duplicated, and therefore assigned as separate
> resources, and devices which cannot.  For PCI devices, this breaks
> down to an interrupt routing issue.  There are four PCI interrupt
> pins: A, B, C, and D.  So long as no device allocated to one
> processor shares an interrupt line with a device allocated to the
> other, there is no problem.  Thus you do not need a separate PCI
> bus.
> 
> Note: for devices which cannot be shared, but which are required,
> there are two approaches: the device may be virtualized, with
> access to it contended between the processors, or the device may
> be virtualized in one instance and accessed by proxy from the other
> processor (e.g. via IPI triggers for IPC).  VMWare operates this
> way for a number of its own devices, which cannot be physical
> devices, since they must be shared with the host OS rather than
> assigned directly to the VMWare "machine" or to the host OS (both
> are available options for many devices).
> 
> The memory can be separated logically, rather than physically.  In
> fact, one could use either PAE exclusively in 4K page mode, or
> PSE-36 exclusively in 4M page mode, without significant changes to
> the VM system, to permit motherboards that can handle it to support
> up to 4G of physical RAM per CPU, for up to 16 CPUs (the practical
> limit on this, due to motherboard availability, is 4).  Thus there
> is no need for physically separate memory.  The 4K mode would
> require an additional layer of indirection (Peter Wemm may actually
> have completed some or all of the code necessary for PAE use
> already), and the 4M (PSE-36) mode would require hacking the system
> to be able to use 4M pages rather than 4K (mostly, this affects the
> paging paths themselves).  You would likely get 2M pages (PAE large
> pages are 2M instead of 4M in size) for use in PAE out of this for
> free, if you went to a "power of two multiple of 4K" size parameter
> for paging operations.
> 
> --
> 
> I've personally considered pursuing the ability to run code
> separately, though within the same 4G address space, partitioned so
> as to permit running a debugger against a "crashed" FreeBSD "system"
> running on an AP, doing the debugging from the BP, as a hosted
> system.  The cost in labor would be 2-3 months of continuous work,
> I think... that is the estimate I arrived at when I considered the
> project previously.  Doing this certainly beats the cost of buying
> an ICE to get similar capability.
> 
> 
> It would be interesting to see what other people have to say on this,
> other than "can't be done" (not to pick on you in particular, here;
> this is the knee-jerk reaction many people have to things like this).
> 
> -- Terry
> 
> To Unsubscribe: send mail to majordomo@FreeBSD.org
> with "unsubscribe freebsd-arch" in the body of the message
> 



-- 
Antony T Curtis BSc     Unix Analyst Programmer
http://homepage.ntlworld.com/antony.t.curtis/





