Date:      Fri, 28 Jun 1996 09:49:24 -0700
From:      bmah@cs.berkeley.edu (Bruce A. Mah)
To:        "Ron G. Minnich" <rminnich@sarnoff.com>
Cc:        hackers@freebsd.org
Subject:   Re: Frame relay and ATM support: virtual interface per vpi? 
Message-ID:  <199606281649.JAA28578@premise.CS.Berkeley.EDU>
In-Reply-To: Your message of "Fri, 28 Jun 1996 11:45:14 EDT." <Pine.SUN.3.91.960628100148.16297B-100000@terra> 

"Ron G. Minnich" writes:

> Here's the picture the way things are now, roughly: Using a file
> descriptor, clients talk to a stream service provided by TCP.  Kernel
> uses fd to get info, then passes it to the socket layer. Socket layer
> goes via protosw to tcp.  TCP does its thing, using stored per-stream
> info, drops to ip_output.  ip_output finds the interface via route if
> needed, passes mbufs to the interface.

[snip]

One comment here.  It's my understanding that ip_output() can take a 
route pointer as an argument, so that a transport layer protocol (e.g. 
TCP) can pass down a cached route to be used.  So if TCP were VC-aware, 
with the VIF-per-VC model, it could just pass down a (struct route *) 
for the correct VIF.  A violation of layering, I know, but then so is a 
lot of this IP-over-ATM business.
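
For the record, the relevant bits look roughly like this in 4.4BSD-ish 
code (from memory, so don't hold me to the details); the point is just 
that TCP already hands its per-connection cached route down to 
ip_output(), so a VC-aware TCP could make that route name the right VIF:

    /* ip_output() takes a route supplied by the caller: */
    int ip_output(struct mbuf *m, struct mbuf *opt, struct route *ro,
                  int flags, struct ip_moptions *imo);

    /* tcp_output() already passes the per-connection cached route: */
    error = ip_output(m, tp->t_inpcb->inp_options,
                      &tp->t_inpcb->inp_route,  /* cached route; with VIF-per-VC
                                                   this could point at the VC's
                                                   virtual interface */
                      so->so_options & SO_DONTROUTE, 0);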

> Ok, what about ATM? well, there's a number of choices. I'll enumerate 
> the ones we have for MINI, which is basically one more than we have 
> for everything else.
> 1) Use tcp. Same as above. Kind of weird, because TCP is supporting
>    a multiple reliable byte stream model for applications, and using 
>    a multiple unreliable byte stream  model from the interface. 
>    Doesn't sound
>    as weird as it starts to look when you start to jam the driver in. 

Well I guess the main reason it's so weird is that between TCP and ATM 
(both connection-oriented) you have this connectionless beastie called 
IP.  So you end up with some replication of work (well, my XUNET driver 
did, anyways).

> 2) VC access by some form of 'open("/dev/atm", ); and other magic.
>    application does writes and reads to FDs, which go to the interface. 
>    Making this work is "interesting" on some interfaces. Making it not 
>    impact TCP has got to somehow be more interesting. 
> 3) Direct access to a virtual interface. No OS involvement. Application
>    diddles CSRs directly. Application might even run TCP internally. 

> Now, if (1) were the only thing you want to do, you could probably hide 
> the VC as part of the routing table (Anyone? am i nuts?). 

If you are, I am.  See first comment.  :-)
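
To make the "stash it in the routing table" idea a little more 
concrete: ARP already hangs per-destination state off rt_llinfo in the 
rtentry, so an ATM driver could conceivably do the same with a small VC 
descriptor. Something like this (all the ATM names here are invented):

    /* hypothetical per-route VC state, hung off rt->rt_llinfo the
     * same way ARP hangs struct llinfo_arp there */
    struct llinfo_atm {
        u_short la_vpi;         /* VPI/VCI this route maps to */
        u_short la_vci;
        int     la_flags;       /* e.g. "VC open", "call in progress" */
    };

    /* then, roughly, in the driver's output path: */
    struct llinfo_atm *la = (struct llinfo_atm *)rt->rt_llinfo;
    if (la == NULL || (la->la_flags & LA_VC_OPEN) == 0) {
            /* kick signalling to open the VC, queue the mbuf meanwhile */
    }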

> If you want to 
> do (2) as well, you could always use something like DLPI (AGGGGGHHHHH! 
> NO! NO! NO!). But (3) sort of breaks the rules. Applications can use 
> that interface any way they want. They can run TCP half the time, or 
> not. They may send MPEG. Either way, the OS has no way of knowing what's 
> happening, other than being able to see the raw amount of data being 
> sent. Which, by implication, means that the kernel TCP also has much less
> knowledge of what's really happening on the interface. 

But what kind of knowledge does the kernel TCP have to have, other than 
to make sure that it doesn't try to grab VCs that are already in use?  
So I guess you have to have some kind of arbitration for the virtual 
interfaces.
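
By "arbitration" I just mean some kind of claim/release table that both 
the kernel and direct-access applications have to go through before 
touching a VC. A trivial sketch, purely hypothetical names:

    /* hypothetical VC ownership table: kernel TCP and user-level
     * direct-access clients both claim a VC before using it */
    #define MAXVC   1024

    static struct {
            pid_t   owner;      /* 0 = free, -1 = kernel, else owning pid */
    } vc_tab[MAXVC];

    int
    vc_claim(int vci, pid_t who)
    {
            if (vci < 0 || vci >= MAXVC || vc_tab[vci].owner != 0)
                    return (EBUSY);
            vc_tab[vci].owner = who;
            return (0);
    }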

> So the choices for ATM VC management: 
> 1) as added info for routes (wouldn't this cover the TCP case?). 
>    fails for mini in both indirect and direct VC access modes.
> 2) DLPI-like mode. But then you need programs or 'sysctl' to traverse the 
>    kernel structures. DLPI-like interfaces make me ill.
> 3) interface-per-active VC. This was my first cut on IRIX. After dennis'
>    comments I'm rethinking it.
> 4) /dev/ node per active VC. or some sort of devfs entry. This would
>    be nice in many ways. 
> 5) /proc-like setup. Sort of like the plan9 'net' file system. This may 
>    be the way to go. I like it more and more, compared to the alternative.
>    Good paper on this in the plan9 archives.

6)  New address/protocol family.  You open up a stream socket for your 
ATM VC.  This is basically what XUNET did, using a user-level library 
for communication with a signalling process.  Is this too close to #2 
above?
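
From the application's point of view it would look roughly like this 
(the address family and sockaddr here are made up for illustration; 
this is not literally what the XUNET library did):

    /* hypothetical AF_ATM stream socket, one per VC */
    struct sockaddr_atm sa;
    int s = socket(AF_ATM, SOCK_STREAM, 0);

    bzero(&sa, sizeof(sa));
    sa.satm_family = AF_ATM;
    sa.satm_vpi = 0;
    sa.satm_vci = 42;           /* or let signalling pick one on connect() */
    connect(s, (struct sockaddr *)&sa, sizeof(sa));

    write(s, buf, len);         /* data goes straight onto that VC */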

7)  Bury all of the ATM VC selection, multiplexing, etc. in the device 
driver.  The application (as well as everything at IP and above) is 
totally oblivious to virtual circuits.  Note that if you want to build 
a *router*, which needs to manage VCs for flows that it is not the 
endpoint for, this is the only way to go.
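
Concretely (again, names invented, just a sketch), the driver's 
if_output() routine would do the VC lookup itself, keyed on whatever 
destination it gets handed, so nothing above it has to change:

    /* hypothetical: VC selection buried in the driver's if_output() */
    int
    atm_output(struct ifnet *ifp, struct mbuf *m, struct sockaddr *dst,
               struct rtentry *rt)
    {
            struct atm_vc *vc;

            /* map the next hop to a VC; open one via signalling if needed */
            vc = atm_vc_lookup(ifp, dst);
            if (vc == NULL)
                    vc = atm_vc_open(ifp, dst); /* may queue m until the call
                                                   setup completes */

            return (atm_vc_enqueue(vc, m));     /* hand the mbuf chain to that VC */
    }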

Please tell me if I'm confusing the issues here...I haven't had my 
morning dose of caffeine yet.  :-)

Bruce.
