Date:      Tue, 15 Dec 1998 08:26:39 +0000 (GMT)
From:      Terry Lambert <tlambert@primenet.com>
To:        jmb@FreeBSD.ORG (Jonathan M. Bresler)
Cc:        johan@granlund.nu, julian@whistle.com, phk@FreeBSD.ORG, lars@akerlings.t.se, current@FreeBSD.ORG, isdn@FreeBSD.ORG
Subject:   Re: if_sppp is BROKEN!!!
Message-ID:  <199812150826.BAA16811@usr06.primenet.com>
In-Reply-To: <199812142307.PAA27390@hub.freebsd.org> from "Jonathan M. Bresler" at Dec 14, 98 03:07:41 pm

>   the idea of streams is wonderful, the realization is costly.  each
>   layer added (or module pushed) slows down processing and hurts
>   throughput.  ritchie developed streams for serial, if i remember
>   correctly.  streams was then applied to networks.  there is an RFC
>   about layering being bad for networking and the relative performance
>   of NIT vs BPF prove the case.

The main drawback of STREAMS is that, even under ideal conditions,
once you have two or more layers interacting (that is, once there is
at least one stack element other than "null" between the "top" and
the "bottom"), you have to take a context switch to propagate the
data in at least one direction, if not both.
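
To put that concretely: this is roughly what the classic put/service
split looks like in an SVR4 module.  The module itself is made up,
but canput/putq/getq/putbq/putnext are the standard DDI calls, and
the qinit/streamtab glue is left out.  The whole problem is the
putq() branch; once a message is queued, it only moves again when
the service routine gets run in some other context.

/*
 * Sketch only: the write side of a do-nothing SVR4 module.
 */
#include <sys/types.h>
#include <sys/stream.h>

static int
xnull_wput(queue_t *q, mblk_t *mp)
{
	if (canput(q->q_next)) {
		putnext(q, mp);		/* fast path: call straight through */
		return (0);
	}
	putq(q, mp);			/* defer to the service routine; it
					 * runs later, in some other context,
					 * and that is where the latency is */
	return (0);
}

static int
xnull_wsrv(queue_t *q)
{
	mblk_t *mp;

	while ((mp = getq(q)) != NULL) {
		if (!canput(q->q_next)) {
			putbq(q, mp);	/* downstream still full; requeue
					 * and wait to be back-enabled */
			break;
		}
		putnext(q, mp);
	}
	return (0);
}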

Some hacks (at Novell) ran the push up to the top of the stack at
interrupt level to try to avoid this, but of course that failed
rather spectacularly once you added things like MUX modules on top
of SPX on top of IPX.

In general, the fact that getmsg/putmsg had to run in a process
context, and that that context was borrowed from whatever process
happened to be going into or coming out of a system call, along
with the other switch points in the kernel, contributed to the
general idea that STREAMS was a pig.
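
For reference, the user-level half is just this (plain SVR4 calls,
nothing FreeBSD-specific; the function here is made up for
illustration).  Each of these is a system call, so nothing crosses
the stream head unless some process context is there to carry it:

#include <stropts.h>
#include <string.h>

int
stream_echo(int fd)
{
	static char ping[] = "ping";
	char inbuf[512];
	struct strbuf out, in;
	int flags = 0;

	out.buf = ping;
	out.len = (int)strlen(ping);	/* M_DATA payload to send down */

	in.buf = inbuf;
	in.maxlen = (int)sizeof(inbuf);

	if (putmsg(fd, NULL, &out, 0) < 0)	/* data only, no M_PROTO */
		return (-1);
	if (getmsg(fd, NULL, &in, &flags) < 0)
		return (-1);
	return (in.len);			/* bytes that came back up */
}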

When UnixWare went from a monolithic driver implementation (like
what's in FreeBSD now) to NetWare drivers running under UNIX via a
shim layer, an additional 35% of overall latency was introduced
into a three-module stack.

I think that netgraph resolves some, but not all, of these issues.

Ideally, you would want the data to propagate the full stack, up
or down, as the result of a single operation.
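
Something with this shape, in other words.  This is not netgraph's
actual interface, just a picture of what "one operation propagates
the whole stack" means: each layer hands the data to the next with
a direct call, in the caller's context, with no queueing in between.

#include <stddef.h>

struct mbuf;				/* opaque for this sketch */

struct layer {
	struct layer	 *l_down;	/* next layer toward the wire */
	int		(*l_output)(struct layer *, struct mbuf *);
};

static int
layer_output(struct layer *l, struct mbuf *m)
{
	/* ... this layer's work on 'm' (headers, framing, etc.) ... */
	if (l->l_down != NULL)
		return ((*l->l_down->l_output)(l->l_down, m));
	return (0);			/* bottom of the stack: hand to driver */
}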


A long time ago (mid 1994), I did a full SVR4-style priority-banded
STREAMS implementation for FreeBSD as part of an internal
"skunkworks" project to port NWU (NetWare for UNIX) to FreeBSD and
Linux.  I resolved a number of these issues internally by creating
a high-priority kernel process to push things up and down the
stack; sort of a soft interrupt handler, if you will.
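
The rough shape of that worker was something like the following
(the names here are invented, and the spl level is only
illustrative): it sleeps until a queue gets enabled, then runs the
pending service procedures, so data keeps moving without waiting
for a user process to wander into the kernel.

#include <sys/param.h>
#include <sys/systm.h>

struct squeue {
	struct squeue	 *sq_next;		/* run queue linkage */
	void		(*sq_srv)(struct squeue *);	/* service proc */
};

static struct squeue *sq_runq;			/* queues with work pending */

static void
streams_daemon(void)
{
	struct squeue *sq;
	int s;

	for (;;) {
		s = splnet();
		while ((sq = sq_runq) != NULL) {
			sq_runq = sq->sq_next;
			splx(s);
			(*sq->sq_srv)(sq);	/* push data one hop up/down */
			s = splnet();
		}
		/* nothing runnable; qenable() does a wakeup() on &sq_runq */
		tsleep((caddr_t)&sq_runq, PZERO, "strsrv", 0);
		splx(s);
	}
}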

A worker process can go a long way toward resolving the STREAMS
latency issues; it also didn't hurt that FreeBSD's monolithic
network drivers were faster than the ones in UnixWare.  8-).  I bet
that if these same issues were measured in netgraph, a similar
tactic would be sufficient to resolve the vast majority of cases
(one exception being if someone declared a network task "real time"
and no one bothered to implement priority lending to stave off
inversion; but that's pilot error).


					Terry Lambert
					terry@lambert.org
---
Any opinions in this posting are my own and not those of my present
or previous employers.



