Date:      Thu, 6 Dec 2007 19:14:29 -0800
From:      "Len Gross" <sandiegobiker@gmail.com>
To:        "freebsd-net@freebsd.org" <freebsd-net@freebsd.org>
Subject:   TDMA / Interrupts / Pre-emptible
Message-ID:  <27cb3ada0712061914g4aff5a7eq7d5cc64ba3d493ed@mail.gmail.com>

I have built a userland prototype of a custom network protocol for an RF
network.  It is based on Netgraph and uses Ethernet rather than real RF.

Eventually, all the code will go into a special piece of hardware, but the
first hardware will really look like an Ethernet card that puts messages out
N microseconds after they are put into its memory.  Since the protocol
employs some Time Division Multiple Access (TDMA), "precise" feeding of the
board is important.

In "userland" I seem to have about 1 ms of "delay"/variability from when I
schedule a timer and when it wakes up a thread.  I think this is pretty much
expected behavior and is fine for algorithm testing.
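
In case it is useful, the measurement is essentially the loop below (a
minimal sketch, not my actual test code): request a 1 ms sleep and print
how far past the deadline the thread actually wakes up.

#include <stdio.h>
#include <time.h>

int
main(void)
{
	struct timespec req = { 0, 1000000 };	/* request a 1 ms sleep */
	struct timespec t0, t1;
	long actual, i;

	for (i = 0; i < 100; i++) {
		clock_gettime(CLOCK_MONOTONIC, &t0);
		nanosleep(&req, NULL);
		clock_gettime(CLOCK_MONOTONIC, &t1);
		/* Elapsed nanoseconds, then overshoot past the 1 ms ask. */
		actual = (t1.tv_sec - t0.tv_sec) * 1000000000L +
		    (t1.tv_nsec - t0.tv_nsec);
		printf("overshoot: %ld us\n", (actual - 1000000L) / 1000);
	}
	return (0);
}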

When I move my userland code to driver/kernel-land and set a timer to send
a packet to some hardware, how much delay/variability will I see in that
timer?  I think the question is more or less equivalent to the
pre-emptibility of driver code and interrupts in general.
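
Concretely, by "set a timer" I mean the callout(9) pattern, roughly the
sketch below (the xxx_ names are made up).  My understanding is that
callout resolution is one clock tick (1/hz) and that the handler runs from
softclock, so presumably I inherit that granularity plus whatever
softclock/interrupt latency the rest of the kernel imposes.

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kernel.h>
#include <sys/callout.h>

static struct callout xxx_callout;

static void
xxx_tx_slot(void *arg)
{
	/* Feed the next TDMA frame to the board here. */

	/* Re-arm for the next slot, one tick from now. */
	callout_reset(&xxx_callout, 1, xxx_tx_slot, arg);
}

static void
xxx_start(void *softc)
{
	callout_init(&xxx_callout, CALLOUT_MPSAFE);
	callout_reset(&xxx_callout, 1, xxx_tx_slot, softc);
}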

(If this should go to another forum, please advise.)

Thanks in advance.

--Len


