Date:      Tue, 26 Mar 1996 13:54:20 +1100
From:      Bruce Evans <bde@zeta.org.au>
To:        jkh@time.cdrom.com, lehey.pad@sni.de
Cc:        bde@freebsd.org, hackers@freebsd.org, pst@Shockwave.COM
Subject:   Re: kgdb / remote gdb of the kernel?
Message-ID:  <199603260254.NAA15825@godzilla.zeta.org.au>

>> and frankly I'm not even sure how it would work while
>> single-stepping a kernel - getting a packet off the wire and
>> ...
>...
>One obvious approach would be to leave interrupts enabled but change
>all the vectors on entering the debugger, such that most interrupts
>get ignored, and the ethernet interrupt goes to a special,
>stripped-down driver.  Note also that the remote debug interface is in
>the kernel, and not a user process.

Interrupts should be disabled in the debugger.  Currently they are only
disabled if they were already disabled when the debugger was entered.
The debugger must be prepared to handle that case without using
interrupts, so it may as well handle all cases without using interrupts.
Interrupts shouldn't be enabled anyway, because they might interfere
with debugging.  Currently they are handled sloppily by not doing
anything special.  This has the benefit of keeping the clocks running
while you're debugging, unless you're debugging a clock interrupt
handler or anything else running at a high cpl.  The debugger should be
prepared to restore the clocks in that case, so it may as well restore
the clocks in all cases and run with clock interrupts disabled...
Restoring the clocks is quite complicated.  See apm for how to do it
wrong.

Ethernet is harder to support than serial because the hardware interface
is more complicated and you would have to write special drivers for about
20 cards to get reasonable coverage.

Bruce


