Date:      Sat, 24 Apr 2004 18:11:40 +0200
From:      Andre Oppermann <andre@freebsd.org>
To:        David Burns <david.burns@dugeem.net>
Cc:        net@freebsd.org
Subject:   Re: fast ethernet driver MII phy serial clock rates
Message-ID:  <408A91BC.46A0D95F@freebsd.org>
References:  <408A160F.4090703@dugeem.net>

David Burns wrote:
> 
> Hello all,
> 
> It appears that quite a few of the "el cheapo" hardware Fast Ethernet
> drivers (at least rl, sis, ste, vr, wb - these are just the ones I found
> in /usr/src/sys/pci) have added DELAY(1) statements around MII serial
> clock ops which will result in a max Management Data Clock (MDC)
> frequency of 500kHz for the serial management interface. This means
> that a mii_readreg (or writereg) operation will take a minimum of 128µs
> (64µs for mii_sync + 64µs for the data read/write), during which time
> the driver is locked.
> 
> NB this assumes that a DELAY(1) is really a delay of 1µs, which I don't
> think it is ... :-(
> 
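> To make that concrete, the bit-bang loop in these drivers looks roughly
> like this (a simplified sketch along the lines of the ste code; the
> register and macro names vary from driver to driver):
> 
>   #define MII_SET(x) \
>           CSR_WRITE_1(sc, STE_PHYCTL, CSR_READ_1(sc, STE_PHYCTL) | (x))
>   #define MII_CLR(x) \
>           CSR_WRITE_1(sc, STE_PHYCTL, CSR_READ_1(sc, STE_PHYCTL) & ~(x))
> 
>   /* mii_sync: clock 32 idle cycles so the PHY is in a known state. */
>   for (i = 0; i < 32; i++) {
>           MII_SET(STE_PHYCTL_MCLK);       /* MDC high */
>           DELAY(1);                       /* nominally 1µs */
>           MII_CLR(STE_PHYCTL_MCLK);       /* MDC low */
>           DELAY(1);                       /* nominally 1µs */
>   }
> 
> At a nominal 2µs per cycle that is 64µs for the 32 sync cycles alone,
> and another 64µs for the 32 bit times of the read/write frame itself -
> hence the 128µs minimum above.
> 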
> However, many Fast Ethernet (i.e. 100Mb/s) PHYs appear to specify a
> maximum MDC rate of 2.5MHz.
> 
> Whilst at first this appears harmless, the mii_readreg & mii_writereg
> routines are called by the MII bus functions every second:
> - With autoneg on there are around 7 mii register ops (7 x 128µs ~ 0.9ms total)
> - With autoneg off there are around 3 mii register ops (3 x 128µs ~ 0.4ms total)
> 
> The serial management access bits are set/cleared via various macros
> (eg. CLRBIT/SETBIT). Generally a clock bit operation consists of a
> CSR_READ & CSR_WRITE, which are of course PCI read & write operations
> with minimum times of 4 cycles and 3 cycles respectively - that is,
> 7 PCI clocks or about 210 nanoseconds per half cycle (@33MHz), giving
> an MDC of roughly 2.4MHz, already a bit slower than 2.5MHz! Of course
> this assumes PCI at 33MHz - which is all this hardware will work with.
> 
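> Spelled out, with the DELAY()s removed the PCI bus itself still paces
> the clock (same sketch macros as above; the cycle counts are the PCI
> minimums quoted above):
> 
>   /*
>    * One MDC half cycle = one read-modify-write of the CSR:
>    *   CSR_READ_1   >= 4 PCI clocks
>    *   CSR_WRITE_1  >= 3 PCI clocks
>    *   7 clocks x 30ns (@33MHz) = 210ns per half cycle
>    *   => MDC <= 1 / (2 x 210ns) ~= 2.4MHz, inside the 2.5MHz PHY limit
>    */
>   MII_SET(STE_PHYCTL_MCLK);
> 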
> So I'd like to propose that these DELAY() statements be removed if
> testing results are okay. I believe this has already been done with the
> xl driver some time ago...
> 
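> For the sync loop above, the change is simply dropping the two DELAY(1)
> calls per cycle, e.g. (sketch):
> 
>   for (i = 0; i < 32; i++) {
>           MII_SET(STE_PHYCTL_MCLK);       /* now paced by the PCI bus */
>           MII_CLR(STE_PHYCTL_MCLK);
>   }
> 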
> For verification I made this change to the ste v1.58 driver and it
> worked fine, resulting in 5-10% network performance improvements.
> Next up I will test the vr driver.
> 
> If need be I can open a PR for this, but I wanted some feedback first
> from others who may have previously worked on the driver MII code.

This is a very interesting observation.  I've just worked my way through
the MII code to add link state notification to the routing socket and had
to replace a couple of return(0) statements, taken when the link is up,
with breaks so that the later status function can read the MII and
announce the state change if necessary.  Based on your explanation this
seems to be a regression, and I will look at how to work around it.

Do you have any idea how to make the MII access faster, or how to get
some sort of async notification from the hardware when the link state
changes, so we don't have to poll every second?

-- 
Andre


