Date:      Wed, 21 May 2014 15:28:55 -0600
From:      Ian Lepore <ian@FreeBSD.org>
To:        Warner Losh <imp@bsdimp.com>
Cc:        freebsd-arm <freebsd-arm@FreeBSD.org>
Subject:   Re: BBB MMC / SD detection instability with U-Boot 2014.04 (CPU 1GHz)
Message-ID:  <1400707735.1152.194.camel@revolution.hippie.lan>
In-Reply-To: <E85AB625-9954-471D-B9D8-B614F9794487@bsdimp.com>
References:  <537A050E.3040804@hot.ee> <537AB550.2090401@hot.ee> <537AB675.1020006@hot.ee> <024F43EF-E299-413E-AE42-2507AEDD0886@bsdimp.com> <CADH-AwEYOYmxbP8zBWOXutR9GJDBsYP8uo=yu37fT49rJdhYzg@mail.gmail.com> <537ACDB2.9080808@hot.ee> <E85AB625-9954-471D-B9D8-B614F9794487@bsdimp.com>

On Mon, 2014-05-19 at 22:07 -0600, Warner Losh wrote:
> On May 19, 2014, at 9:36 PM, Sulev-Madis Silber (ketas) <madis555@hot.ee> wrote:
>
> > On 2014-05-20 05:39, Winston Smith wrote:
> >> On Mon, May 19, 2014 at 10:28 PM, Warner Losh <imp@bsdimp.com> wrote:
> >>> Wow! That's a lot of added 10ms delays…  Do we have a theory of the crime
> >>> for why they are needed? Usually they suggest to me that we're doing something
> >>> wrong (either not checking the right bits in the bridge, having a fixed retry count
> >>> rather than a timed limit and having some bridges fail more slowly than others
> >>> so the delays are effecting the same thing).
> >>
> >> It's a good start (since the BBB is really flakey at 1GHz), but yes,
> >> more delays aren't good!
> >>
> >> For what it's worth, I'm working in parallel with both FreeBSD and
> >> Debian Wheezy images on the BBB, and it is quite apparent that the BBB
> >> running FreeBSD is *much* slower to boot than the BBB running Debian,
> >> which currently boots to the login prompt in about 15 seconds from
> >> power up.  FreeBSD has a 15-20 second delay just to detect the eMMC,
> >> let alone everything else.
> >>
> >> Comparatively, my x64 FreeBSD VM boots much more quickly than my Ubuntu x64 VM.
> >>
> >> -W.
> >>
> >
> >
> > "really flakey" sounds like "unstable, panics 1000 times a day". I don't
> > see any of that here (as of 11.0-CURRENT r266442).
> >
> > Boot, hmm... yea, 1min (just measured) to fully boot up and connect to
> > server (I'm using ethernet, DHCP, loader boot delay = 3, huge Perl
> > program) might be too slow if you have some embedded system which
> > constantly loses power or something... I haven't tried to do any boot
> > time optimizations yet. Compress kernel? Compress userland? Execute
> > something in parallel on init (NOTE: *DON'T* even think about porting
> > Linux init replacements here)? Use rescue-like static binary? Heavily
> > customize / patch kernel? Use own init? Use rootfs inside kernel?
> > Actually I guess many people might think like me... "HELL, optimizing
> > boot time of 1min?! I have more important tasks to do than this".
>
> Make MMC faster, and a lot of this will go away. When I was doing Atmel,
> I got more mileage out of optimizing the I/O path for slow boots than I did
> for just about anything else.
>

It's not the actual MMC IO that's slow, it's the card detection and init
stuff.  It's the age-old problem... if you have no card in slot 0 you
really don't want to wait for 10 seconds worth of retries with timeouts.
On the other hand, if your favorite data lives on that card, you want
the system to try as hard as possible to get at it, not give up too
early.

I think the real problem is that neither of the mmc/sd drivers we've got
for the TI hardware is very good.  I created the ti_sdhci glue layer
using the original ti_mmchs driver as a guide, so it has all the
original's problems plus any I introduced.

I had a quick glance at the linux driver yesterday, and they're dealing
with a variety of things we don't, such as the MMC "80 clock cycles for
init" stuff that may be quite significant for the problems people are
seeing now.  They also do some tricky-looking things with calculating
the command and data timeouts that are different from anything we do.

I think the driver could use a full-time owner/caretaker, and I don't
have the time to be that person right now, although I could certainly
help someone who had good driver skills and just needed info on its
current state and history, and our general mmc/sd layers.

> Another quick hack: delete all files in /etc/rc.d that aren't used.

That works around the poor performance we have with fork/exec on arm.
On armv4 some of that trouble is unavoidable because of hardware
limitations dictating expensive page-tracking stuff in the pmap code.
On armv6 I think it more reflects the poor state of the pmap code, which
is not because the hardware requires it, but because the code is long
overdue for a complete rewrite that throws away all of its v4 heritage.

-- Ian
