From owner-freebsd-arm@FreeBSD.ORG Mon Feb 24 13:26:19 2014
Date: Mon, 24 Feb 2014 15:26:14 +0200
From: John Hay
To: Ian Lepore
Cc: freebsd-arm@FreeBSD.org
Subject: Re: status of AVILA and CAMBRIA code
Message-ID: <20140224132614.GA31984@zibbi.meraka.csir.co.za>
References: <20140219105934.GA74731@zibbi.meraka.csir.co.za>
	<20140219172938.GH34851@funkthat.com>
	<20140221130530.GA202@zibbi.meraka.csir.co.za>
	<1393001249.1145.98.camel@revolution.hippie.lan>
In-Reply-To: <1393001249.1145.98.camel@revolution.hippie.lan>
User-Agent: Mutt/1.5.21 (2010-09-15)
List-Id: "Porting FreeBSD to ARM processors."

Hi Ian,

On Fri, Feb 21, 2014 at 09:47:29AM -0700, Ian Lepore wrote:
> On Fri, 2014-02-21 at 15:05 +0200, John Hay wrote:
> > On Wed, Feb 19, 2014 at 09:29:38AM -0800, John-Mark Gurney wrote:
> > > John Hay wrote this message on Wed, Feb 19, 2014 at 12:59 +0200:
> > > > What is the status of AVILA and CAMBRIA builds in our tree? Has anybody
> > > > had success recently?
> > > > From the lists I can see that other people have also asked in
> > > > September 2013, but I cannot figure out if there was any success
> > > > at that stage. :-)
> > ...
> > > >
> > > > Somewhere along the line the ethernet npe0 device also broke, writing
> > > > "npe0: npestart_locked: too many fragments 0". But I did not test the
> > > > npe0 device with every build and only realised it this morning, so I
> > > > do not know where it broke. :-)
> > >
> > > Not sure about these issues...
> >
> > Ok, I found the place. It is svn r246713 by kib:
> >
> > ###########
> > Reform the busdma API so that new types may be added without modifying
> > every architecture's busdma_machdep.c. It is done by unifying the
> > bus_dmamap_load_buffer() routines so that they may be called from MI
> > code. The MD busdma is then given a chance to do any final processing
> > in the complete() callback.
> >
> > The cam changes unify the bus_dmamap_load* handling in cam drivers.
> >
> > The arm and mips implementations are updated to track virtual
> > addresses for sync(). Previously this was done in a type specific
> > way. Now it is done in a generic way by recording the list of
> > virtuals in the map.
> >
> > Submitted by:	jeff (sponsored by EMC/Isilon)
> > Reviewed by:	kan (previous version), scottl,
> >		mjacob (isp(4), no objections for target mode changes)
> > Discussed with:	ian (arm changes)
> > Tested by:	marius (sparc64), mips (jmallet), isci(4) on x86 (jharris),
> >		amd64 (Fabian Keil)
> > ###########
> >
> > After that, tx packets will cause this message:
> > npe0: npestart_locked: too many fragments.
> >
> > Then updating to r246881 by ian:
> >
> > #############
> > In _bus_dmamap_addseg(), the return value must be zero for error, or the size
> > actually added to the segment (possibly smaller than the requested size if
> > boundary crossings had to be avoided).
> > #############
> >
> > This makes it a bit better in that some packets seem to go through.
> > It looks like 3 out of 4 will go out and the fourth will cause the
> > same message as above.
> >
> > I have added a printf just above bus_dmamap_load_mbuf_sg() in
> > npestart_locked() to show some of the mbuf values:
> >
> > ############
> > npe0: npestart_locked: m_len 42, data 0xc0d3dcd6, next 0
> > [...]
> > ############
> >
> > Any ideas?
> >
> > John
>
> I can't see the path through the busdma code that leads to that result.
> It looks like there are two ways the mapping can fail; maybe it would
> help to know which path it's taking. In arm/busdma_machdep.c the two
> error paths out of the mapping loop are at lines 1066 and 1077. Could
> you try putting printfs at those locations so we know which case is
> happening? Maybe printing some of the values involved with taking those
> exits would be helpful too.
>
> Maybe also check for map->sync_count being non-zero on entry to
> _bus_dmamap_load_buffer() and print something if it is. The printfs you
> did earlier make it look like there's never actually a second mbuf
> chained off the first (which makes sense for such small packets). I
> think that number should always be zero on entry, because the outer loop
> in kern/subr_bus_dma.c should only ever run once.
Ok, here is the patch I used to add printfs to arm/busdma_machdep.c:

###################
Index: arm/busdma_machdep.c
===================================================================
--- arm/busdma_machdep.c	(revision 246713)
+++ arm/busdma_machdep.c	(working copy)
@@ -1008,6 +1008,8 @@
 	vm_offset_t vaddr = (vm_offset_t)buf;
 	int error = 0;
 
+	if (map->sync_count != 0)
+		printf("_bus_dmamap_load_buffer: map->sync_count %d\n", map->sync_count);
 	if (segs == NULL)
 		segs = dmat->segments;
 	if ((flags & BUS_DMA_LOAD_MBUF) != 0)
@@ -1052,8 +1054,10 @@
 		sl = &map->slist[map->sync_count - 1];
 		if (map->sync_count == 0 ||
 		    vaddr != sl->vaddr + sl->datacount) {
-			if (++map->sync_count > dmat->nsegments)
+			if (++map->sync_count > dmat->nsegments) {
+				printf("_bus_dmamap_load_buffer: map->sync_count %d, dmat->nsegments %d\n", map->sync_count, dmat->nsegments);
 				goto cleanup;
+			}
 			sl++;
 			sl->vaddr = vaddr;
 			sl->datacount = sgsize;
@@ -1064,6 +1068,8 @@
 		sgsize = _bus_dmamap_addseg(dmat, map, curaddr, sgsize,
 		    segs, segp);
 		if (sgsize == 0)
+			printf("_bus_dmamap_load_buffer: sgsize == 0, dmat flags %x\n", dmat->flags);
+		if (sgsize == 0)
 			break;
 		vaddr += sgsize;
 		buflen -= sgsize;
@@ -1075,6 +1081,7 @@
 	 */
 	if (buflen != 0) {
 		_bus_dmamap_unload(dmat, map);
+		printf("_bus_dmamap_load_buffer: buflen %jd\n", (intmax_t)buflen);
 		return (EFBIG); /* XXX better return value here? */
 	}
 	return (0);
###################

The result looks like this. With printfs in _bus_dmamap_load_buffer(), we
also see messages for the receive side.
###################
ping 146.64.5.1
PING 146.64.5.1 (146.64.5.1): 56 data bytes
npe0: npestart_locked: m_len 98, data 0xc0d3ed66, next 0
_bus_dmamap_load_buffer: map->sync_count 1
_bus_dmamap_load_buffer: map->sync_count 2, dmat->nsegments 1
_bus_dmamap_load_buffer: buflen 1536
64 bytes from 146.64.5.1: icmp_seq=0 ttl=64 time=9.089 ms
npe0: npestart_locked: m_len 98, data 0xc0d3ea66, next 0
_bus_dmamap_load_buffer: map->sync_count 1
_bus_dmamap_load_buffer: map->sync_count 1
_bus_dmamap_load_buffer: map->sync_count 2, dmat->nsegments 1
_bus_dmamap_load_buffer: buflen 1536
64 bytes from 146.64.5.1: icmp_seq=1 ttl=64 time=9.262 ms
npe0: npestart_locked: m_len 98, data 0xc0d3e766, next 0
_bus_dmamap_load_buffer: map->sync_count 2
_bus_dmamap_load_buffer: map->sync_count 1
_bus_dmamap_load_buffer: map->sync_count 2, dmat->nsegments 1
_bus_dmamap_load_buffer: buflen 1536
64 bytes from 146.64.5.1: icmp_seq=2 ttl=64 time=9.291 ms
npe0: npestart_locked: m_len 98, data 0xc0d3e466, next 0
_bus_dmamap_load_buffer: map->sync_count 3
_bus_dmamap_load_buffer: map->sync_count 4, dmat->nsegments 3
_bus_dmamap_load_buffer: buflen 98
npe0: npestart_locked: too many fragments 0
npe0: npestart_locked: m_len 98, data 0xc0d3e366, next 0
_bus_dmamap_load_buffer: map->sync_count 1
_bus_dmamap_load_buffer: map->sync_count 2, dmat->nsegments 1
_bus_dmamap_load_buffer: buflen 1536
64 bytes from 146.64.5.1: icmp_seq=4 ttl=64 time=5.494 ms
npe0: npestart_locked: m_len 98, data 0xc0d3e066, next 0
_bus_dmamap_load_buffer: map->sync_count 1
_bus_dmamap_load_buffer: map->sync_count 1
_bus_dmamap_load_buffer: map->sync_count 2, dmat->nsegments 1
_bus_dmamap_load_buffer: buflen 1536
64 bytes from 146.64.5.1: icmp_seq=5 ttl=64 time=9.294 ms
npe0: npestart_locked: m_len 98, data 0xc0d3dc66, next 0
_bus_dmamap_load_buffer: map->sync_count 2
_bus_dmamap_load_buffer: map->sync_count 1
_bus_dmamap_load_buffer: map->sync_count 2, dmat->nsegments 1
_bus_dmamap_load_buffer: buflen 1536
64 bytes from 146.64.5.1: icmp_seq=6 ttl=64 time=9.323 ms
npe0: npestart_locked: m_len 98, data 0xc0d3d966, next 0
_bus_dmamap_load_buffer: map->sync_count 3
_bus_dmamap_load_buffer: map->sync_count 4, dmat->nsegments 3
_bus_dmamap_load_buffer: buflen 98
npe0: npestart_locked: too many fragments 0
_bus_dmamap_load_buffer: map->sync_count 1
_bus_dmamap_load_buffer: map->sync_count 2, dmat->nsegments 1
_bus_dmamap_load_buffer: buflen 1536
npe0: npestart_locked: m_len 98, data 0xc0d3d866, next 0
_bus_dmamap_load_buffer: map->sync_count 1
_bus_dmamap_load_buffer: map->sync_count 2, dmat->nsegments 1
_bus_dmamap_load_buffer: buflen 1536
64 bytes from 146.64.5.1: icmp_seq=8 ttl=64 time=5.509 ms
npe0: npestart_locked: m_len 98, data 0xc0d3d566, next 0
_bus_dmamap_load_buffer: map->sync_count 1
_bus_dmamap_load_buffer: map->sync_count 1
_bus_dmamap_load_buffer: map->sync_count 2, dmat->nsegments 1
_bus_dmamap_load_buffer: buflen 1536
64 bytes from 146.64.5.1: icmp_seq=9 ttl=64 time=9.316 ms
npe0: npestart_locked: m_len 98, data 0xc0d3d266, next 0
_bus_dmamap_load_buffer: map->sync_count 2
_bus_dmamap_load_buffer: map->sync_count 1
_bus_dmamap_load_buffer: map->sync_count 2, dmat->nsegments 1
_bus_dmamap_load_buffer: buflen 1536
64 bytes from 146.64.5.1: icmp_seq=10 ttl=64 time=9.266 ms
npe0: npestart_locked: m_len 98, data 0xc0d3d166, next 0
_bus_dmamap_load_buffer: map->sync_count 3
_bus_dmamap_load_buffer: map->sync_count 4, dmat->nsegments 3
_bus_dmamap_load_buffer: buflen 98
npe0: npestart_locked: too many fragments 0
npe0: npestart_locked: m_len 98, data 0xc0d3d066, next 0
_bus_dmamap_load_buffer: map->sync_count 1
_bus_dmamap_load_buffer: map->sync_count 2, dmat->nsegments 1
_bus_dmamap_load_buffer: buflen 1536
64 bytes from 146.64.5.1: icmp_seq=12 ttl=64 time=5.469 ms
npe0: npestart_locked: m_len 98, data 0xc0d3d366, next 0
_bus_dmamap_load_buffer: map->sync_count 1
_bus_dmamap_load_buffer: map->sync_count 1
_bus_dmamap_load_buffer: map->sync_count 2, dmat->nsegments 1
_bus_dmamap_load_buffer: buflen 1536
64 bytes from 146.64.5.1: icmp_seq=13 ttl=64 time=9.276 ms
npe0: npestart_locked: m_len 98, data 0xc0d3d666, next 0
_bus_dmamap_load_buffer: map->sync_count 2
_bus_dmamap_load_buffer: map->sync_count 1
_bus_dmamap_load_buffer: map->sync_count 2, dmat->nsegments 1
_bus_dmamap_load_buffer: buflen 1536
64 bytes from 146.64.5.1: icmp_seq=14 ttl=64 time=9.302 ms
npe0: npestart_locked: m_len 98, data 0xc0d3db66, next 0
_bus_dmamap_load_buffer: map->sync_count 3
_bus_dmamap_load_buffer: map->sync_count 4, dmat->nsegments 3
_bus_dmamap_load_buffer: buflen 98
npe0: npestart_locked: too many fragments 0
npe0: npestart_locked: m_len 98, data 0xc0d3da66, next 0
_bus_dmamap_load_buffer: map->sync_count 1
_bus_dmamap_load_buffer: map->sync_count 2, dmat->nsegments 1
_bus_dmamap_load_buffer: buflen 1536
64 bytes from 146.64.5.1: icmp_seq=16 ttl=64 time=5.484 ms
npe0: npestart_locked: m_len 98, data 0xc0d3dd66, next 0
_bus_dmamap_load_buffer: map->sync_count 1
_bus_dmamap_load_buffer: map->sync_count 1
_bus_dmamap_load_buffer: map->sync_count 2, dmat->nsegments 1
_bus_dmamap_load_buffer: buflen 1536
64 bytes from 146.64.5.1: icmp_seq=17 ttl=64 time=9.306 ms
###################

Regards

John
-- 
John Hay -- jhay@meraka.csir.co.za / jhay@meraka.org.za