Date:      Thu, 09 Feb 2006 08:01:40 -0800
From:      Sam Leffler <sam@errno.com>
To:        Geir Egeland <geir.egeland@gmail.com>
Cc:        freebsd-net@freebsd.org, freebsd-questions@freebsd.org
Subject:   Re: IEEE 802.11 Wireless Multimedia Extension (WME) and raw sockets
Message-ID:  <43EB6764.7040900@errno.com>
In-Reply-To: <43EB3640.8090906@gmail.com>
References:  <43EB3640.8090906@gmail.com>

Geir Egeland wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
> 
> Hi,
> I've been playing around with WME to test various aspects of network
> performance, and have come across a problem that I can't quite understand.
> I have an application that generates traffic with various TOS values
> (BACKGROUND, BEST EFFORT, VOICE, VIDEO). It uses raw sockets to transmit
> the IP packets. This all works well if ip->ip_len is less than 192
> bytes. If ip->ip_len is larger than 192, the call to ieee80211_classify()
> (/usr/src/sys/net80211/ieee80211_output.c) will classify the packet as
> "BEST EFFORT" no matter what value my application sets in the TOS field.
> 
> Debugging ieee80211_classify(), I see that both ip->ip_tos and ip->ip_len
> are set to zero when I send a packet with ip->ip_len larger than 192
> bytes.
> Sniffing the network, I can see my packets have the correct TOS and
> length, but they don't get the correct WME classification.
> 
> 
> - -------------ieee80211_output.c(ieee80211_classify)------------
>         if (eh->ether_type == htons(ETHERTYPE_IP)) {
>                 const struct ip *ip = (struct ip *)
>                         (mtod(m, u_int8_t *) + sizeof (*eh));
>                 /*
>                  * IP frame, map the TOS field.
>                  */
> //added by myself
> 	printf("IP_TOS: %d, IP_LEN: %d\n", ip->ip_tos, ntohs(ip->ip_len));
> //end
>                 switch (ip->ip_tos) {
>                 case 0x08:
>                 case 0x20:
>                         d_wme_ac = WME_AC_BK;   /* background */
>                         break;
>                 case 0x28:
>                 case 0xa0:
>                         d_wme_ac = WME_AC_VI;   /* video */
>                         break;
>                 case 0x30:                      /* voice */
>                 case 0xe0:
>                 case 0x88:                      /* XXX UPSD */
>                 case 0xb8:
>                         d_wme_ac = WME_AC_VO;
>                         break;
>                 default:
>                         d_wme_ac = WME_AC_BE;
>                         break;
>                 }
> 
> - -----------------------------------------------------
> 
> When I use a SOCK_DGRAM socket instead of a raw one, everything works fine.
> 
> I use FreeBSD 6.0-STABLE and my wireless NIC uses an Atheros chipset.
> 
> Has anyone got an idea what is going on?

I'll check, but the raw socket path must not be leaving the IP header in 
the expected spot in the mbuf.  Most of my testing has been done with a 
modified version of netperf that slaps a TOS on the socket based on a 
command-line argument, so it exercises only UDP and TCP (not raw) traffic.

Ideally the 802.11 layer should not be doing classification; packets 
should be tagged and the 802.11 layer then does the mapping according to 
the standard.  Groveling around inside packets to extract stuff like 
this is evil.

	Sam


