Date:      Thu, 13 Aug 2009 15:14:03 +0000 (UTC)
From:      Lawrence Stewart <lstewart@FreeBSD.org>
To:        src-committers@freebsd.org, svn-src-projects@freebsd.org
Subject:   svn commit: r196191 - in projects/tcp_ffcaia2008_8.x/sys: modules modules/siftr netinet
Message-ID:  <200908131514.n7DFE3wb004942@svn.freebsd.org>

Author: lstewart
Date: Thu Aug 13 15:14:02 2009
New Revision: 196191
URL: http://svn.freebsd.org/changeset/base/196191

Log:
  Initial wholesale import of the Statistical Information For TCP Research (SIFTR)
  v1.2.2 kernel module.
  
  SIFTR facilitates TCP related research, development and debugging by providing
  near real-time access to highly detailed kernel information from TCP endpoints.
  The tool can be used to gather data unobtrusively on running systems, making it
  a useful addition to the toolkits of system administrators, developers, and
  researchers alike.
  
  SIFTR was first released in 2007 by James Healy and Lawrence Stewart whilst
  working on the NewTCP research project at Swinburne University's Centre for
  Advanced Internet Architectures, Melbourne, Australia, which was made possible
  in part by a grant from the Cisco University Research Program Fund at Community
  Foundation Silicon Valley. More details are available at:
      http://caia.swin.edu.au/urp/newtcp/
  
  Work on SIFTR v1.2.x was sponsored by the FreeBSD Foundation as part of the
  "Enhancing the FreeBSD TCP Implementation" project 2008-2009. More details are
  available at:
      http://www.freebsdfoundation.org/
      http://caia.swin.edu.au/freebsd/etcp09/
  
  Sponsored by:	FreeBSD Foundation, Cisco Systems
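The module logs one CSV record per sampled packet (the exact field layout is given by the sprintf() calls in siftr.c below). A userland consumer might parse the leading fields of an IPv4 record along these lines; this is an illustrative sketch with made-up sample values, and the struct/field names are not from the module itself:

```c
#include <stdio.h>

/* Leading fields of a SIFTR v1.2.2 IPv4 log record; the CSV layout
 * follows the sprintf() call in siftr.c. Names here are illustrative. */
struct siftr_rec {
	char dir;                 /* 'i' = inbound, 'o' = outbound */
	unsigned hash;            /* hash of the triggering packet */
	long sec, usec;           /* timestamp taken in the pfil hook */
	unsigned laddr[4], lport; /* local IP / TCP port */
	unsigned faddr[4], fport; /* foreign IP / TCP port */
};

/* Returns the number of fields parsed (14 on success for this subset;
 * the remaining TCP state fields trail after the foreign port). */
static int
parse_rec(const char *line, struct siftr_rec *r)
{
	return sscanf(line, "%c,0x%x,%ld.%ld,%u.%u.%u.%u,%u,%u.%u.%u.%u,%u",
	    &r->dir, &r->hash, &r->sec, &r->usec,
	    &r->laddr[0], &r->laddr[1], &r->laddr[2], &r->laddr[3], &r->lport,
	    &r->faddr[0], &r->faddr[1], &r->faddr[2], &r->faddr[3], &r->fport);
}
```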

Added:
  projects/tcp_ffcaia2008_8.x/sys/modules/siftr/
  projects/tcp_ffcaia2008_8.x/sys/modules/siftr/Makefile
  projects/tcp_ffcaia2008_8.x/sys/netinet/siftr.c
Modified:
  projects/tcp_ffcaia2008_8.x/sys/modules/Makefile

Modified: projects/tcp_ffcaia2008_8.x/sys/modules/Makefile
==============================================================================
--- projects/tcp_ffcaia2008_8.x/sys/modules/Makefile	Thu Aug 13 15:08:05 2009	(r196190)
+++ projects/tcp_ffcaia2008_8.x/sys/modules/Makefile	Thu Aug 13 15:14:02 2009	(r196191)
@@ -243,6 +243,7 @@ SUBDIR=	${_3dfx} \
 	sdhci \
 	sem \
 	sf \
+	siftr \
 	sis \
 	sk \
 	${_smbfs} \

Added: projects/tcp_ffcaia2008_8.x/sys/modules/siftr/Makefile
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ projects/tcp_ffcaia2008_8.x/sys/modules/siftr/Makefile	Thu Aug 13 15:14:02 2009	(r196191)
@@ -0,0 +1,12 @@
+# $FreeBSD$
+
+.include <bsd.own.mk>
+
+.PATH:  ${.CURDIR}/../../netinet
+KMOD=	siftr
+SRCS=	siftr.c
+
+# Uncomment to add IPv6 support
+#CFLAGS+=-DSIFTR_IPV6
+
+.include <bsd.kmod.mk>

Added: projects/tcp_ffcaia2008_8.x/sys/netinet/siftr.c
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ projects/tcp_ffcaia2008_8.x/sys/netinet/siftr.c	Thu Aug 13 15:14:02 2009	(r196191)
@@ -0,0 +1,1786 @@
+/*-
+ * Copyright (c) 2007-2009, Centre for Advanced Internet Architectures
+ * Swinburne University of Technology, Melbourne, Australia
+ * (CRICOS number 00111D).
+ *
+ * All rights reserved.
+ *
+ * SIFTR was first released in 2007 by James Healy and Lawrence Stewart whilst
+ * working on the NewTCP research project at Swinburne University's Centre for
+ * Advanced Internet Architectures, Melbourne, Australia, which was made
+ * possible in part by a grant from the Cisco University Research Program Fund
+ * at Community Foundation Silicon Valley. More details are available at:
+ *   http://caia.swin.edu.au/urp/newtcp/
+ *
+ * Work on SIFTR v1.2.x was sponsored by the FreeBSD Foundation as part of
+ * the "Enhancing the FreeBSD TCP Implementation" project 2008-2009.
+ * More details are available at:
+ *   http://www.freebsdfoundation.org/
+ *   http://caia.swin.edu.au/freebsd/etcp09/
+ *
+ * Lawrence Stewart is currently the sole maintainer, and all contact regarding
+ * SIFTR should be directed to him via email: lastewart@swin.edu.au
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ * 3. The names of the authors, the "Centre for Advanced Internet
+ *    Architectures" and "Swinburne University of Technology" may not be used
+ *    to endorse or promote products derived from this software without
+ *    specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHORS AND CONTRIBUTORS ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED.  IN NO EVENT SHALL THE AUTHORS OR CONTRIBUTORS BE LIABLE
+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ */
+
+/******************************************************
+ * Statistical Information For TCP Research (SIFTR)
+ *
+ * A FreeBSD kernel module that adds very basic instrumentation to the
+ * TCP stack, allowing internal stats to be recorded to a log file
+ * during experimentation.
+ *
+ * Initial release date: June 2007
+ * Most recent update: July 2009
+ ******************************************************/
+
+
+#include <sys/param.h>
+#include <sys/errno.h>
+#include <sys/kernel.h>
+#include <sys/kthread.h>
+#include <sys/lock.h>
+#include <sys/mutex.h>
+#include <sys/module.h>
+#include <sys/unistd.h>
+#include <sys/sysctl.h>
+#include <sys/mbuf.h>
+#include <sys/socket.h>
+#include <sys/socketvar.h>
+#include <sys/sbuf.h>
+#include <sys/alq.h>
+#include <sys/proc.h>
+#if (__FreeBSD_version >= 800044)
+#include <sys/vimage.h>
+#endif
+
+#include <net/if.h>
+#include <net/pfil.h>
+
+#include <netinet/in.h>
+#include <netinet/in_systm.h>
+#include <netinet/ip.h>
+#include <netinet/tcp_var.h>
+#include <netinet/in_pcb.h>
+#include <netinet/in_var.h>
+
+#if (__FreeBSD_version >= 800044)
+#include <netinet/vinet.h>
+#endif
+
+#ifdef SIFTR_IPV6
+#include <netinet/ip6.h>
+#include <netinet6/in6_pcb.h>
+
+#if (__FreeBSD_version >= 800044)
+#include <netinet6/vinet6.h>
+#endif
+
+#endif /* SIFTR_IPV6 */
+
+#include <machine/in_cksum.h>
+
+#include "siftr_hash.h"
+
+
+#define MODVERSION  "1.2.2"
+#define HOOK 0
+#define UNHOOK 1
+
+#define SIFTR_EXPECTED_MAX_TCP_FLOWS 65536
+
+
+#define SYS_NAME "FreeBSD"
+#define PACKET_TAG_SIFTR 100
+#define PACKET_COOKIE_SIFTR 21749576
+#define SIFTR_LOG_FILE_MODE 0644
+
+/*
+ * log messages are less than MAX_LOG_MSG_LEN chars long, so divide
+ * SIFTR_ALQ_BUFLEN by MAX_LOG_MSG_LEN to get the approximate upper bound
+ * number of log messages that can be held in the ALQ buffer
+ */
+#define MAX_LOG_MSG_LEN 200
+#define SIFTR_ALQ_BUFLEN 200000
+
+
+#define SIFTR_DISABLE 0
+#define SIFTR_ENABLE 1
+
+
+/*
+ * 1 byte for IP version
+ * IPv4: src/dst IP (4+4) + src/dst port (2+2) = 12 bytes
+ * IPv6: src/dst IP (16+16) + src/dst port (2+2) = 36 bytes
+ */
+#ifdef SIFTR_IPV6
+#define FLOW_KEY_LEN 37
+#else
+#define FLOW_KEY_LEN 13
+#endif
+
+#ifdef SIFTR_IPV6
+#define SIFTR_IPMODE 6
+#else
+#define SIFTR_IPMODE 4
+#endif
+
+/* useful macros */
+#define CAST_PTR_INT(X) (*((int*)(X)))
+
+#define TOTAL_TCP_PKTS \
+( \
+	siftr_num_inbound_tcp_pkts + \
+	siftr_num_outbound_tcp_pkts \
+)
+
+#define TOTAL_SKIPPED_TCP_PKTS \
+( \
+	siftr_num_inbound_skipped_pkts_malloc + \
+	siftr_num_inbound_skipped_pkts_mtx + \
+	siftr_num_outbound_skipped_pkts_malloc + \
+	siftr_num_outbound_skipped_pkts_mtx + \
+	siftr_num_inbound_skipped_pkts_tcb + \
+	siftr_num_outbound_skipped_pkts_tcb + \
+	siftr_num_inbound_skipped_pkts_icb + \
+	siftr_num_outbound_skipped_pkts_icb \
+)
+
+#define UPPER_SHORT(X)	(((X) & 0xFFFF0000) >> 16)
+#define LOWER_SHORT(X)	((X) & 0x0000FFFF)
+
+#define FIRST_OCTET(X)	(((X) & 0xFF000000) >> 24)
+#define SECOND_OCTET(X)	(((X) & 0x00FF0000) >> 16)
+#define THIRD_OCTET(X)	(((X) & 0x0000FF00) >> 8)
+#define FOURTH_OCTET(X)	((X) & 0x000000FF)
+
+MALLOC_DECLARE(M_SIFTR);
+MALLOC_DEFINE(M_SIFTR, "siftr", "dynamic memory used by SIFTR");
+
+MALLOC_DECLARE(M_SIFTR_PKTNODE);
+MALLOC_DEFINE(M_SIFTR_PKTNODE, "siftr_pktnode", "SIFTR pkt_node struct");
+
+MALLOC_DECLARE(M_SIFTR_HASHNODE);
+MALLOC_DEFINE(M_SIFTR_HASHNODE, "siftr_hashnode", "SIFTR flow_hash_node struct");
+
+/* Struct that will make up links in the pkt manager queue */
+struct pkt_node {
+	/* timestamp of pkt as noted in the pfil hook */
+	struct timeval		tval;
+	/* direction pkt is travelling; either PFIL_IN or PFIL_OUT */
+	uint8_t			direction;
+	/* IP version pkt_node relates to; either INP_IPV4 or INP_IPV6 */
+	uint8_t			ipver;
+	/* hash of the pkt which triggered the log message */
+	uint32_t		hash;
+	/* local/foreign IP address */
+#ifdef SIFTR_IPV6
+	uint32_t		ip_laddr[4];
+	uint32_t		ip_faddr[4];
+#else
+	uint8_t			ip_laddr[4];
+	uint8_t			ip_faddr[4];
+#endif
+	/* local TCP port */
+	uint16_t		tcp_localport;
+	/* foreign TCP port */
+	uint16_t		tcp_foreignport;
+	/* Congestion Window (bytes) */
+	u_long			snd_cwnd;
+	/* Sending Window (bytes) */
+	u_long			snd_wnd;
+	/* Receive Window (bytes) */
+	u_long			rcv_wnd;
+	/* Bandwidth Controlled Window (bytes) */
+	u_long			snd_bwnd;
+	/* Slow Start Threshold (bytes) */
+	u_long			snd_ssthresh;
+	/* Current state of the TCP FSM */
+	int			conn_state;
+	/* Max Segment Size (bytes) */
+	u_int			max_seg_size;
+	/*
+	 * Smoothed RTT stored as found in the TCP control block
+	 * in units of (TCP_RTT_SCALE*hz)
+	 */
+	int			smoothed_rtt;
+	/* Is SACK enabled? */
+	u_char			sack_enabled;
+	/* Window scaling for snd window */
+	u_char			snd_scale;
+	/* Window scaling for recv window */
+	u_char			rcv_scale;
+	/* TCP control block flags */
+	u_int			flags;
+	/* Retransmit timeout length */
+	int			rxt_length;
+	/* Size of the TCP send buffer in bytes */
+	u_int			snd_buf_hiwater;
+	/* Current num bytes in the send socket buffer */
+	u_int			snd_buf_cc;
+	/* Size of the TCP receive buffer in bytes */
+	u_int			rcv_buf_hiwater;
+	/* Current num bytes in the receive socket buffer */
+	u_int			rcv_buf_cc;
+	/* Number of bytes inflight that we are waiting on ACKs for */
+	u_int			sent_inflight_bytes;
+	/* Link to next pkt_node in the list */
+	STAILQ_ENTRY(pkt_node)	nodes;
+};
+
+/* Struct that will be stored in the TCP flow hash table */
+struct flow_hash_node
+{
+  uint16_t counter;
+  uint8_t key[FLOW_KEY_LEN];
+  LIST_ENTRY(flow_hash_node) nodes;
+};
+
+/* various runtime stats variables */
+static volatile uint32_t siftr_num_inbound_skipped_pkts_malloc = 0;
+static volatile uint32_t siftr_num_inbound_skipped_pkts_mtx = 0;
+static volatile uint32_t siftr_num_outbound_skipped_pkts_malloc = 0;
+static volatile uint32_t siftr_num_outbound_skipped_pkts_mtx = 0;
+static volatile uint32_t siftr_num_inbound_skipped_pkts_icb = 0;
+static volatile uint32_t siftr_num_outbound_skipped_pkts_icb = 0;
+static volatile uint32_t siftr_num_inbound_skipped_pkts_tcb = 0;
+static volatile uint32_t siftr_num_outbound_skipped_pkts_tcb = 0;
+static volatile uint32_t siftr_num_inbound_skipped_pkts_dejavu = 0;
+static volatile uint32_t siftr_num_outbound_skipped_pkts_dejavu = 0;
+static volatile uint32_t siftr_num_inbound_tcp_pkts = 0;
+static volatile uint32_t siftr_num_outbound_tcp_pkts = 0;
+
+static volatile uint32_t siftr_exit_pkt_manager_thread = 0;
+static uint8_t siftr_enabled = 0;
+static uint32_t siftr_pkts_per_log = 1;
+static char siftr_logfile[PATH_MAX] = "/var/log/siftr.log\0";
+
+/*
+ * Controls whether we generate a hash for each packet that triggers
+ * a SIFTR log message. Should eventually be made accessible via sysctl.
+ */
+static uint8_t siftr_generate_hashes = 1;
+
+/*
+ * pfil.h defines PFIL_IN as 1 and PFIL_OUT as 2,
+ * which we use as an index into this array.
+ */
+static char direction[3] = {'\0', 'i','o'};
+
+
+static char *log_writer_msg_buf;
+STAILQ_HEAD(pkthead, pkt_node) pkt_queue = STAILQ_HEAD_INITIALIZER(pkt_queue);
+
+
+static u_long siftr_hashmask;
+LIST_HEAD(listhead, flow_hash_node) *counter_hash;
+
+static int wait_for_pkt;
+
+static struct alq *siftr_alq = NULL;
+static struct mtx siftr_pkt_queue_mtx;
+
+static struct mtx siftr_pkt_mgr_mtx;
+
+
+
+static struct thread *siftr_pkt_manager_thr = NULL;
+#if (__FreeBSD_version < 800000)
+static struct proc *siftr_pkt_manager_proc = NULL;
+#endif
+
+#if (__FreeBSD_version >= 800044)
+#define _siftrtcbinfo &V_tcbinfo
+#else
+#define _siftrtcbinfo &tcbinfo
+#endif
+
+
+
+static void
+siftr_process_pkt(struct pkt_node * pkt_node)
+{
+	char siftr_log_msg[MAX_LOG_MSG_LEN];
+	uint8_t found_match = 0;
+	uint8_t key[FLOW_KEY_LEN];
+	uint8_t key_offset = 1;
+	struct flow_hash_node *hash_node = NULL;
+	struct listhead *counter_list = NULL;
+	
+	/*
+	 * Create the key that will be used to create a hash index
+	 * into our hash table.
+	 * Our key consists of ipversion,localip,localport,foreignip,foreignport
+	 */
+	key[0] = pkt_node->ipver;
+	memcpy(	key + key_offset,
+		(void *)(&(pkt_node->ip_laddr)),
+		sizeof(pkt_node->ip_laddr)
+	);
+	key_offset += sizeof(pkt_node->ip_laddr);
+	memcpy(	key + key_offset,
+		(void *)(&(pkt_node->tcp_localport)),
+		sizeof(pkt_node->tcp_localport)
+	);
+	key_offset += sizeof(pkt_node->tcp_localport);
+	memcpy(	key + key_offset,
+		(void *)(&(pkt_node->ip_faddr)),
+		sizeof(pkt_node->ip_faddr)
+	);
+	key_offset += sizeof(pkt_node->ip_faddr);
+	memcpy(	key + key_offset,
+		(void *)(&(pkt_node->tcp_foreignport)),
+		sizeof(pkt_node->tcp_foreignport)
+	);
+	
+	counter_list = (counter_hash + 
+			(hash32_buf(key, sizeof(key), 0) & siftr_hashmask));
+	
+	/*
+	 * If the list is not empty i.e. the hash index has
+	 * been used by another flow previously.
+	 */
+	if(LIST_FIRST(counter_list) != NULL) {
+		/*
+		 * Loop through the hash nodes in the list.
+		 * There should normally only be 1 hash node in the list,
+		 * except if there have been collisions at the hash index
+		 * computed by hash32_buf()
+		 */
+		LIST_FOREACH(hash_node, counter_list, nodes) {
+			/*
+			 * Check if the key for the pkt we are currently
+			 * processing is the same as the key stored in the
+			 * hash node we are currently processing.
+			 * If they are the same, then we've found the
+			 * hash node that stores the counter for the flow
+			 * the pkt belongs to
+			 */
+			if (memcmp(hash_node->key, key, sizeof(key)) == 0) {
+				found_match = 1;
+				break;
+			}
+		}
+	}
+
+	/* If this flow hash hasn't been seen before or we have a collision */
+	if (hash_node == NULL || !found_match) {
+		/* Create a new hash node to store the flow's counter */
+		hash_node = malloc(	sizeof(struct flow_hash_node),
+					M_SIFTR_HASHNODE,
+					M_WAITOK
+		);
+
+		if (hash_node != NULL) {
+			/* Initialise our new hash node list entry */
+			hash_node->counter = 0;
+			memcpy(hash_node->key, key, sizeof(key));
+			LIST_INSERT_HEAD(counter_list, hash_node, nodes);
+		}
+		else {
+			/* malloc failed */
+			if (pkt_node->direction == PFIL_IN)
+				siftr_num_inbound_skipped_pkts_malloc++;
+			else
+				siftr_num_outbound_skipped_pkts_malloc++;
+
+			return;
+		}
+	}
+	else if (siftr_pkts_per_log > 1) {
+		/*
+		 * Taking the remainder of the counter divided
+		 * by the current value of siftr_pkts_per_log
+		 * and storing that in counter provides a neat
+		 * way to modulate the frequency of log
+		 * messages being written to the log file
+		 */
+		hash_node->counter = (hash_node->counter + 1) %
+						siftr_pkts_per_log;
+
+		/*
+		 * If we have not seen enough packets since the last time
+		 * we wrote a log message for this connection, return
+		 */
+		if (hash_node->counter > 0)
+			return;
+	}
+
+#ifdef SIFTR_IPV6
+	pkt_node->ip_laddr[3] = ntohl(pkt_node->ip_laddr[3]);
+	pkt_node->ip_faddr[3] = ntohl(pkt_node->ip_faddr[3]);
+
+	if (pkt_node->ipver == INP_IPV6) { /* IPv6 packet */
+		pkt_node->ip_laddr[0] = ntohl(pkt_node->ip_laddr[0]);
+		pkt_node->ip_laddr[1] = ntohl(pkt_node->ip_laddr[1]);
+		pkt_node->ip_laddr[2] = ntohl(pkt_node->ip_laddr[2]);
+		pkt_node->ip_faddr[0] = ntohl(pkt_node->ip_faddr[0]);
+		pkt_node->ip_faddr[1] = ntohl(pkt_node->ip_faddr[1]);
+		pkt_node->ip_faddr[2] = ntohl(pkt_node->ip_faddr[2]);
+
+		/* Construct an IPv6 log message. */
+		sprintf(siftr_log_msg,
+#if (__FreeBSD_version >= 700000)
+			"%c,0x%08x,%zd.%06ld,%x:%x:%x:%x:%x:%x:%x:%x,%u,%x:%x:%x:%x:%x:%x:%x:%x,%u,%ld,%ld,%ld,%ld,%ld,%u,%u,%u,%u,%u,%u,%u,%d,%u,%u,%u,%u,%u\n",
+#else
+			"%c,0x%08x,%ld.%06ld,%x:%x:%x:%x:%x:%x:%x:%x,%u,%x:%x:%x:%x:%x:%x:%x:%x,%u,%ld,%ld,%ld,%ld,%ld,%u,%u,%u,%u,%u,%u,%u,%d,%u,%u,%u,%u,%u\n",
+#endif
+			direction[pkt_node->direction],
+			pkt_node->hash,
+			pkt_node->tval.tv_sec,
+			pkt_node->tval.tv_usec,
+			UPPER_SHORT(pkt_node->ip_laddr[0]),
+			LOWER_SHORT(pkt_node->ip_laddr[0]),
+			UPPER_SHORT(pkt_node->ip_laddr[1]),
+			LOWER_SHORT(pkt_node->ip_laddr[1]),
+			UPPER_SHORT(pkt_node->ip_laddr[2]),
+			LOWER_SHORT(pkt_node->ip_laddr[2]),
+			UPPER_SHORT(pkt_node->ip_laddr[3]),
+			LOWER_SHORT(pkt_node->ip_laddr[3]),
+			ntohs(pkt_node->tcp_localport),
+			UPPER_SHORT(pkt_node->ip_faddr[0]),
+			LOWER_SHORT(pkt_node->ip_faddr[0]),
+			UPPER_SHORT(pkt_node->ip_faddr[1]),
+			LOWER_SHORT(pkt_node->ip_faddr[1]),
+			UPPER_SHORT(pkt_node->ip_faddr[2]),
+			LOWER_SHORT(pkt_node->ip_faddr[2]),
+			UPPER_SHORT(pkt_node->ip_faddr[3]),
+			LOWER_SHORT(pkt_node->ip_faddr[3]),
+			ntohs(pkt_node->tcp_foreignport),
+			pkt_node->snd_ssthresh,
+			pkt_node->snd_cwnd,
+			pkt_node->snd_bwnd,
+			pkt_node->snd_wnd,
+			pkt_node->rcv_wnd,
+			pkt_node->snd_scale,
+			pkt_node->rcv_scale,
+			pkt_node->conn_state,
+			pkt_node->max_seg_size,
+			pkt_node->smoothed_rtt,
+			pkt_node->sack_enabled,
+			pkt_node->flags,
+			pkt_node->rxt_length,
+			pkt_node->snd_buf_hiwater,
+			pkt_node->snd_buf_cc,
+			pkt_node->rcv_buf_hiwater,
+			pkt_node->rcv_buf_cc,
+			pkt_node->sent_inflight_bytes
+		);
+	} else { /* IPv4 packet */
+		pkt_node->ip_laddr[0] = FIRST_OCTET(pkt_node->ip_laddr[3]);
+		pkt_node->ip_laddr[1] = SECOND_OCTET(pkt_node->ip_laddr[3]);
+		pkt_node->ip_laddr[2] = THIRD_OCTET(pkt_node->ip_laddr[3]);
+		pkt_node->ip_laddr[3] = FOURTH_OCTET(pkt_node->ip_laddr[3]);
+		pkt_node->ip_faddr[0] = FIRST_OCTET(pkt_node->ip_faddr[3]);
+		pkt_node->ip_faddr[1] = SECOND_OCTET(pkt_node->ip_faddr[3]);
+		pkt_node->ip_faddr[2] = THIRD_OCTET(pkt_node->ip_faddr[3]);
+		pkt_node->ip_faddr[3] = FOURTH_OCTET(pkt_node->ip_faddr[3]);
+#endif /* SIFTR_IPV6 */
+
+		/* Construct an IPv4 log message. */
+		sprintf(siftr_log_msg,
+#if (__FreeBSD_version >= 700000)
+			"%c,0x%08x,%zd.%06ld,%u.%u.%u.%u,%u,%u.%u.%u.%u,%u,%ld,%ld,%ld,%ld,%ld,%u,%u,%u,%u,%u,%u,%u,%d,%u,%u,%u,%u,%u\n",
+#else
+			"%c,0x%08x,%ld.%06ld,%u.%u.%u.%u,%u,%u.%u.%u.%u,%u,%ld,%ld,%ld,%ld,%ld,%u,%u,%u,%u,%u,%u,%u,%d,%u,%u,%u,%u,%u\n",
+#endif
+			direction[pkt_node->direction],
+			pkt_node->hash,
+			pkt_node->tval.tv_sec,
+			pkt_node->tval.tv_usec,
+			pkt_node->ip_laddr[0],
+			pkt_node->ip_laddr[1],
+			pkt_node->ip_laddr[2],
+			pkt_node->ip_laddr[3],
+			ntohs(pkt_node->tcp_localport),
+			pkt_node->ip_faddr[0],
+			pkt_node->ip_faddr[1],
+			pkt_node->ip_faddr[2],
+			pkt_node->ip_faddr[3],
+			ntohs(pkt_node->tcp_foreignport),
+			pkt_node->snd_ssthresh,
+			pkt_node->snd_cwnd,
+			pkt_node->snd_bwnd,
+			pkt_node->snd_wnd,
+			pkt_node->rcv_wnd,
+			pkt_node->snd_scale,
+			pkt_node->rcv_scale,
+			pkt_node->conn_state,
+			pkt_node->max_seg_size,
+			pkt_node->smoothed_rtt,
+			pkt_node->sack_enabled,
+			pkt_node->flags,
+			pkt_node->rxt_length,
+			pkt_node->snd_buf_hiwater,
+			pkt_node->snd_buf_cc,
+			pkt_node->rcv_buf_hiwater,
+			pkt_node->rcv_buf_cc,
+			pkt_node->sent_inflight_bytes
+		);
+#ifdef SIFTR_IPV6
+	}
+#endif
+
+	/*
+	 * XXX: This could possibly be made more efficient by padding
+	 * the log message to always be a fixed number of characters...
+	 * We wouldn't need the call to strlen if we did this
+	 */
+	/* XXX: Should we use alq_getn/alq_post here to avoid the bcopy? */
+	alq_writen(siftr_alq, siftr_log_msg, strlen(siftr_log_msg), ALQ_WAITOK);
+}
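The counter arithmetic in siftr_process_pkt() above can be modelled in isolation: the first packet of a flow is always logged, and thereafter the modulo wrap triggers a log message every siftr_pkts_per_log packets. A minimal single-flow sketch of that behaviour (illustrative, not code from the module):

```c
#include <stdint.h>

/* Count how many of npkts packets on one flow produce a log message,
 * mirroring the hash_node->counter arithmetic in siftr_process_pkt(). */
static int
packets_logged(int npkts, uint16_t pkts_per_log)
{
	uint16_t counter = 0;	/* a fresh flow_hash_node starts at 0 */
	int logged = 0;
	int i;

	for (i = 0; i < npkts; i++) {
		if (i == 0) {
			logged++;	/* new flow: always log first pkt */
			continue;
		}
		counter = (counter + 1) % pkts_per_log;
		if (counter > 0)	/* not enough pkts since last log */
			continue;
		logged++;
	}
	return logged;
}
```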
+
+
+
+
+
+
+
+static void
+siftr_pkt_manager_thread(void *arg)
+{
+	struct pkt_node *pkt_node, *pkt_node_temp;
+	STAILQ_HEAD(pkthead, pkt_node) tmp_pkt_queue = STAILQ_HEAD_INITIALIZER(tmp_pkt_queue);
+	uint8_t draining = 2;
+
+	mtx_lock(&siftr_pkt_mgr_mtx);
+
+	/* draining == 0 when queue has been flushed and it's safe to exit */
+	while (draining) {
+		/*
+		 * Sleep until we are signalled to wake because thread has
+		 * been told to exit or until 1 tick has passed
+		 */
+		msleep(&wait_for_pkt, &siftr_pkt_mgr_mtx, PWAIT, "pktwait", 1);
+
+		/* Gain exclusive access to the pkt_node queue */
+		mtx_lock(&siftr_pkt_queue_mtx);
+
+		/*
+		 * Move pkt_queue to tmp_pkt_queue, which leaves
+		 * pkt_queue empty and ready to receive more pkt_nodes
+		 */
+		STAILQ_CONCAT(&tmp_pkt_queue, &pkt_queue);
+
+		/*
+		 * We've finished making changes to the list. Unlock it
+		 * so the pfil hooks can continue queuing pkt_nodes
+		 */
+		mtx_unlock(&siftr_pkt_queue_mtx);
+
+		/*
+		 * We can't hold a mutex whilst calling siftr_process_pkt
+		 * because ALQ might sleep waiting for buffer space.
+		 */
+		mtx_unlock(&siftr_pkt_mgr_mtx);
+
+		/* Flush all pkt_nodes to the log file */
+		STAILQ_FOREACH_SAFE(pkt_node,
+				&tmp_pkt_queue,
+				nodes,
+				pkt_node_temp) {
+			siftr_process_pkt(pkt_node);
+			STAILQ_REMOVE_HEAD(&tmp_pkt_queue, nodes);
+			free(pkt_node, M_SIFTR_PKTNODE);
+		}
+
+		KASSERT(STAILQ_EMPTY(&tmp_pkt_queue),
+			("SIFTR tmp_pkt_queue not empty after flush")
+		);
+
+		mtx_lock(&siftr_pkt_mgr_mtx);
+
+		/*
+		 * If siftr_exit_pkt_manager_thread gets set during the window
+		 * where we are draining the tmp_pkt_queue above, there might
+		 * still be pkts in pkt_queue that need to be drained.
+		 * Allow one further iteration to occur after
+		 * siftr_exit_pkt_manager_thread has been set to ensure
+		 * pkt_queue is completely empty before we kill the thread.
+		 *
+		 * siftr_exit_pkt_manager_thread is set only after the pfil
+		 * hooks have been removed, so only 1 extra iteration
+		 * is needed to drain the queue.
+		 */
+		if (siftr_exit_pkt_manager_thread)
+			draining--;
+	}
+
+	mtx_unlock(&siftr_pkt_mgr_mtx);
+
+#if (__FreeBSD_version >= 800000)
+	/* calls wakeup on this thread's struct thread ptr */
+	kthread_exit();
+#else
+#if (__FreeBSD_version < 700000)
+	/* no wakeup given in 6.x so have to do it ourself */
+	wakeup(siftr_pkt_manager_proc);
+#endif
+	/* calls wakeup on this thread's struct proc ptr on 7.x */
+	kthread_exit(0);
+#endif
+}
+
+static uint32_t
+hash_pkt(struct mbuf *m, uint32_t offset)
+{
+	register uint32_t hash = 0;
+
+	while ((m != NULL) && (offset > m->m_len)) {
+		/*
+		 * The IP packet payload does not start in this mbuf, so
+		 * need to figure out which mbuf it starts in and what offset
+		 * into the mbuf's data region the payload starts at.
+		 */
+		offset -= m->m_len;
+		m = m->m_next;
+	}
+
+	while (m != NULL) {
+		/* Ensure there is data in the mbuf */
+		if ((m->m_len - offset) > 0) {
+			hash = hash32_buf(	m->m_data + offset,
+						m->m_len - offset,
+						hash
+			);
+                }
+
+		m = m->m_next;
+		offset = 0;
+        }
+
+	return hash;
+}
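The two-phase walk in hash_pkt() — skip whole mbufs until the offset lands inside one, then consume from there to the end of the chain — can be shown with a plain chained buffer. A sketch only: `struct seg` is a stand-in for an mbuf, and byte summing stands in for hash32_buf():

```c
#include <stddef.h>
#include <stdint.h>

/* Stand-in for an mbuf: one segment of a chained buffer. */
struct seg {
	const uint8_t *data;
	size_t len;
	struct seg *next;
};

/* Sum payload bytes starting "offset" bytes into the chain. */
static unsigned
sum_from(const struct seg *s, size_t offset)
{
	unsigned sum = 0;
	size_t i;

	/* Phase 1: find the segment the payload starts in. */
	while (s != NULL && offset > s->len) {
		offset -= s->len;
		s = s->next;
	}

	/* Phase 2: accumulate; offset applies only to the first segment. */
	for (; s != NULL; s = s->next, offset = 0) {
		for (i = offset; i < s->len; i++)
			sum += s->data[i];
	}
	return sum;
}
```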
+
+/*
+ * pfil hook that is called for each IPv4 packet making its way through the
+ * stack in either direction.
+ * The pfil subsystem holds a non-sleepable mutex somewhere when
+ * calling our hook function, so we can't sleep at all.
+ * It's very important to use the M_NOWAIT flag with all function calls
+ * that support it so that they won't sleep; otherwise you get a panic.
+ */
+static int
+siftr_chkpkt(	void *arg,
+		struct mbuf **m,
+		struct ifnet *ifp,
+		int dir,
+		struct inpcb *inp
+)
+{
+	register struct pkt_node *pkt_node = NULL;
+	register struct ip *ip = NULL;
+	register struct tcphdr *th = NULL;
+	register struct tcpcb *tp = NULL;
+	register unsigned int ip_hl = 0;
+	register uint8_t inp_locally_locked = 0;
+
+	/*
+	 * I don't think we need m_pullup here because both
+	 * ip_input and ip_output seem to do the heavy lifting
+	 */
+	/* *m = m_pullup(*m, sizeof(struct ip));
+	if (*m == NULL)
+		goto ret; */
+
+	/* Cram the mbuf into an ip packet struct */
+	ip = mtod(*m, struct ip *);
+
+	/* Only continue processing if the packet is TCP */
+	if(ip->ip_p != IPPROTO_TCP)
+		goto ret;
+	
+	/*
+	 * If a kernel subsystem reinjects packets into the stack, our pfil
+	 * hook will be called multiple times for the same packet.
+	 * Make sure we only process unique packets.
+	 */
+	if (m_tag_locate(*m, PACKET_COOKIE_SIFTR, PACKET_TAG_SIFTR, NULL)
+	    != NULL) {
+
+		if(dir == PFIL_IN)
+			siftr_num_inbound_skipped_pkts_dejavu++;
+		else
+			siftr_num_outbound_skipped_pkts_dejavu++;
+
+		goto ret;
+	}
+	else {
+		struct m_tag *tag = m_tag_alloc( PACKET_COOKIE_SIFTR,
+						 PACKET_TAG_SIFTR,
+						 0,
+						 M_NOWAIT
+		);
+		if (tag == NULL) {
+			if(dir == PFIL_IN)
+				siftr_num_inbound_skipped_pkts_malloc++;
+			else
+				siftr_num_outbound_skipped_pkts_malloc++;
+
+			goto ret;
+		}
+
+		m_tag_prepend(*m, tag);
+	}
+
+	if(dir == PFIL_IN)
+		siftr_num_inbound_tcp_pkts++;
+	else
+		siftr_num_outbound_tcp_pkts++;
+
+	/*
+	 * Create a tcphdr struct starting at the correct offset
+	 * in the IP packet. ip->ip_hl gives the ip header length
+	 * in 4-byte words, so multiply it to get the size in bytes
+	 */
+	ip_hl = (ip->ip_hl << 2);
+	th = (struct tcphdr *)((caddr_t)ip + ip_hl);
+
+	/*
+	 * If the pfil hooks don't provide a pointer to the
+	 * IP control block, we need to find it ourselves and lock it
+	 */
+	if (!inp) {
+		/* Find the corresponding inpcb for this pkt */
+
+		/* We need the tcbinfo lock */
+#if (__FreeBSD_version >= 700000)
+		INP_INFO_UNLOCK_ASSERT(_siftrtcbinfo);
+#endif
+		INP_INFO_RLOCK(_siftrtcbinfo);
+
+		if (dir == PFIL_IN)
+			inp = in_pcblookup_hash(_siftrtcbinfo,
+						ip->ip_src,
+						th->th_sport,
+						ip->ip_dst,
+						th->th_dport,
+						0,
+						(*m)->m_pkthdr.rcvif
+			);
+		else
+			inp = in_pcblookup_hash(_siftrtcbinfo,
+						ip->ip_dst,
+						th->th_dport,
+						ip->ip_src,
+						th->th_sport,
+						0,
+						(*m)->m_pkthdr.rcvif
+			);
+
+		/* If we can't find the IP control block, bail */
+		if (!inp) {
+			if(dir == PFIL_IN)
+				siftr_num_inbound_skipped_pkts_icb++;
+			else
+				siftr_num_outbound_skipped_pkts_icb++;
+
+			INP_INFO_RUNLOCK(_siftrtcbinfo);
+
+			goto ret;
+		}
+
+		/* Acquire the inpcb lock */
+		INP_UNLOCK_ASSERT(inp);
+#if (__FreeBSD_version >= 701000)
+		INP_RLOCK(inp);
+#else
+		INP_LOCK(inp);
+#endif
+		INP_INFO_RUNLOCK(_siftrtcbinfo);
+
+		inp_locally_locked = 1;
+	}
+
+	INP_LOCK_ASSERT(inp);
+
+	pkt_node = malloc(sizeof(struct pkt_node), M_SIFTR_PKTNODE, M_NOWAIT | M_ZERO);
+	
+	if (pkt_node == NULL) {
+
+		if(dir == PFIL_IN)
+			siftr_num_inbound_skipped_pkts_malloc++;
+		else
+			siftr_num_outbound_skipped_pkts_malloc++;
+
+		goto inp_unlock;
+	}
+
+	/* Find the TCP control block that corresponds with this packet */
+	tp = intotcpcb(inp);
+
+	/*
+	 * If we can't find the TCP control block (happens occasionally for a
+	 * packet sent during the shutdown phase of a TCP connection),
+	 * or we're in the timewait state, bail
+	 */
+#if (INP_TIMEWAIT == 0x8)
+	if (!tp || (inp->inp_vflag & INP_TIMEWAIT)) {
+#else
+	if (!tp || (inp->inp_flags & INP_TIMEWAIT)) {
+#endif
+		if(dir == PFIL_IN)
+			siftr_num_inbound_skipped_pkts_tcb++;
+		else
+			siftr_num_outbound_skipped_pkts_tcb++;
+
+		free(pkt_node, M_SIFTR_PKTNODE);
+		goto inp_unlock;
+	}
+
+	/* Fill in pkt_node data */
+#ifdef SIFTR_IPV6
+	pkt_node->ip_laddr[3] = inp->inp_laddr.s_addr;
+	pkt_node->ip_faddr[3] = inp->inp_faddr.s_addr;
+#else
+	*((uint32_t *)pkt_node->ip_laddr) = inp->inp_laddr.s_addr;
+	*((uint32_t *)pkt_node->ip_faddr) = inp->inp_faddr.s_addr;
+#endif
+	pkt_node->ipver = INP_IPV4;
+	pkt_node->tcp_localport = inp->inp_lport;
+	pkt_node->tcp_foreignport = inp->inp_fport;
+	pkt_node->snd_cwnd = tp->snd_cwnd;
+	pkt_node->snd_wnd = tp->snd_wnd;
+	pkt_node->rcv_wnd = tp->rcv_wnd;
+	pkt_node->snd_bwnd = tp->snd_bwnd;
+	pkt_node->snd_ssthresh = tp->snd_ssthresh;
+	pkt_node->snd_scale = tp->snd_scale;
+	pkt_node->rcv_scale = tp->rcv_scale;
+	pkt_node->conn_state = tp->t_state;
+	pkt_node->max_seg_size = tp->t_maxseg;
+	pkt_node->smoothed_rtt = tp->t_srtt;
+#if (__FreeBSD_version >= 700000)
+	pkt_node->sack_enabled = tp->t_flags & TF_SACK_PERMIT;
+#else
+	pkt_node->sack_enabled = tp->sack_enable;
+#endif
+	pkt_node->flags = tp->t_flags;
+	pkt_node->rxt_length = tp->t_rxtcur;
+	pkt_node->snd_buf_hiwater = inp->inp_socket->so_snd.sb_hiwat;
+	pkt_node->snd_buf_cc = inp->inp_socket->so_snd.sb_cc;
+	pkt_node->rcv_buf_hiwater = inp->inp_socket->so_rcv.sb_hiwat;
+	pkt_node->rcv_buf_cc = inp->inp_socket->so_rcv.sb_cc;
+	pkt_node->sent_inflight_bytes = tp->snd_max - tp->snd_una;
+
+	/* We've finished accessing the tcb so release the lock */
+	if (inp_locally_locked)
+#if (__FreeBSD_version >= 701000)
+		INP_RUNLOCK(inp);
+#else
+		INP_UNLOCK(inp);
+#endif
+
+	pkt_node->direction = dir;
+
+	/*
+	 * Significantly more accurate than using getmicrotime(), but slower!
+	 * Gives true microsecond resolution at the expense of a hit to
+	 * maximum pps throughput processing when SIFTR is loaded and enabled.
+	 */
+	microtime(&(pkt_node->tval));
+
+	if (siftr_generate_hashes) {
+
+		if ((*m)->m_pkthdr.csum_flags & CSUM_TCP) {
+			/*
+			 * For outbound packets, the TCP checksum isn't
+			 * calculated yet. This is a problem for our packet
+			 * hashing as the receiver will calc a different hash
+			 * to ours if we don't include the correct TCP checksum
+			 * in the bytes being hashed. To work around this
+			 * problem, we manually calc the TCP checksum here in
+			 * software. We unset the CSUM_TCP flag so the lower
+			 * layers don't recalc it.
+			 */
+			(*m)->m_pkthdr.csum_flags &= ~CSUM_TCP;
+	
+			/*
+			 * Calculate the TCP checksum in software and assign
+			 * to correct TCP header field, which will follow the
+			 * packet mbuf down the stack. The trick here is that
+			 * tcp_output() sets th->th_sum to the checksum of the
+			 * pseudo header for us already. Because of the nature
+			 * of the checksumming algorithm, we can sum over the
+			 * entire IP payload (i.e. TCP header and data), which
+			 * will include the already calculated pseudo header
+			 * checksum, thus giving us the complete TCP checksum.
+			 *
+			 * To put it in simple terms, if checksum(1,2,3,4)=10,
+			 * then checksum(1,2,3,4,5) == checksum(10,5).
+			 * This property is what allows us to "cheat" and
+			 * checksum only the IP payload which has the TCP
+			 * th_sum field populated with the pseudo header's
+			 * checksum, and not need to futz around checksumming
+			 * pseudo header bytes and TCP header/data in one hit.
+			 * Refer to RFC 1071 for more info.
+			 *
+			 * NB: in_cksum_skip(struct mbuf *m, int len, int skip)
+			 * in_cksum_skip 2nd argument is NOT the number of
+			 * bytes to read from the mbuf at "skip" bytes offset
+			 * from the start of the mbuf (very counterintuitive!).
+			 * The number of bytes to read is calculated internally
+			 * by the function as len-skip i.e. to sum over the IP
+			 * payload (TCP header + data) bytes, it is INCORRECT
+			 * to call the function like this:
+			 * in_cksum_skip(at, ip->ip_len - offset, offset)
+			 * Rather, it should be called like this:
+			 * in_cksum_skip(at, ip->ip_len, offset)
+			 * which means read "ip->ip_len - offset" bytes from
+			 * the mbuf cluster "at" at offset "offset" bytes from
+			 * the beginning of the "at" mbuf's data pointer.
+			 */
+			th->th_sum  = in_cksum_skip(*m, ip->ip_len, ip_hl);
+		}
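The RFC 1071 folding property the comment above relies on — summing a previously computed partial checksum together with the remaining words gives the same result as one pass over everything — can be checked with a minimal ones-complement summer. A sketch only, not the kernel's in_cksum_skip():

```c
#include <stddef.h>
#include <stdint.h>

/* Minimal RFC 1071 ones-complement sum over 16-bit words, seeded
 * with a previously computed partial sum. */
static uint16_t
ocsum(const uint16_t *w, size_t n, uint32_t seed)
{
	uint32_t sum = seed;
	size_t i;

	for (i = 0; i < n; i++)
		sum += w[i];
	while (sum >> 16)		/* fold carries back into the low word */
		sum = (sum & 0xFFFF) + (sum >> 16);
	return (uint16_t)sum;
}
```

In the comment's terms: checksum(1,2,3,4,5) == checksum(checksum(1,2,3,4), 5), which is why summing the IP payload with th_sum pre-loaded with the pseudo header sum yields the full TCP checksum.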
+		/*
+		printf("th->th_sum: 0x%04x\n\n", th->th_sum);
+		nanotime(&start);
+		*/
+
+		/*
+		 * XXX: Having to calculate the checksum in software and then
+		 * hash over all bytes is really inefficient. Would be nice to
+		 * find a way to create the hash and checksum in the same pass
+		 * over the bytes.
+		 */
+		pkt_node->hash = hash_pkt(*m, ip_hl);
+		
+		/*
+		nanotime(&end);

*** DIFF OUTPUT TRUNCATED AT 1000 LINES ***


