From owner-svn-src-vendor@FreeBSD.ORG  Tue Aug 18 16:14:00 2009
Message-Id: <200908181613.n7IGDxYr022018@svn.freebsd.org>
From: Max Laier <mlaier@FreeBSD.org>
Date: Tue, 18 Aug 2009 16:13:59 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org,
        svn-src-vendor@freebsd.org
Subject: svn commit: r196360 - vendor-sys/pf/dist/net vendor-sys/pf/dist/netinet
        vendor/pf/dist/authpf vendor/pf/dist/ftp-proxy vendor/pf/dist/libevent
        vendor/pf/dist/man vendor/pf/dist/pfctl vendor/pf/dist...
X-SVN-Group: vendor-sys
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
List-Id: SVN commit messages for the vendor work area tree

Author: mlaier
Date: Tue Aug 18 16:13:59 2009
New Revision: 196360
URL: http://svn.freebsd.org/changeset/base/196360

Log:
  eri@ wants to start on porting the latest pf in his user space so we can
  finally have a new version in 9.0.  Import pf as of OPENBSD_4_5_BASE to
  help with that.
Added:
  vendor-sys/pf/dist/net/if_pflow.c
  vendor-sys/pf/dist/net/if_pflow.h
  vendor-sys/pf/dist/net/pf_lb.c
Modified:
  vendor-sys/pf/dist/net/if_pflog.c
  vendor-sys/pf/dist/net/if_pflog.h
  vendor-sys/pf/dist/net/if_pfsync.c
  vendor-sys/pf/dist/net/if_pfsync.h
  vendor-sys/pf/dist/net/pf.c
  vendor-sys/pf/dist/net/pf_if.c
  vendor-sys/pf/dist/net/pf_ioctl.c
  vendor-sys/pf/dist/net/pf_norm.c
  vendor-sys/pf/dist/net/pf_osfp.c
  vendor-sys/pf/dist/net/pf_ruleset.c
  vendor-sys/pf/dist/net/pf_table.c
  vendor-sys/pf/dist/net/pfvar.h
  vendor-sys/pf/dist/netinet/in4_cksum.c

Changes in other areas also in this revision:
Added:
  vendor/pf/dist/man/pflow.4
Modified:
  vendor/pf/dist/authpf/Makefile
  vendor/pf/dist/authpf/authpf.8
  vendor/pf/dist/authpf/authpf.c
  vendor/pf/dist/authpf/pathnames.h
  vendor/pf/dist/ftp-proxy/Makefile
  vendor/pf/dist/ftp-proxy/filter.c
  vendor/pf/dist/ftp-proxy/filter.h
  vendor/pf/dist/ftp-proxy/ftp-proxy.8
  vendor/pf/dist/ftp-proxy/ftp-proxy.c
  vendor/pf/dist/libevent/buffer.c
  vendor/pf/dist/libevent/evbuffer.c
  vendor/pf/dist/libevent/event-internal.h
  vendor/pf/dist/libevent/event.c
  vendor/pf/dist/libevent/event.h
  vendor/pf/dist/libevent/evsignal.h
  vendor/pf/dist/libevent/kqueue.c
  vendor/pf/dist/libevent/log.c
  vendor/pf/dist/libevent/log.h
  vendor/pf/dist/libevent/poll.c
  vendor/pf/dist/libevent/select.c
  vendor/pf/dist/libevent/signal.c
  vendor/pf/dist/man/pf.4
  vendor/pf/dist/man/pf.conf.5
  vendor/pf/dist/man/pf.os.5
  vendor/pf/dist/man/pflog.4
  vendor/pf/dist/man/pfsync.4
  vendor/pf/dist/pfctl/Makefile
  vendor/pf/dist/pfctl/parse.y
  vendor/pf/dist/pfctl/pf_print_state.c
  vendor/pf/dist/pfctl/pfctl.8
  vendor/pf/dist/pfctl/pfctl.c
  vendor/pf/dist/pfctl/pfctl.h
  vendor/pf/dist/pfctl/pfctl_altq.c
  vendor/pf/dist/pfctl/pfctl_optimize.c
  vendor/pf/dist/pfctl/pfctl_osfp.c
  vendor/pf/dist/pfctl/pfctl_parser.c
  vendor/pf/dist/pfctl/pfctl_parser.h
  vendor/pf/dist/pfctl/pfctl_qstats.c
  vendor/pf/dist/pfctl/pfctl_radix.c
  vendor/pf/dist/pfctl/pfctl_table.c
  vendor/pf/dist/pflogd/Makefile
  vendor/pf/dist/pflogd/pflogd.8
  vendor/pf/dist/pflogd/pflogd.c
  vendor/pf/dist/pflogd/pflogd.h
  vendor/pf/dist/pflogd/privsep.c
  vendor/pf/dist/pflogd/privsep_fdpass.c
  vendor/pf/dist/tftp-proxy/Makefile
  vendor/pf/dist/tftp-proxy/filter.c
  vendor/pf/dist/tftp-proxy/filter.h
  vendor/pf/dist/tftp-proxy/tftp-proxy.8
  vendor/pf/dist/tftp-proxy/tftp-proxy.c

Modified: vendor-sys/pf/dist/net/if_pflog.c
==============================================================================
--- vendor-sys/pf/dist/net/if_pflog.c   Tue Aug 18 14:00:25 2009    (r196359)
+++ vendor-sys/pf/dist/net/if_pflog.c   Tue Aug 18 16:13:59 2009    (r196360)
@@ -1,4 +1,4 @@
-/* $OpenBSD: if_pflog.c,v 1.27 2007/12/20 02:53:02 brad Exp $ */
+/* $OpenBSD: if_pflog.c,v 1.26 2007/10/18 21:58:18 mpf Exp $ */
 /*
  * The authors of this code are John Ioannidis (ji@tla.org),
  * Angelos D. Keromytis (kermit@csd.uch.gr) and

Modified: vendor-sys/pf/dist/net/if_pflog.h
==============================================================================
--- vendor-sys/pf/dist/net/if_pflog.h   Tue Aug 18 14:00:25 2009    (r196359)
+++ vendor-sys/pf/dist/net/if_pflog.h   Tue Aug 18 16:13:59 2009    (r196360)
@@ -1,4 +1,4 @@
-/* $OpenBSD: if_pflog.h,v 1.14 2006/10/25 11:27:01 henning Exp $ */
+/* $OpenBSD: if_pflog.h,v 1.13 2006/10/23 12:46:09 henning Exp $ */
 /*
  * Copyright 2001 Niels Provos
  * All rights reserved.
Added: vendor-sys/pf/dist/net/if_pflow.c
==============================================================================
--- /dev/null   00:00:00 1970   (empty, because file is newly added)
+++ vendor-sys/pf/dist/net/if_pflow.c   Tue Aug 18 16:13:59 2009    (r196360)
@@ -0,0 +1,621 @@
+/* $OpenBSD: if_pflow.c,v 1.9 2009/01/03 21:47:32 gollo Exp $ */
+
+/*
+ * Copyright (c) 2008 Henning Brauer
+ * Copyright (c) 2008 Joerg Goltermann
+ *
+ * Permission to use, copy, modify, and distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ * WHATSOEVER RESULTING FROM LOSS OF MIND, USE, DATA OR PROFITS, WHETHER IN
+ * AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+ * OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#ifdef INET
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#endif /* INET */
+
+#include
+#include
+
+#include "bpfilter.h"
+#include "pflow.h"
+
+#define PFLOW_MINMTU    \
+    (sizeof(struct pflow_header) + sizeof(struct pflow_flow))
+
+#ifdef PFLOWDEBUG
+#define DPRINTF(x)      do { printf x ; } while (0)
+#else
+#define DPRINTF(x)
+#endif
+
+SLIST_HEAD(, pflow_softc) pflowif_list;
+struct pflowstats pflowstats;
+
+void    pflowattach(int);
+int     pflow_clone_create(struct if_clone *, int);
+int     pflow_clone_destroy(struct ifnet *);
+void    pflow_setmtu(struct pflow_softc *, int);
+int     pflowoutput(struct ifnet *, struct mbuf *, struct sockaddr *,
+            struct rtentry *);
+int     pflowioctl(struct ifnet *, u_long, caddr_t);
+void    pflowstart(struct ifnet *);
+
+struct mbuf *pflow_get_mbuf(struct pflow_softc *);
+int     pflow_sendout(struct pflow_softc *);
+int     pflow_sendout_mbuf(struct pflow_softc *, struct mbuf *);
+void    pflow_timeout(void *);
+void    copy_flow_data(struct pflow_flow *, struct pflow_flow *,
+            struct pf_state *, int, int);
+int     pflow_pack_flow(struct pf_state *, struct pflow_softc *);
+int     pflow_get_dynport(void);
+int     export_pflow_if(struct pf_state*, struct pflow_softc *);
+int     copy_flow_to_m(struct pflow_flow *flow, struct pflow_softc *sc);
+
+struct if_clone pflow_cloner =
+    IF_CLONE_INITIALIZER("pflow", pflow_clone_create,
+    pflow_clone_destroy);
+
+/* from in_pcb.c */
+extern int ipport_hifirstauto;
+extern int ipport_hilastauto;
+
+/* from kern/kern_clock.c; incremented each clock tick. */
+extern int ticks;
+
+void
+pflowattach(int npflow)
+{
+    SLIST_INIT(&pflowif_list);
+    if_clone_attach(&pflow_cloner);
+}
+
+int
+pflow_clone_create(struct if_clone *ifc, int unit)
+{
+    struct ifnet *ifp;
+    struct pflow_softc *pflowif;
+
+    if ((pflowif = malloc(sizeof(*pflowif),
+        M_DEVBUF, M_NOWAIT|M_ZERO)) == NULL)
+        return (ENOMEM);
+
+    pflowif->sc_sender_ip.s_addr = INADDR_ANY;
+    pflowif->sc_sender_port = pflow_get_dynport();
+
+    pflowif->sc_imo.imo_membership = malloc(
+        (sizeof(struct in_multi *) * IP_MIN_MEMBERSHIPS), M_IPMOPTS,
+        M_WAITOK|M_ZERO);
+    pflowif->sc_imo.imo_max_memberships = IP_MIN_MEMBERSHIPS;
+    pflowif->sc_receiver_ip.s_addr = 0;
+    pflowif->sc_receiver_port = 0;
+    pflowif->sc_sender_ip.s_addr = INADDR_ANY;
+    pflowif->sc_sender_port = pflow_get_dynport();
+    ifp = &pflowif->sc_if;
+    snprintf(ifp->if_xname, sizeof ifp->if_xname, "pflow%d", unit);
+    ifp->if_softc = pflowif;
+    ifp->if_ioctl = pflowioctl;
+    ifp->if_output = pflowoutput;
+    ifp->if_start = pflowstart;
+    ifp->if_type = IFT_PFLOW;
+    ifp->if_snd.ifq_maxlen = ifqmaxlen;
+    ifp->if_hdrlen = PFLOW_HDRLEN;
+    ifp->if_flags = IFF_UP;
+    ifp->if_flags &= ~IFF_RUNNING;  /* not running, need receiver */
+    pflow_setmtu(pflowif, ETHERMTU);
+    timeout_set(&pflowif->sc_tmo, pflow_timeout, pflowif);
+    if_attach(ifp);
+    if_alloc_sadl(ifp);
+
+#if NBPFILTER > 0
+    bpfattach(&pflowif->sc_if.if_bpf, ifp, DLT_RAW, 0);
+#endif
+
+    /* Insert into list of pflows */
+    SLIST_INSERT_HEAD(&pflowif_list, pflowif, sc_next);
+    return (0);
+}
+
+int
+pflow_clone_destroy(struct ifnet *ifp)
+{
+    struct pflow_softc *sc = ifp->if_softc;
+    int s;
+
+    s = splnet();
+    pflow_sendout(sc);
+#if NBPFILTER > 0
+    bpfdetach(ifp);
+#endif
+    if_detach(ifp);
+    SLIST_REMOVE(&pflowif_list, sc, pflow_softc, sc_next);
+    free(sc->sc_imo.imo_membership, M_IPMOPTS);
+    free(sc, M_DEVBUF);
+    splx(s);
+    return (0);
+}
+
+/*
+ * Start output on the pflow interface.
+ */
+void
+pflowstart(struct ifnet *ifp)
+{
+    struct mbuf *m;
+    int s;
+
+    for (;;) {
+        s = splnet();
+        IF_DROP(&ifp->if_snd);
+        IF_DEQUEUE(&ifp->if_snd, m);
+        splx(s);
+
+        if (m == NULL)
+            return;
+        m_freem(m);
+    }
+}
+
+int
+pflowoutput(struct ifnet *ifp, struct mbuf *m, struct sockaddr *dst,
+    struct rtentry *rt)
+{
+    m_freem(m);
+    return (0);
+}
+
+/* ARGSUSED */
+int
+pflowioctl(struct ifnet *ifp, u_long cmd, caddr_t data)
+{
+    struct proc *p = curproc;
+    struct pflow_softc *sc = ifp->if_softc;
+    struct ifreq *ifr = (struct ifreq *)data;
+    struct pflowreq pflowr;
+    int s, error;
+
+    switch (cmd) {
+    case SIOCSIFADDR:
+    case SIOCAIFADDR:
+    case SIOCSIFDSTADDR:
+    case SIOCSIFFLAGS:
+        if ((ifp->if_flags & IFF_UP) &&
+            sc->sc_receiver_ip.s_addr != 0 &&
+            sc->sc_receiver_port != 0) {
+            ifp->if_flags |= IFF_RUNNING;
+            sc->sc_gcounter=pflowstats.pflow_flows;
+        } else
+            ifp->if_flags &= ~IFF_RUNNING;
+        break;
+    case SIOCSIFMTU:
+        if (ifr->ifr_mtu < PFLOW_MINMTU)
+            return (EINVAL);
+        if (ifr->ifr_mtu > MCLBYTES)
+            ifr->ifr_mtu = MCLBYTES;
+        s = splnet();
+        if (ifr->ifr_mtu < ifp->if_mtu)
+            pflow_sendout(sc);
+        pflow_setmtu(sc, ifr->ifr_mtu);
+        splx(s);
+        break;
+
+    case SIOCGETPFLOW:
+        bzero(&pflowr, sizeof(pflowr));
+
+        pflowr.sender_ip = sc->sc_sender_ip;
+        pflowr.receiver_ip = sc->sc_receiver_ip;
+        pflowr.receiver_port = sc->sc_receiver_port;
+
+        if ((error = copyout(&pflowr, ifr->ifr_data,
+            sizeof(pflowr))))
+            return (error);
+        break;
+
+    case SIOCSETPFLOW:
+        if ((error = suser(p, p->p_acflag)) != 0)
+            return (error);
+        if ((error = copyin(ifr->ifr_data, &pflowr,
+            sizeof(pflowr))))
+            return (error);
+
+        s = splnet();
+        pflow_sendout(sc);
+        splx(s);
+
+        if (pflowr.addrmask & PFLOW_MASK_DSTIP)
+            sc->sc_receiver_ip = pflowr.receiver_ip;
+        if (pflowr.addrmask & PFLOW_MASK_DSTPRT)
+            sc->sc_receiver_port = pflowr.receiver_port;
+        if (pflowr.addrmask & PFLOW_MASK_SRCIP)
+            sc->sc_sender_ip.s_addr = pflowr.sender_ip.s_addr;
+
+        if ((ifp->if_flags & IFF_UP) &&
+            sc->sc_receiver_ip.s_addr != 0 &&
+            sc->sc_receiver_port != 0) {
+            ifp->if_flags |= IFF_RUNNING;
+            sc->sc_gcounter=pflowstats.pflow_flows;
+        } else
+            ifp->if_flags &= ~IFF_RUNNING;
+
+        break;
+
+    default:
+        return (ENOTTY);
+    }
+    return (0);
+}
+
+void
+pflow_setmtu(struct pflow_softc *sc, int mtu_req)
+{
+    int mtu;
+
+    if (sc->sc_pflow_ifp && sc->sc_pflow_ifp->if_mtu < mtu_req)
+        mtu = sc->sc_pflow_ifp->if_mtu;
+    else
+        mtu = mtu_req;
+
+    sc->sc_maxcount = (mtu - sizeof(struct pflow_header) -
+        sizeof (struct udpiphdr)) / sizeof(struct pflow_flow);
+    if (sc->sc_maxcount > PFLOW_MAXFLOWS)
+        sc->sc_maxcount = PFLOW_MAXFLOWS;
+    sc->sc_if.if_mtu = sizeof(struct pflow_header) +
+        sizeof (struct udpiphdr) +
+        sc->sc_maxcount * sizeof(struct pflow_flow);
+}
+
+struct mbuf *
+pflow_get_mbuf(struct pflow_softc *sc)
+{
+    struct pflow_header h;
+    struct mbuf *m;
+
+    MGETHDR(m, M_DONTWAIT, MT_DATA);
+    if (m == NULL) {
+        pflowstats.pflow_onomem++;
+        return (NULL);
+    }
+
+    MCLGET(m, M_DONTWAIT);
+    if ((m->m_flags & M_EXT) == 0) {
+        m_free(m);
+        pflowstats.pflow_onomem++;
+        return (NULL);
+    }
+
+    m->m_len = m->m_pkthdr.len = 0;
+    m->m_pkthdr.rcvif = NULL;
+
+    /* populate pflow_header */
+    h.reserved1 = 0;
+    h.reserved2 = 0;
+    h.count = 0;
+    h.version = htons(PFLOW_VERSION);
+    h.flow_sequence = htonl(sc->sc_gcounter);
+    h.engine_type = PFLOW_ENGINE_TYPE;
+    h.engine_id = PFLOW_ENGINE_ID;
+    m_copyback(m, 0, PFLOW_HDRLEN, &h);
+
+    sc->sc_count = 0;
+    timeout_add_sec(&sc->sc_tmo, PFLOW_TIMEOUT);
+    return (m);
+}
+
+void
+copy_flow_data(struct pflow_flow *flow1, struct pflow_flow *flow2,
+    struct pf_state *st, int src, int dst)
+{
+    struct pf_state_key *sk = st->key[PF_SK_WIRE];
+
+    flow1->src_ip = flow2->dest_ip = sk->addr[src].v4.s_addr;
+    flow1->src_port = flow2->dest_port = sk->port[src];
+    flow1->dest_ip = flow2->src_ip = sk->addr[dst].v4.s_addr;
+    flow1->dest_port = flow2->src_port = sk->port[dst];
+
+    flow1->dest_as = flow2->src_as =
+        flow1->src_as = flow2->dest_as = 0;
+    flow1->if_index_out = flow2->if_index_in =
+        flow1->if_index_in = flow2->if_index_out = 0;
+    flow1->dest_mask = flow2->src_mask =
+        flow1->src_mask = flow2->dest_mask = 0;
+
+    flow1->flow_packets = htonl(st->packets[0]);
+    flow2->flow_packets = htonl(st->packets[1]);
+    flow1->flow_octets = htonl(st->bytes[0]);
+    flow2->flow_octets = htonl(st->bytes[1]);
+
+    flow1->flow_start = flow2->flow_start = htonl(st->creation * 1000);
+    flow1->flow_finish = flow2->flow_finish = htonl(time_second * 1000);
+    flow1->tcp_flags = flow2->tcp_flags = 0;
+    flow1->protocol = flow2->protocol = sk->proto;
+    flow1->tos = flow2->tos = st->rule.ptr->tos;
+}
+
+int
+export_pflow(struct pf_state *st)
+{
+    struct pflow_softc *sc = NULL;
+    struct pf_state_key *sk = st->key[PF_SK_WIRE];
+
+    if (sk->af != AF_INET)
+        return (0);
+
+    SLIST_FOREACH(sc, &pflowif_list, sc_next) {
+        export_pflow_if(st, sc);
+    }
+
+    return (0);
+}
+
+int
+export_pflow_if(struct pf_state *st, struct pflow_softc *sc)
+{
+    struct pf_state pfs_copy;
+    struct ifnet *ifp = &sc->sc_if;
+    u_int64_t bytes[2];
+    int ret = 0;
+
+    if (!(ifp->if_flags & IFF_RUNNING))
+        return (0);
+
+    if ((st->bytes[0] < (u_int64_t)PFLOW_MAXBYTES)
+        && (st->bytes[1] < (u_int64_t)PFLOW_MAXBYTES))
+        return (pflow_pack_flow(st, sc));
+
+    /* flow > PFLOW_MAXBYTES need special handling */
+    bcopy(st, &pfs_copy, sizeof(pfs_copy));
+    bytes[0] = pfs_copy.bytes[0];
+    bytes[1] = pfs_copy.bytes[1];
+
+    while (bytes[0] > PFLOW_MAXBYTES) {
+        pfs_copy.bytes[0] = PFLOW_MAXBYTES;
+        pfs_copy.bytes[1] = 0;
+
+        if ((ret = pflow_pack_flow(&pfs_copy, sc)) != 0)
+            return (ret);
+        if ((bytes[0] - PFLOW_MAXBYTES) > 0)
+            bytes[0] -= PFLOW_MAXBYTES;
+    }
+
+    while (bytes[1] > (u_int64_t)PFLOW_MAXBYTES) {
+        pfs_copy.bytes[1] = PFLOW_MAXBYTES;
+        pfs_copy.bytes[0] = 0;
+
+        if ((ret = pflow_pack_flow(&pfs_copy, sc)) != 0)
+            return (ret);
+        if ((bytes[1] - PFLOW_MAXBYTES) > 0)
+            bytes[1] -= PFLOW_MAXBYTES;
+    }
+
+    pfs_copy.bytes[0] = bytes[0];
+    pfs_copy.bytes[1] = bytes[1];
+
+    return (pflow_pack_flow(&pfs_copy, sc));
+}
+
+int
+copy_flow_to_m(struct pflow_flow *flow, struct pflow_softc *sc)
+{
+    int s, ret = 0;
+
+    s = splnet();
+    if (sc->sc_mbuf == NULL) {
+        if ((sc->sc_mbuf = pflow_get_mbuf(sc)) == NULL) {
+            splx(s);
+            return (ENOBUFS);
+        }
+    }
+    m_copyback(sc->sc_mbuf, PFLOW_HDRLEN +
+        (sc->sc_count * sizeof (struct pflow_flow)),
+        sizeof (struct pflow_flow), flow);
+
+    if (pflowstats.pflow_flows == sc->sc_gcounter)
+        pflowstats.pflow_flows++;
+    sc->sc_gcounter++;
+    sc->sc_count++;
+
+    if (sc->sc_count >= sc->sc_maxcount)
+        ret = pflow_sendout(sc);
+
+    splx(s);
+    return(ret);
+}
+
+int
+pflow_pack_flow(struct pf_state *st, struct pflow_softc *sc)
+{
+    struct pflow_flow flow1;
+    struct pflow_flow flow2;
+    int ret = 0;
+
+    bzero(&flow1, sizeof(flow1));
+    bzero(&flow2, sizeof(flow2));
+
+    if (st->direction == PF_OUT)
+        copy_flow_data(&flow1, &flow2, st, 1, 0);
+    else
+        copy_flow_data(&flow1, &flow2, st, 0, 1);
+
+    if (st->bytes[0] != 0) /* first flow from state */
+        ret = copy_flow_to_m(&flow1, sc);
+
+    if (st->bytes[1] != 0) /* second flow from state */
+        ret = copy_flow_to_m(&flow2, sc);
+
+    return (ret);
+}
+
+void
+pflow_timeout(void *v)
+{
+    struct pflow_softc *sc = v;
+    int s;
+
+    s = splnet();
+    pflow_sendout(sc);
+    splx(s);
+}
+
+/* This must be called in splnet() */
+int
+pflow_sendout(struct pflow_softc *sc)
+{
+    struct mbuf *m = sc->sc_mbuf;
+    struct pflow_header *h;
+    struct ifnet *ifp = &sc->sc_if;
+
+    timeout_del(&sc->sc_tmo);
+
+    if (m == NULL)
+        return (0);
+
+    sc->sc_mbuf = NULL;
+    if (!(ifp->if_flags & IFF_RUNNING)) {
+        m_freem(m);
+        return (0);
+    }
+
+    pflowstats.pflow_packets++;
+    h = mtod(m, struct pflow_header *);
+    h->count = htons(sc->sc_count);
+
+    /* populate pflow_header */
+    h->uptime_ms = htonl(time_uptime * 1000);
+    h->time_sec = htonl(time_second);
+    h->time_nanosec = htonl(ticks);
+
+    return (pflow_sendout_mbuf(sc, m));
+}
+
+int
+pflow_sendout_mbuf(struct pflow_softc *sc, struct mbuf *m)
+{
+    struct udpiphdr *ui;
+    u_int16_t len = m->m_pkthdr.len;
+    struct ifnet *ifp = &sc->sc_if;
+    struct ip *ip;
+    int err;
+
+    /* UDP Header*/
+    M_PREPEND(m, sizeof(struct udpiphdr), M_DONTWAIT);
+    if (m == NULL) {
+        pflowstats.pflow_onomem++;
+        return (ENOBUFS);
+    }
+
+    ui = mtod(m, struct udpiphdr *);
+    ui->ui_pr = IPPROTO_UDP;
+    ui->ui_src = sc->sc_sender_ip;
+    ui->ui_sport = sc->sc_sender_port;
+    ui->ui_dst = sc->sc_receiver_ip;
+    ui->ui_dport = sc->sc_receiver_port;
+    ui->ui_ulen = htons(sizeof (struct udphdr) + len);
+
+    ip = (struct ip *)ui;
+    ip->ip_v = IPVERSION;
+    ip->ip_hl = sizeof(struct ip) >> 2;
+    ip->ip_id = htons(ip_randomid());
+    ip->ip_off = htons(IP_DF);
+    ip->ip_tos = IPTOS_LOWDELAY;
+    ip->ip_ttl = IPDEFTTL;
+    ip->ip_len = htons(sizeof (struct udpiphdr) + len);
+
+    /*
+     * Compute the pseudo-header checksum; defer further checksumming
+     * until ip_output() or hardware (if it exists).
+     */
+    m->m_pkthdr.csum_flags |= M_UDPV4_CSUM_OUT;
+    ui->ui_sum = in_cksum_phdr(ui->ui_src.s_addr,
+        ui->ui_dst.s_addr, htons(len + sizeof(struct udphdr) +
+        IPPROTO_UDP));
+
+#if NBPFILTER > 0
+    if (ifp->if_bpf) {
+        ip->ip_sum = in_cksum(m, ip->ip_hl << 2);
+        bpf_mtap(ifp->if_bpf, m, BPF_DIRECTION_OUT);
+    }
+#endif
+
+    sc->sc_if.if_opackets++;
+    sc->sc_if.if_obytes += m->m_pkthdr.len;
+
+    if ((err = ip_output(m, NULL, NULL, IP_RAWOUTPUT, &sc->sc_imo, NULL))) {
+        pflowstats.pflow_oerrors++;
+        sc->sc_if.if_oerrors++;
+    }
+    return (err);
+}
+
+int
+pflow_get_dynport(void)
+{
+    u_int16_t tmp, low, high, cut;
+
+    low = ipport_hifirstauto;   /* sysctl */
+    high = ipport_hilastauto;
+
+    cut = arc4random_uniform(1 + high - low) + low;
+
+    for (tmp = cut; tmp <= high; ++(tmp)) {
+        if (!in_baddynamic(tmp, IPPROTO_UDP))
+            return (htons(tmp));
+    }
+
+    for (tmp = cut - 1; tmp >= low; --(tmp)) {
+        if (!in_baddynamic(tmp, IPPROTO_UDP))
+            return (htons(tmp));
+    }
+
+    return (htons(ipport_hilastauto)); /* XXX */
+}
+
+int
+pflow_sysctl(int *name, u_int namelen, void *oldp, size_t *oldlenp,
+    void *newp, size_t newlen)
+{
+    if (namelen != 1)
+        return (ENOTDIR);
+
+    switch (name[0]) {
+    case NET_PFLOW_STATS:
+        if (newp != NULL)
+            return (EPERM);
+        return (sysctl_struct(oldp, oldlenp, newp, newlen,
+            &pflowstats, sizeof(pflowstats)));
+    default:
+        return (EOPNOTSUPP);
+    }
+    return (0);
+}

Added: vendor-sys/pf/dist/net/if_pflow.h
==============================================================================
--- /dev/null   00:00:00 1970   (empty, because file is newly added)
+++ vendor-sys/pf/dist/net/if_pflow.h   Tue Aug 18 16:13:59 2009    (r196360)
@@ -0,0 +1,120 @@
+/* $OpenBSD: if_pflow.h,v 1.4 2009/01/03 21:47:32 gollo Exp $ */
+
+/*
+ * Copyright (c) 2008 Henning Brauer
+ * Copyright (c) 2008 Joerg Goltermann
+ *
+ * Permission to use, copy, modify, and distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ * WHATSOEVER RESULTING FROM LOSS OF MIND, USE, DATA OR PROFITS, WHETHER IN
+ * AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+ * OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+#ifndef _NET_IF_PFLOW_H_
+#define _NET_IF_PFLOW_H_
+
+#define PFLOW_ID_LEN        sizeof(u_int64_t)
+
+#define PFLOW_MAXFLOWS      30
+#define PFLOW_VERSION       5
+#define PFLOW_ENGINE_TYPE   42
+#define PFLOW_ENGINE_ID     42
+#define PFLOW_MAXBYTES      0xffffffff
+#define PFLOW_TIMEOUT       30
+
+struct pflow_flow {
+    u_int32_t src_ip;
+    u_int32_t dest_ip;
+    u_int32_t nexthop_ip;
+    u_int16_t if_index_in;
+    u_int16_t if_index_out;
+    u_int32_t flow_packets;
+    u_int32_t flow_octets;
+    u_int32_t flow_start;
+    u_int32_t flow_finish;
+    u_int16_t src_port;
+    u_int16_t dest_port;
+    u_int8_t pad1;
+    u_int8_t tcp_flags;
+    u_int8_t protocol;
+    u_int8_t tos;
+    u_int16_t src_as;
+    u_int16_t dest_as;
+    u_int8_t src_mask;
+    u_int8_t dest_mask;
+    u_int16_t pad2;
+} __packed;
+
+#ifdef _KERNEL
+
+extern int pflow_ok;
+
+struct pflow_softc {
+    struct ifnet sc_if;
+    struct ifnet *sc_pflow_ifp;
+
+    unsigned int sc_count;
+    unsigned int sc_maxcount;
+    u_int64_t sc_gcounter;
+    struct ip_moptions sc_imo;
+    struct timeout sc_tmo;
+    struct in_addr sc_sender_ip;
+    u_int16_t sc_sender_port;
+    struct in_addr sc_receiver_ip;
+    u_int16_t sc_receiver_port;
+    struct mbuf *sc_mbuf;   /* current cumulative mbuf */
+    SLIST_ENTRY(pflow_softc) sc_next;
+};
+
+extern struct pflow_softc *pflowif;
+
+#endif /* _KERNEL */
+
+struct pflow_header {
+    u_int16_t version;
+    u_int16_t count;
+    u_int32_t uptime_ms;
+    u_int32_t time_sec;
+    u_int32_t time_nanosec;
+    u_int32_t flow_sequence;
+    u_int8_t engine_type;
+    u_int8_t engine_id;
+    u_int8_t reserved1;
+    u_int8_t reserved2;
+} __packed;
+
+#define PFLOW_HDRLEN    sizeof(struct pflow_header)
+
+struct pflowstats {
+    u_int64_t pflow_flows;
+    u_int64_t pflow_packets;
+    u_int64_t pflow_onomem;
+    u_int64_t pflow_oerrors;
+};
+
+/*
+ * Configuration structure for SIOCSETPFLOW SIOCGETPFLOW
+ */
+struct pflowreq {
+    struct in_addr sender_ip;
+    struct in_addr receiver_ip;
+    u_int16_t receiver_port;
+    u_int16_t addrmask;
+#define PFLOW_MASK_SRCIP    0x01
+#define PFLOW_MASK_DSTIP    0x02
+#define PFLOW_MASK_DSTPRT   0x04
+};
+
+#ifdef _KERNEL
+int export_pflow(struct pf_state *);
+int pflow_sysctl(int *, u_int, void *, size_t *, void *, size_t);
+#endif /* _KERNEL */
+
+#endif /* _NET_IF_PFLOW_H_ */

Modified: vendor-sys/pf/dist/net/if_pfsync.c
==============================================================================
--- vendor-sys/pf/dist/net/if_pfsync.c  Tue Aug 18 14:00:25 2009    (r196359)
+++ vendor-sys/pf/dist/net/if_pfsync.c  Tue Aug 18 16:13:59 2009    (r196360)
@@ -1,4 +1,4 @@
-/* $OpenBSD: if_pfsync.c,v 1.98 2008/06/29 08:42:15 mcbride Exp $ */
+/* $OpenBSD: if_pfsync.c,v 1.110 2009/02/24 05:39:19 dlg Exp $ */
 /*
  * Copyright (c) 2002 Michael Shalayeff
@@ -26,6 +26,21 @@
  * THE POSSIBILITY OF SUCH DAMAGE.
  */
+/*
+ * Copyright (c) 2009 David Gwynne
+ *
+ * Permission to use, copy, modify, and distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
 
 #include
 #include
@@ -37,16 +52,17 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
 #include
+#include
 #include
 #include
 #include
 #include
-#include
 
 #ifdef INET
 #include
@@ -70,15 +86,132 @@
 #include "bpfilter.h"
 #include "pfsync.h"
 
-#define PFSYNC_MINMTU   \
-    (sizeof(struct pfsync_header) + sizeof(struct pf_state))
+#define PFSYNC_MINPKT ( \
+    sizeof(struct ip) + \
+    sizeof(struct pfsync_header) + \
+    sizeof(struct pfsync_subheader) + \
+    sizeof(struct pfsync_eof))
 
-#ifdef PFSYNCDEBUG
-#define DPRINTF(x)      do { if (pfsyncdebug) printf x ; } while (0)
-int pfsyncdebug;
-#else
-#define DPRINTF(x)
-#endif
+struct pfsync_pkt {
+    struct ip *ip;
+    struct in_addr src;
+    u_int8_t flags;
+};
+
+int pfsync_input_hmac(struct mbuf *, int);
+
+int pfsync_upd_tcp(struct pf_state *, struct pfsync_state_peer *,
+    struct pfsync_state_peer *);
+
+int pfsync_in_clr(struct pfsync_pkt *, struct mbuf *, int, int);
+int pfsync_in_ins(struct pfsync_pkt *, struct mbuf *, int, int);
+int pfsync_in_iack(struct pfsync_pkt *, struct mbuf *, int, int);
+int pfsync_in_upd(struct pfsync_pkt *, struct mbuf *, int, int);
+int pfsync_in_upd_c(struct pfsync_pkt *, struct mbuf *, int, int);
+int pfsync_in_ureq(struct pfsync_pkt *, struct mbuf *, int, int);
+int pfsync_in_del(struct pfsync_pkt *, struct mbuf *, int, int);
+int pfsync_in_del_c(struct pfsync_pkt *, struct mbuf *, int, int);
+int pfsync_in_bus(struct pfsync_pkt *, struct mbuf *, int, int);
+int pfsync_in_tdb(struct pfsync_pkt *, struct mbuf *, int, int);
+int pfsync_in_eof(struct pfsync_pkt *, struct mbuf *, int, int);
+
+int pfsync_in_error(struct pfsync_pkt *, struct mbuf *, int, int);
+
+int (*pfsync_acts[])(struct pfsync_pkt *, struct mbuf *, int, int) = {
+    pfsync_in_clr,      /* PFSYNC_ACT_CLR */
+    pfsync_in_ins,      /* PFSYNC_ACT_INS */
+    pfsync_in_iack,     /* PFSYNC_ACT_INS_ACK */
+    pfsync_in_upd,      /* PFSYNC_ACT_UPD */
+    pfsync_in_upd_c,    /* PFSYNC_ACT_UPD_C */
+    pfsync_in_ureq,     /* PFSYNC_ACT_UPD_REQ */
+    pfsync_in_del,      /* PFSYNC_ACT_DEL */
+    pfsync_in_del_c,    /* PFSYNC_ACT_DEL_C */
+    pfsync_in_error,    /* PFSYNC_ACT_INS_F */
+    pfsync_in_error,    /* PFSYNC_ACT_DEL_F */
+    pfsync_in_bus,      /* PFSYNC_ACT_BUS */
+    pfsync_in_tdb,      /* PFSYNC_ACT_TDB */
+    pfsync_in_eof       /* PFSYNC_ACT_EOF */
+};
+
+struct pfsync_q {
+    int     (*write)(struct pf_state *, struct mbuf *, int);
+    size_t  len;
+    u_int8_t action;
+};
+
+/* we have one of these for every PFSYNC_S_ */
+int pfsync_out_state(struct pf_state *, struct mbuf *, int);
+int pfsync_out_iack(struct pf_state *, struct mbuf *, int);
+int pfsync_out_upd_c(struct pf_state *, struct mbuf *, int);
+int pfsync_out_del(struct pf_state *, struct mbuf *, int);
+
+struct pfsync_q pfsync_qs[] = {
+    { pfsync_out_state, sizeof(struct pfsync_state), PFSYNC_ACT_INS },
+    { pfsync_out_iack, sizeof(struct pfsync_ins_ack), PFSYNC_ACT_INS_ACK },
+    { pfsync_out_state, sizeof(struct pfsync_state), PFSYNC_ACT_UPD },
+    { pfsync_out_upd_c, sizeof(struct pfsync_upd_c), PFSYNC_ACT_UPD_C },
+    { pfsync_out_del, sizeof(struct pfsync_del_c), PFSYNC_ACT_DEL_C }
+};
+
+void pfsync_q_ins(struct pf_state *, int);
+void pfsync_q_del(struct pf_state *);
+
+struct pfsync_upd_req_item {
+    TAILQ_ENTRY(pfsync_upd_req_item) ur_entry;
+    struct pfsync_upd_req ur_msg;
+};
+TAILQ_HEAD(pfsync_upd_reqs, pfsync_upd_req_item);
+
+struct pfsync_deferral {
+    TAILQ_ENTRY(pfsync_deferral) pd_entry;
+    struct pf_state *pd_st;
+    struct mbuf *pd_m;
+    struct timeout pd_tmo;
+};
+TAILQ_HEAD(pfsync_deferrals, pfsync_deferral);
+
+#define PFSYNC_PLSIZE   MAX(sizeof(struct pfsync_upd_req_item), \
+    sizeof(struct pfsync_deferral))
+
+int pfsync_out_tdb(struct tdb *, struct mbuf *, int);
+
+struct pfsync_softc {
+    struct ifnet sc_if;
+    struct ifnet *sc_sync_if;
+
+    struct pool sc_pool;
+
+    struct ip_moptions sc_imo;
+
+    struct in_addr sc_sync_peer;
+    u_int8_t sc_maxupdates;
+
+    struct ip sc_template;
+
+    struct pf_state_queue sc_qs[PFSYNC_S_COUNT];
+    size_t sc_len;
+
+    struct pfsync_upd_reqs sc_upd_req_list;
+
+    struct pfsync_deferrals sc_deferrals;
+    u_int sc_deferred;
+
+    void *sc_plus;
+    size_t sc_pluslen;
+
+    u_int32_t sc_ureq_sent;
+    int sc_bulk_tries;
+    struct timeout sc_bulkfail_tmo;
+
+    u_int32_t sc_ureq_received;
+    struct pf_state *sc_bulk_next;
+    struct pf_state *sc_bulk_last;
+    struct timeout sc_bulk_tmo;
+
+    TAILQ_HEAD(, tdb) sc_tdb_q;
+
+    struct timeout sc_tmo;
+};
 
 struct pfsync_softc *pfsyncif = NULL;
 struct pfsyncstats pfsyncstats;
@@ -86,7 +219,6 @@ struct pfsyncstats pfsyncstats;
 void pfsyncattach(int);
 int pfsync_clone_create(struct if_clone *, int);
 int pfsync_clone_destroy(struct ifnet *);
-void pfsync_setmtu(struct pfsync_softc *, int);
 int pfsync_alloc_scrub_memory(struct pfsync_state_peer *,
     struct pf_state_peer *);
 void pfsync_update_net_tdb(struct pfsync_tdb *);
@@ -95,17 +227,31 @@ int pfsyncoutput(struct ifnet *, struct
 int pfsyncioctl(struct ifnet *, u_long, caddr_t);
 void pfsyncstart(struct ifnet *);
 
-struct mbuf *pfsync_get_mbuf(struct pfsync_softc *, u_int8_t, void **);
-int pfsync_request_update(struct pfsync_state_upd *, struct in_addr *);
-int pfsync_sendout(struct pfsync_softc *);
+struct mbuf *pfsync_if_dequeue(struct ifnet *);
+struct mbuf *pfsync_get_mbuf(struct pfsync_softc *);
+
+void pfsync_deferred(struct pf_state *, int);
+void pfsync_undefer(struct pfsync_deferral *, int);
+void pfsync_defer_tmo(void *);
+
+void pfsync_request_update(u_int32_t, u_int64_t);
+void pfsync_update_state_req(struct pf_state *);
+
+void pfsync_drop(struct pfsync_softc *);
+void pfsync_sendout(void);
+void pfsync_send_plus(void *, size_t);
 int pfsync_tdb_sendout(struct pfsync_softc *);
 int pfsync_sendout_mbuf(struct pfsync_softc *, struct mbuf *);
 void pfsync_timeout(void *);
 void pfsync_tdb_timeout(void *);
 void pfsync_send_bus(struct pfsync_softc *, u_int8_t);
+
+void pfsync_bulk_start(void);
+void pfsync_bulk_status(u_int8_t);
 void pfsync_bulk_update(void *);
-void pfsync_bulkfail(void *);
+void pfsync_bulk_fail(void *);
+#define PFSYNC_MAX_BULKTRIES    12
 
 int pfsync_sync_ok;
 
 struct if_clone pfsync_cloner =
@@ -119,46 +265,52 @@ pfsyncattach(int npfsync)
 
 int
 pfsync_clone_create(struct if_clone *ifc, int unit)

*** DIFF OUTPUT TRUNCATED AT 1000 LINES ***
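[Editorial note, not part of the commit: the imported if_pflow.h above defines a NetFlow-v5-compatible wire format (24-byte pflow_header, 48-byte pflow_flow records), and pflow_setmtu() computes how many records fit in one datagram. A small standalone sketch can sanity-check those sizes; the struct format strings and the 28-byte udpiphdr constant below are this note's own assumptions derived from the __packed declarations in the diff, not code from the import.]

```python
import struct

# Big-endian layouts mirroring the __packed structs in if_pflow.h above.
# Fields in declaration order: version, count, uptime_ms, time_sec,
# time_nanosec, flow_sequence, engine_type, engine_id, reserved1, reserved2.
PFLOW_HEADER_FMT = "!HHIIIIBBBB"
# src_ip, dest_ip, nexthop_ip, if_index_in, if_index_out, flow_packets,
# flow_octets, flow_start, flow_finish, src_port, dest_port, pad1,
# tcp_flags, protocol, tos, src_as, dest_as, src_mask, dest_mask, pad2.
PFLOW_FLOW_FMT = "!IIIHHIIIIHHBBBBHHBBH"

PFLOW_HDRLEN = struct.calcsize(PFLOW_HEADER_FMT)   # 24 bytes
PFLOW_FLOWLEN = struct.calcsize(PFLOW_FLOW_FMT)    # 48 bytes, as in NetFlow v5


def max_flows_per_packet(mtu, maxflows=30, udpiphdr_len=28):
    """Mirror the arithmetic in pflow_setmtu(): records per datagram is
    (mtu - header - udpiphdr) / record size, capped at PFLOW_MAXFLOWS (30).
    udpiphdr_len=28 assumes a 20-byte IPv4 header plus 8-byte UDP header."""
    return min((mtu - PFLOW_HDRLEN - udpiphdr_len) // PFLOW_FLOWLEN, maxflows)


if __name__ == "__main__":
    # At the default ETHERMTU of 1500 the cap of 30 records is exactly reached.
    print(PFLOW_HDRLEN, PFLOW_FLOWLEN, max_flows_per_packet(1500))
```

At an MTU of 1500 this yields (1500 - 24 - 28) // 48 = 30, which is why PFLOW_MAXFLOWS and the Ethernet default coincide so neatly.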