From owner-svn-src-head@freebsd.org Wed May 18 04:36:00 2016
From: Scott Long <scottl@FreeBSD.org>
Date: Wed, 18 May 2016 04:35:58 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: svn commit: r300113 - in head/sys: conf kern net sys

Author: scottl
Date: Wed May 18 04:35:58 2016
New Revision: 300113
URL: https://svnweb.freebsd.org/changeset/base/300113

Log:
  Import the 'iflib' API library for network drivers.  From the author:
  "iflib is a library to eliminate the need for frequently duplicated
  device independent logic propagated (poorly) across many network
  drivers."

  Participation is purely optional.  The IFLIB kernel config option is
  provided for drivers that want to transition between legacy and iflib
  modes of operation.  ixl and ixgbe driver conversions will be committed
  shortly.  We hope to see participation from the Broadcom and maybe
  Chelsio drivers in the near future.

  Submitted by:	mmacy@nextbsd.org
  Reviewed by:	gallatin
  Differential Revision:	D5211
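[The transition is opt-in at kernel build time: the new entry in
sys/conf/options (below) exposes IFLIB as an ordinary kernel option. A
minimal config fragment might look like this -- a sketch only; the
IFLIB_TEST ident is hypothetical and not part of this commit:

	include		GENERIC
	ident		IFLIB_TEST
	options 	IFLIB		# build converted drivers in iflib mode
]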
Added:
  head/sys/net/ifdi_if.m   (contents, props changed)
  head/sys/net/iflib.c   (contents, props changed)
  head/sys/net/iflib.h   (contents, props changed)
  head/sys/net/mp_ring.c   (contents, props changed)
  head/sys/net/mp_ring.h   (contents, props changed)
Modified:
  head/sys/conf/files
  head/sys/conf/options
  head/sys/kern/device_if.m
  head/sys/kern/kern_mbuf.c
  head/sys/kern/subr_taskqueue.c
  head/sys/net/if.c
  head/sys/net/if_var.h
  head/sys/sys/_task.h
  head/sys/sys/mbuf.h
  head/sys/sys/taskqueue.h

Modified: head/sys/conf/files
==============================================================================
--- head/sys/conf/files	Wed May 18 04:04:14 2016	(r300112)
+++ head/sys/conf/files	Wed May 18 04:35:58 2016	(r300113)
@@ -3523,6 +3523,9 @@ net/if_tun.c		optional tun
 net/if_tap.c		optional tap
 net/if_vlan.c		optional vlan
 net/if_vxlan.c		optional vxlan inet | vxlan inet6
+net/ifdi_if.m		optional ether pci
+net/iflib.c		optional ether pci
+net/mp_ring.c		optional ether
 net/mppcc.c		optional netgraph_mppc_compression
 net/mppcd.c		optional netgraph_mppc_compression
 net/netisr.c		standard

Modified: head/sys/conf/options
==============================================================================
--- head/sys/conf/options	Wed May 18 04:04:14 2016	(r300112)
+++ head/sys/conf/options	Wed May 18 04:35:58 2016	(r300113)
@@ -139,6 +139,7 @@ GEOM_VINUM	opt_geom.h
 GEOM_VIRSTOR	opt_geom.h
 GEOM_VOL	opt_geom.h
 GEOM_ZERO	opt_geom.h
+IFLIB		opt_iflib.h
 KDTRACE_HOOKS	opt_global.h
 KDTRACE_FRAME	opt_kdtrace.h
 KN_HASHSIZE	opt_kqueue.h

Modified: head/sys/kern/device_if.m
==============================================================================
--- head/sys/kern/device_if.m	Wed May 18 04:04:14 2016	(r300112)
+++ head/sys/kern/device_if.m	Wed May 18 04:35:58 2016	(r300113)
@@ -62,6 +62,11 @@ CODE {
	{
		return 0;
	}
+
+	static void * null_register(device_t dev)
+	{
+		return NULL;
+	}
 };
 
 /**
@@ -316,3 +321,24 @@ METHOD int resume {
 METHOD int quiesce {
	device_t dev;
 } DEFAULT null_quiesce;
+
+/**
+ * @brief This is called when the driver is asked to register handlers.
+ *
+ *
+ * To include this method in a device driver, use a line like this
+ * in the driver's method list:
+ *
+ * @code
+ *	KOBJMETHOD(device_register, foo_register)
+ * @endcode
+ *
+ * @param dev		the device for which handlers are being registered
+ *
+ * @retval NULL		method not implemented
+ * @retval non-NULL	a pointer to implementation specific static driver state
+ *
+ */
+METHOD void * register {
+	device_t dev;
+} DEFAULT null_register;
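[A driver opting into iflib would implement the new device_register
method and point its standard bus methods at iflib's. This is a sketch
only: the foo_* names are hypothetical, foo_sctx stands for the driver's
static shared context, and the iflib_device_* entry points are assumed
from the new iflib.h, whose diff is truncated from this mail:

	static void *
	foo_register(device_t dev)
	{
		/* Return implementation-specific static driver state. */
		return (&foo_sctx);
	}

	static device_method_t foo_methods[] = {
		DEVMETHOD(device_register, foo_register),
		DEVMETHOD(device_probe, iflib_device_probe),
		DEVMETHOD(device_attach, iflib_device_attach),
		DEVMETHOD(device_detach, iflib_device_detach),
		DEVMETHOD_END
	};
]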
Modified: head/sys/kern/kern_mbuf.c
==============================================================================
--- head/sys/kern/kern_mbuf.c	Wed May 18 04:04:14 2016	(r300112)
+++ head/sys/kern/kern_mbuf.c	Wed May 18 04:35:58 2016	(r300113)
@@ -444,7 +444,7 @@ mb_dtor_mbuf(void *mem, int size, void *
	flags = (unsigned long)arg;
	KASSERT((m->m_flags & M_NOFREE) == 0, ("%s: M_NOFREE set", __func__));
-	if ((m->m_flags & M_PKTHDR) && !SLIST_EMPTY(&m->m_pkthdr.tags))
+	if (!(flags & MB_DTOR_SKIP) && (m->m_flags & M_PKTHDR) && !SLIST_EMPTY(&m->m_pkthdr.tags))
		m_tag_delete_chain(m, NULL);
 #ifdef INVARIANTS
	trash_dtor(mem, size, arg);

Modified: head/sys/kern/subr_taskqueue.c
==============================================================================
--- head/sys/kern/subr_taskqueue.c	Wed May 18 04:04:14 2016	(r300112)
+++ head/sys/kern/subr_taskqueue.c	Wed May 18 04:35:58 2016	(r300113)
@@ -34,12 +34,14 @@ __FBSDID("$FreeBSD$");
 #include
 #include
 #include
+#include
 #include
 #include
 #include
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -62,9 +64,11 @@ struct taskqueue {
	STAILQ_HEAD(, task)	tq_queue;
	taskqueue_enqueue_fn	tq_enqueue;
	void			*tq_context;
+	char			*tq_name;
	TAILQ_HEAD(, taskqueue_busy) tq_active;
	struct mtx		tq_mutex;
	struct thread		**tq_threads;
+	struct thread		*tq_curthread;
	int			tq_tcount;
	int			tq_spin;
	int			tq_flags;
@@ -119,11 +123,17 @@ TQ_SLEEP(struct taskqueue *tq, void *p,
 }
 
 static struct taskqueue *
-_taskqueue_create(const char *name __unused, int mflags,
+_taskqueue_create(const char *name, int mflags,
		 taskqueue_enqueue_fn enqueue, void *context,
-		 int mtxflags, const char *mtxname)
+		 int mtxflags, const char *mtxname __unused)
 {
	struct taskqueue *queue;
+	char *tq_name = NULL;
+
+	if (name != NULL)
+		tq_name = strndup(name, 32, M_TASKQUEUE);
+	if (tq_name == NULL)
+		tq_name = "taskqueue";
 
	queue = malloc(sizeof(struct taskqueue), M_TASKQUEUE, mflags | M_ZERO);
	if (!queue)
@@ -133,6 +143,7 @@ _taskqueue_create(const char *name __unu
	TAILQ_INIT(&queue->tq_active);
	queue->tq_enqueue = enqueue;
	queue->tq_context = context;
+	queue->tq_name = tq_name;
	queue->tq_spin = (mtxflags & MTX_SPIN) != 0;
	queue->tq_flags |= TQ_FLAGS_ACTIVE;
	if (enqueue == taskqueue_fast_enqueue ||
@@ -140,7 +151,7 @@ _taskqueue_create(const char *name __unu
	    enqueue == taskqueue_swi_giant_enqueue ||
	    enqueue == taskqueue_thread_enqueue)
		queue->tq_flags |= TQ_FLAGS_UNLOCKED_ENQUEUE;
-	mtx_init(&queue->tq_mutex, mtxname, NULL, mtxflags);
+	mtx_init(&queue->tq_mutex, tq_name, NULL, mtxflags);
 
	return queue;
 }
@@ -149,8 +160,9 @@ struct taskqueue *
 taskqueue_create(const char *name, int mflags,
		 taskqueue_enqueue_fn enqueue, void *context)
 {
+
	return _taskqueue_create(name, mflags, enqueue, context,
-			MTX_DEF, "taskqueue");
+			MTX_DEF, name);
 }
 
 void
@@ -194,6 +206,7 @@ taskqueue_free(struct taskqueue *queue)
	KASSERT(queue->tq_callouts == 0, ("Armed timeout tasks"));
	mtx_destroy(&queue->tq_mutex);
	free(queue->tq_threads, M_TASKQUEUE);
+	free(queue->tq_name, M_TASKQUEUE);
	free(queue, M_TASKQUEUE);
 }
 
@@ -203,11 +216,12 @@ taskqueue_enqueue_locked(struct taskqueu
	struct task *ins;
	struct task *prev;
 
+	KASSERT(task->ta_func != NULL, ("enqueueing task with NULL func"));
	/*
	 * Count multiple enqueues.
	 */
	if (task->ta_pending) {
-		if (task->ta_pending < USHRT_MAX)
+		if (task->ta_pending < UCHAR_MAX)
			task->ta_pending++;
		TQ_UNLOCK(queue);
		return (0);
	}
@@ -245,6 +259,22 @@ taskqueue_enqueue_locked(struct taskqueu
 }
 
 int
+grouptaskqueue_enqueue(struct taskqueue *queue, struct task *task)
+{
+	TQ_LOCK(queue);
+	if (task->ta_pending) {
+		TQ_UNLOCK(queue);
+		return (0);
+	}
+	STAILQ_INSERT_TAIL(&queue->tq_queue, task, ta_link);
+	task->ta_pending = 1;
+	TQ_UNLOCK(queue);
+	if ((queue->tq_flags & TQ_FLAGS_BLOCKED) == 0)
+		queue->tq_enqueue(queue->tq_context);
+	return (0);
+}
+
+int
 taskqueue_enqueue(struct taskqueue *queue, struct task *task)
 {
	int res;
@@ -410,6 +440,7 @@ taskqueue_run_locked(struct taskqueue *q
	struct task *task;
	int pending;
 
+	KASSERT(queue != NULL, ("tq is NULL"));
	TQ_ASSERT_LOCKED(queue);
	tb.tb_running = NULL;
 
@@ -421,17 +452,20 @@ taskqueue_run_locked(struct taskqueue *q
		 * zero its pending count.
		 */
		task = STAILQ_FIRST(&queue->tq_queue);
+		KASSERT(task != NULL, ("task is NULL"));
		STAILQ_REMOVE_HEAD(&queue->tq_queue, ta_link);
		pending = task->ta_pending;
		task->ta_pending = 0;
		tb.tb_running = task;
		TQ_UNLOCK(queue);
 
+		KASSERT(task->ta_func != NULL, ("task->ta_func is NULL"));
		task->ta_func(task->ta_context, pending);
 
		TQ_LOCK(queue);
		tb.tb_running = NULL;
-		wakeup(task);
+		if ((task->ta_flags & TASK_SKIP_WAKEUP) == 0)
+			wakeup(task);
 
		TAILQ_REMOVE(&queue->tq_active, &tb, tb_link);
		tb_first = TAILQ_FIRST(&queue->tq_active);
@@ -446,7 +480,9 @@ taskqueue_run(struct taskqueue *queue)
 {
 
	TQ_LOCK(queue);
+	queue->tq_curthread = curthread;
	taskqueue_run_locked(queue);
+	queue->tq_curthread = NULL;
	TQ_UNLOCK(queue);
 }
 
@@ -679,7 +715,9 @@ taskqueue_thread_loop(void *arg)
	tq = *tqp;
	taskqueue_run_callback(tq, TASKQUEUE_CALLBACK_TYPE_INIT);
	TQ_LOCK(tq);
+	tq->tq_curthread = curthread;
	while ((tq->tq_flags & TQ_FLAGS_ACTIVE) != 0) {
+		/* XXX ? */
		taskqueue_run_locked(tq);
		/*
		 * Because taskqueue_run() can drop tq_mutex, we need to
@@ -691,7 +729,7 @@ taskqueue_thread_loop(void *arg)
		TQ_SLEEP(tq, tq, &tq->tq_mutex, 0, "-", 0);
	}
	taskqueue_run_locked(tq);
-
+	tq->tq_curthread = NULL;
	/*
	 * This thread is on its way out, so just drop the lock temporarily
	 * in order to call the shutdown callback.  This allows the callback
@@ -715,8 +753,8 @@ taskqueue_thread_enqueue(void *context)
 
	tqp = context;
	tq = *tqp;
-
-	wakeup_one(tq);
+	if (tq->tq_curthread != curthread)
+		wakeup_one(tq);
 }
 
 TASKQUEUE_DEFINE(swi, taskqueue_swi_enqueue, NULL,
@@ -772,3 +810,334 @@ taskqueue_member(struct taskqueue *queue
	}
	return (ret);
 }
+
+struct taskqgroup_cpu {
+	LIST_HEAD(, grouptask)	tgc_tasks;
+	struct taskqueue	*tgc_taskq;
+	int			tgc_cnt;
+	int			tgc_cpu;
+};
+
+struct taskqgroup {
+	struct taskqgroup_cpu	tqg_queue[MAXCPU];
+	struct mtx		tqg_lock;
+	char *			tqg_name;
+	int			tqg_adjusting;
+	int			tqg_stride;
+	int			tqg_cnt;
+};
+
+struct taskq_bind_task {
+	struct task	bt_task;
+	int		bt_cpuid;
+};
+
+static void
+taskqgroup_cpu_create(struct taskqgroup *qgroup, int idx)
+{
+	struct taskqgroup_cpu *qcpu;
+
+	qcpu = &qgroup->tqg_queue[idx];
+	LIST_INIT(&qcpu->tgc_tasks);
+	qcpu->tgc_taskq = taskqueue_create_fast(NULL, M_WAITOK,
+	    taskqueue_thread_enqueue, &qcpu->tgc_taskq);
+	taskqueue_start_threads(&qcpu->tgc_taskq, 1, PI_SOFT,
+	    "%s_%d", qgroup->tqg_name, idx);
+	qcpu->tgc_cpu = idx * qgroup->tqg_stride;
+}
+
+static void
+taskqgroup_cpu_remove(struct taskqgroup *qgroup, int idx)
+{
+
+	taskqueue_free(qgroup->tqg_queue[idx].tgc_taskq);
+}
+
+/*
+ * Find the taskq with least # of tasks that doesn't currently have any
+ * other queues from the uniq identifier.
+ */
+static int
+taskqgroup_find(struct taskqgroup *qgroup, void *uniq)
+{
+	struct grouptask *n;
+	int i, idx, mincnt;
+	int strict;
+
+	mtx_assert(&qgroup->tqg_lock, MA_OWNED);
+	if (qgroup->tqg_cnt == 0)
+		return (0);
+	idx = -1;
+	mincnt = INT_MAX;
+	/*
+	 * Two passes;  First scan for a queue with the least tasks that
+	 * does not already service this uniq id.  If that fails simply find
+	 * the queue with the least total tasks;
+	 */
+	for (strict = 1; mincnt == INT_MAX; strict = 0) {
+		for (i = 0; i < qgroup->tqg_cnt; i++) {
+			if (qgroup->tqg_queue[i].tgc_cnt > mincnt)
+				continue;
+			if (strict) {
+				LIST_FOREACH(n,
+				    &qgroup->tqg_queue[i].tgc_tasks, gt_list)
+					if (n->gt_uniq == uniq)
+						break;
+				if (n != NULL)
+					continue;
+			}
+			mincnt = qgroup->tqg_queue[i].tgc_cnt;
+			idx = i;
+		}
+	}
+	if (idx == -1)
+		panic("taskqgroup_find: Failed to pick a qid.");
+
+	return (idx);
+}
+
+void
+taskqgroup_attach(struct taskqgroup *qgroup, struct grouptask *gtask,
+    void *uniq, int irq, char *name)
+{
+	cpuset_t mask;
+	int qid;
+
+	gtask->gt_uniq = uniq;
+	gtask->gt_name = name;
+	gtask->gt_irq = irq;
+	gtask->gt_cpu = -1;
+	mtx_lock(&qgroup->tqg_lock);
+	qid = taskqgroup_find(qgroup, uniq);
+	qgroup->tqg_queue[qid].tgc_cnt++;
+	LIST_INSERT_HEAD(&qgroup->tqg_queue[qid].tgc_tasks, gtask, gt_list);
+	gtask->gt_taskqueue = qgroup->tqg_queue[qid].tgc_taskq;
+	if (irq != -1 && smp_started) {
+		CPU_ZERO(&mask);
+		CPU_SET(qgroup->tqg_queue[qid].tgc_cpu, &mask);
+		mtx_unlock(&qgroup->tqg_lock);
+		intr_setaffinity(irq, &mask);
+	} else
+		mtx_unlock(&qgroup->tqg_lock);
+}
+
+int
+taskqgroup_attach_cpu(struct taskqgroup *qgroup, struct grouptask *gtask,
+    void *uniq, int cpu, int irq, char *name)
+{
+	cpuset_t mask;
+	int i, qid;
+
+	qid = -1;
+	gtask->gt_uniq = uniq;
+	gtask->gt_name = name;
+	gtask->gt_irq = irq;
+	gtask->gt_cpu = cpu;
+	mtx_lock(&qgroup->tqg_lock);
+	if (smp_started) {
+		for (i = 0; i < qgroup->tqg_cnt; i++)
+			if (qgroup->tqg_queue[i].tgc_cpu == cpu) {
+				qid = i;
+				break;
+			}
+		if (qid == -1) {
+			mtx_unlock(&qgroup->tqg_lock);
+			return (EINVAL);
+		}
+	} else
+		qid = 0;
+	qgroup->tqg_queue[qid].tgc_cnt++;
+	LIST_INSERT_HEAD(&qgroup->tqg_queue[qid].tgc_tasks,
+	    gtask, gt_list);
+	gtask->gt_taskqueue = qgroup->tqg_queue[qid].tgc_taskq;
+	if (irq != -1 && smp_started) {
+		CPU_ZERO(&mask);
+		CPU_SET(qgroup->tqg_queue[qid].tgc_cpu, &mask);
+		mtx_unlock(&qgroup->tqg_lock);
+		intr_setaffinity(irq, &mask);
+	} else
+		mtx_unlock(&qgroup->tqg_lock);
+	return (0);
+}
+
+void
+taskqgroup_detach(struct taskqgroup *qgroup, struct grouptask *gtask)
+{
+	int i;
+
+	mtx_lock(&qgroup->tqg_lock);
+	for (i = 0; i < qgroup->tqg_cnt; i++)
+		if (qgroup->tqg_queue[i].tgc_taskq == gtask->gt_taskqueue)
+			break;
+	if (i == qgroup->tqg_cnt)
+		panic("taskqgroup_detach: task not in group\n");
+	qgroup->tqg_queue[i].tgc_cnt--;
+	LIST_REMOVE(gtask, gt_list);
+	mtx_unlock(&qgroup->tqg_lock);
+	gtask->gt_taskqueue = NULL;
+}
+
+static void
+taskqgroup_binder(void *ctx, int pending)
+{
+	struct taskq_bind_task *task = (struct taskq_bind_task *)ctx;
+	cpuset_t mask;
+	int error;
+
+	CPU_ZERO(&mask);
+	CPU_SET(task->bt_cpuid, &mask);
+	error = cpuset_setthread(curthread->td_tid, &mask);
+	thread_lock(curthread);
+	sched_bind(curthread, task->bt_cpuid);
+	thread_unlock(curthread);
+
+	if (error)
+		printf("taskqgroup_binder: setaffinity failed: %d\n",
+		    error);
+	free(task, M_DEVBUF);
+}
+
+static void
+taskqgroup_bind(struct taskqgroup *qgroup)
+{
+	struct taskq_bind_task *task;
+	int i;
+
+	/*
+	 * Bind taskqueue threads to specific CPUs, if they have been assigned
+	 * one.
+	 */
+	for (i = 0; i < qgroup->tqg_cnt; i++) {
+		task = malloc(sizeof (*task), M_DEVBUF, M_NOWAIT);
+		TASK_INIT(&task->bt_task, 0, taskqgroup_binder, task);
+		task->bt_cpuid = qgroup->tqg_queue[i].tgc_cpu;
+		taskqueue_enqueue(qgroup->tqg_queue[i].tgc_taskq,
+		    &task->bt_task);
+	}
+}
+
+static int
+_taskqgroup_adjust(struct taskqgroup *qgroup, int cnt, int stride)
+{
+	LIST_HEAD(, grouptask) gtask_head = LIST_HEAD_INITIALIZER(NULL);
+	cpuset_t mask;
+	struct grouptask *gtask;
+	int i, old_cnt, qid;
+
+	mtx_assert(&qgroup->tqg_lock, MA_OWNED);
+
+	if (cnt < 1 || cnt * stride > mp_ncpus || !smp_started) {
+		printf("taskqgroup_adjust failed cnt: %d stride: %d mp_ncpus: %d smp_started: %d\n",
+		    cnt, stride, mp_ncpus, smp_started);
+		return (EINVAL);
+	}
+	if (qgroup->tqg_adjusting) {
+		printf("taskqgroup_adjust failed: adjusting\n");
+		return (EBUSY);
+	}
+	qgroup->tqg_adjusting = 1;
+	old_cnt = qgroup->tqg_cnt;
+	mtx_unlock(&qgroup->tqg_lock);
+	/*
+	 * Set up queue for tasks added before boot.
+	 */
+	if (old_cnt == 0) {
+		LIST_SWAP(&gtask_head, &qgroup->tqg_queue[0].tgc_tasks,
+		    grouptask, gt_list);
+		qgroup->tqg_queue[0].tgc_cnt = 0;
+	}
+
+	/*
+	 * If new taskq threads have been added.
+	 */
+	for (i = old_cnt; i < cnt; i++)
+		taskqgroup_cpu_create(qgroup, i);
+	mtx_lock(&qgroup->tqg_lock);
+	qgroup->tqg_cnt = cnt;
+	qgroup->tqg_stride = stride;
+
+	/*
+	 * Adjust drivers to use new taskqs.
+	 */
+	for (i = 0; i < old_cnt; i++) {
+		while ((gtask = LIST_FIRST(&qgroup->tqg_queue[i].tgc_tasks))) {
+			LIST_REMOVE(gtask, gt_list);
+			qgroup->tqg_queue[i].tgc_cnt--;
+			LIST_INSERT_HEAD(&gtask_head, gtask, gt_list);
+		}
+	}
+
+	while ((gtask = LIST_FIRST(&gtask_head))) {
+		LIST_REMOVE(gtask, gt_list);
+		if (gtask->gt_cpu == -1)
+			qid = taskqgroup_find(qgroup, gtask->gt_uniq);
+		else {
+			for (i = 0; i < qgroup->tqg_cnt; i++)
+				if (qgroup->tqg_queue[i].tgc_cpu == gtask->gt_cpu) {
+					qid = i;
+					break;
+				}
+		}
+		qgroup->tqg_queue[qid].tgc_cnt++;
+		LIST_INSERT_HEAD(&qgroup->tqg_queue[qid].tgc_tasks, gtask,
+		    gt_list);
+		gtask->gt_taskqueue = qgroup->tqg_queue[qid].tgc_taskq;
+	}
+	/*
+	 * Set new CPU and IRQ affinity
+	 */
+	for (i = 0; i < cnt; i++) {
+		qgroup->tqg_queue[i].tgc_cpu = i * qgroup->tqg_stride;
+		CPU_ZERO(&mask);
+		CPU_SET(qgroup->tqg_queue[i].tgc_cpu, &mask);
+		LIST_FOREACH(gtask, &qgroup->tqg_queue[i].tgc_tasks, gt_list) {
+			if (gtask->gt_irq == -1)
+				continue;
+			intr_setaffinity(gtask->gt_irq, &mask);
+		}
+	}
+	mtx_unlock(&qgroup->tqg_lock);
+
+	/*
+	 * If taskq thread count has been reduced.
+	 */
+	for (i = cnt; i < old_cnt; i++)
+		taskqgroup_cpu_remove(qgroup, i);
+
+	mtx_lock(&qgroup->tqg_lock);
+	qgroup->tqg_adjusting = 0;
+
+	taskqgroup_bind(qgroup);
+
+	return (0);
+}
+
+int
+taskqgroup_adjust(struct taskqgroup *qgroup, int cpu, int stride)
+{
+	int error;
+
+	mtx_lock(&qgroup->tqg_lock);
+	error = _taskqgroup_adjust(qgroup, cpu, stride);
+	mtx_unlock(&qgroup->tqg_lock);
+
+	return (error);
+}
+
+struct taskqgroup *
+taskqgroup_create(char *name)
+{
+	struct taskqgroup *qgroup;
+
+	qgroup = malloc(sizeof(*qgroup), M_TASKQUEUE, M_WAITOK | M_ZERO);
+	mtx_init(&qgroup->tqg_lock, "taskqgroup", NULL, MTX_DEF);
+	qgroup->tqg_name = name;
+	LIST_INIT(&qgroup->tqg_queue[0].tgc_tasks);
+
+	return (qgroup);
+}
+
+void
+taskqgroup_destroy(struct taskqgroup *qgroup)
+{
+
+}
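[The taskqgroup API above gives a driver a named group of per-CPU
taskqueues, with IRQ affinity managed by the group. A sketch of the
intended call pattern, using only the signatures visible in this diff;
the GROUPTASK_INIT initializer and the task_fn_t handler signature are
assumed from the sys/taskqueue.h and sys/_task.h changes, which this
mail lists but does not show, and the foo_* names are hypothetical:

	static struct taskqgroup *foo_tqg;
	static struct grouptask foo_gt;

	static void
	foo_rxq_task(void *arg, int pending)
	{
		/* Per-queue work; runs on the CPU the group placed it on. */
	}

	static void
	foo_setup_io_task(void *sc, int irq)
	{
		foo_tqg = taskqgroup_create("foo_io");
		GROUPTASK_INIT(&foo_gt, 0, foo_rxq_task, sc);
		/* Attach binds the IRQ's affinity to the chosen queue's CPU. */
		taskqgroup_attach(foo_tqg, &foo_gt, sc, irq, "foo rxq0");
		/* One queue per CPU, unit stride. */
		taskqgroup_adjust(foo_tqg, mp_ncpus, 1);
	}
]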
Modified: head/sys/net/if.c
==============================================================================
--- head/sys/net/if.c	Wed May 18 04:04:14 2016	(r300112)
+++ head/sys/net/if.c	Wed May 18 04:35:58 2016	(r300113)
@@ -3900,6 +3900,19 @@ if_multiaddr_count(if_t ifp, int max)
	return (count);
 }
 
+int
+if_multi_apply(struct ifnet *ifp, int (*filter)(void *, struct ifmultiaddr *, int), void *arg)
+{
+	struct ifmultiaddr *ifma;
+	int cnt = 0;
+
+	if_maddr_rlock(ifp);
+	TAILQ_FOREACH(ifma, &ifp->if_multiaddrs, ifma_link)
+		cnt += filter(arg, ifma, cnt);
+	if_maddr_runlock(ifp);
+	return (cnt);
+}
+
 struct mbuf *
 if_dequeue(if_t ifp)
 {

Modified: head/sys/net/if_var.h
==============================================================================
--- head/sys/net/if_var.h	Wed May 18 04:04:14 2016	(r300112)
+++ head/sys/net/if_var.h	Wed May 18 04:35:58 2016	(r300113)
@@ -628,6 +628,7 @@ int if_setupmultiaddr(if_t ifp, void *mt
 int if_multiaddr_array(if_t ifp, void *mta, int *cnt, int max);
 int if_multiaddr_count(if_t ifp, int max);
+int if_multi_apply(struct ifnet *ifp, int (*filter)(void *, struct ifmultiaddr *, int), void *arg);
 int if_getamcount(if_t ifp);
 struct ifaddr * if_getifaddr(if_t ifp);
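[if_multi_apply() walks the multicast list under the maddr lock and
accumulates the filter's return values, passing the running count back
in, so a driver can program and tally multicast addresses in one pass.
A minimal sketch; foo_softc, foo_hash_install, and FOO_MAX_MULTI are
hypothetical:

	static int
	foo_mc_filter(void *arg, struct ifmultiaddr *ifma, int cnt)
	{
		struct foo_softc *sc = arg;

		if (ifma->ifma_addr->sa_family != AF_LINK)
			return (0);	/* contributes nothing to the count */
		if (cnt >= FOO_MAX_MULTI)
			return (0);	/* filter table full; skip */
		foo_hash_install(sc,
		    LLADDR((struct sockaddr_dl *)ifma->ifma_addr));
		return (1);		/* one more address programmed */
	}

	/* usage: mcnt = if_multi_apply(ifp, foo_mc_filter, sc); */
]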
Added: head/sys/net/ifdi_if.m
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ head/sys/net/ifdi_if.m	Wed May 18 04:35:58 2016	(r300113)
@@ -0,0 +1,334 @@
+#
+# Copyright (c) 2014, Matthew Macy (kmacy@freebsd.org)
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are met:
+#
+#  1. Redistributions of source code must retain the above copyright notice,
+#     this list of conditions and the following disclaimer.
+#
+#  2. Neither the name of Matthew Macy nor the names of its
+#     contributors may be used to endorse or promote products derived from
+#     this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+# ARE DISCLAIMED.  IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+# POSSIBILITY OF SUCH DAMAGE.
+#
+# $FreeBSD$
+#
+
+#include
+#include
+#include
+
+#include
+#include
+
+#include
+#include
+#include
+#include
+#include
+
+INTERFACE ifdi;
+
+CODE {
+
+	static void
+	null_void_op(if_ctx_t _ctx __unused)
+	{
+	}
+
+	static void
+	null_timer_op(if_ctx_t _ctx __unused, uint16_t _qsidx __unused)
+	{
+	}
+
+	static int
+	null_int_op(if_ctx_t _ctx __unused)
+	{
+		return (0);
+	}
+
+	static void
+	null_queue_intr_enable(if_ctx_t _ctx __unused, uint16_t _qid __unused)
+	{
+	}
+
+	static void
+	null_led_func(if_ctx_t _ctx __unused, int _onoff __unused)
+	{
+	}
+
+	static void
+	null_vlan_register_op(if_ctx_t _ctx __unused, uint16_t vtag __unused)
+	{
+	}
+
+	static int
+	null_q_setup(if_ctx_t _ctx __unused, uint32_t _qid __unused)
+	{
+		return (0);
+	}
+
+	static int
+	null_i2c_req(if_ctx_t _sctx __unused, struct ifi2creq *_i2c __unused)
+	{
+		return (ENOTSUP);
+	}
+
+	static int
+	null_sysctl_int_delay(if_ctx_t _sctx __unused, if_int_delay_info_t _iidi __unused)
+	{
+		return (0);
+	}
+
+	static int
+	null_iov_init(if_ctx_t _ctx __unused, uint16_t num_vfs __unused, const nvlist_t *params __unused)
+	{
+		return (ENOTSUP);
+	}
+
+	static int
+	null_vf_add(if_ctx_t _ctx __unused, uint16_t num_vfs __unused, const nvlist_t *params __unused)
+	{
+		return (ENOTSUP);
+	}
+
+	static int
+	null_priv_ioctl(if_ctx_t _ctx __unused, u_long command, caddr_t *data __unused)
+	{
+		return (ENOTSUP);
+	}
+};
+
+#
+# bus interfaces
+#
+
+METHOD int attach_pre {
+	if_ctx_t _ctx;
+};
+
+METHOD int attach_post {
+	if_ctx_t _ctx;
+};
+
+METHOD int detach {
+	if_ctx_t _ctx;
+};
+
+METHOD int suspend {
+	if_ctx_t _ctx;
+} DEFAULT null_int_op;
+
+METHOD int shutdown {
+	if_ctx_t _ctx;
+} DEFAULT null_int_op;
+
+METHOD int resume {
+	if_ctx_t _ctx;
+} DEFAULT null_int_op;
+
+#
+# downcall to driver to allocate its
+# own queue state and tie it to the parent
+#
+
+METHOD int tx_queues_alloc {
+	if_ctx_t _ctx;
+	caddr_t *_vaddrs;
+	uint64_t *_paddrs;
+	int ntxqs;
+	int ntxqsets;
+};
+
+METHOD int rx_queues_alloc {
+	if_ctx_t _ctx;
+	caddr_t *_vaddrs;
+	uint64_t *_paddrs;
+	int nrxqs;
+	int nrxqsets;
+};
+
+METHOD void queues_free {
+	if_ctx_t _ctx;
+};
+
+#
+# interface reset / stop
+#
+
+METHOD void init {
+	if_ctx_t _ctx;
+};
+
+METHOD void stop {
+	if_ctx_t _ctx;
+};
+
+#
+# interrupt setup and manipulation
+#
+
+METHOD int msix_intr_assign {
+	if_ctx_t _sctx;
+	int msix;
+};
+
+METHOD void intr_enable {
+	if_ctx_t _ctx;
+};
+
+METHOD void intr_disable {
+	if_ctx_t _ctx;
+};
+
+METHOD void queue_intr_enable {
+	if_ctx_t _ctx;
+	uint16_t _qid;
+} DEFAULT null_queue_intr_enable;
+
+METHOD void link_intr_enable {
+	if_ctx_t _ctx;
+} DEFAULT null_void_op;
+
+#
+# interface configuration
+#
+
+METHOD void multi_set {
+	if_ctx_t _ctx;
+};
+
+METHOD int mtu_set {
+	if_ctx_t _ctx;
+	uint32_t _mtu;
+};
+
+METHOD void media_set{
+	if_ctx_t _ctx;
+} DEFAULT null_void_op;
+
+METHOD int promisc_set {
+	if_ctx_t _ctx;
+	int _flags;
+};
+
+METHOD void crcstrip_set {
+	if_ctx_t _ctx;
+	int _onoff;
+};
+
+#
+# IOV handling
+#
+
+METHOD void vflr_handle {
+	if_ctx_t _ctx;
+} DEFAULT null_void_op;
+
+METHOD int iov_init {
+	if_ctx_t _ctx;
+	uint16_t num_vfs;
+	const nvlist_t * params;
+} DEFAULT null_iov_init;
+
+METHOD void iov_uninit {
+	if_ctx_t _ctx;
+} DEFAULT null_void_op;
+
+METHOD int iov_vf_add {
+	if_ctx_t _ctx;
+	uint16_t num_vfs;
+	const nvlist_t * params;
+} DEFAULT null_vf_add;
+
+
+#
+# Device status
+#
+
+METHOD void update_admin_status {
+	if_ctx_t _ctx;
+};
+
+METHOD void media_status {
+	if_ctx_t _ctx;
+	struct ifmediareq *_ifm;
+};
+
+METHOD int media_change {
+	if_ctx_t _ctx;
+};
+
+METHOD uint64_t get_counter {
+	if_ctx_t _ctx;
+	ift_counter cnt;
+};
+
+METHOD int priv_ioctl {
+	if_ctx_t _ctx;
+	u_long _cmd;
+	caddr_t _data;
+} DEFAULT null_priv_ioctl;
+
+#
+# optional methods
+#
+
+METHOD int i2c_req {
+	if_ctx_t _ctx;
+	struct ifi2creq *_req;
+} DEFAULT null_i2c_req;
+
+METHOD int txq_setup {
+	if_ctx_t _ctx;
+	uint32_t _txqid;
+} DEFAULT null_q_setup;
+
+METHOD int rxq_setup {
+	if_ctx_t _ctx;
+	uint32_t _txqid;
+} DEFAULT null_q_setup;
+
+METHOD void timer {
+	if_ctx_t _ctx;
+	uint16_t _txqid;
+} DEFAULT null_timer_op;
+
+METHOD void watchdog_reset {
+	if_ctx_t _ctx;
+} DEFAULT null_void_op;
+
+METHOD void led_func {
+	if_ctx_t _ctx;
+	int _onoff;
+} DEFAULT null_led_func;
+
+METHOD void vlan_register {
+	if_ctx_t _ctx;
+	uint16_t _vtag;
+} DEFAULT null_vlan_register_op;
+
+METHOD void vlan_unregister {
+	if_ctx_t _ctx;
+	uint16_t _vtag;
+} DEFAULT null_vlan_register_op;
+
+METHOD int sysctl_int_delay {
+	if_ctx_t _sctx;
+	if_int_delay_info_t _iidi;
+} DEFAULT null_sysctl_int_delay;
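[A converted driver supplies these ifdi methods through a kobj method
table, with any unimplemented optional method falling back to the null_*
defaults above. A skeleton sketch of how the forthcoming conversions
might hook in; the foo_* names are hypothetical, and the ifdi_* method
names follow what the .m code generation would produce:

	static device_method_t foo_ifdi_methods[] = {
		DEVMETHOD(ifdi_attach_pre, foo_if_attach_pre),
		DEVMETHOD(ifdi_attach_post, foo_if_attach_post),
		DEVMETHOD(ifdi_detach, foo_if_detach),
		DEVMETHOD(ifdi_init, foo_if_init),
		DEVMETHOD(ifdi_stop, foo_if_stop),
		DEVMETHOD(ifdi_tx_queues_alloc, foo_if_tx_queues_alloc),
		DEVMETHOD(ifdi_rx_queues_alloc, foo_if_rx_queues_alloc),
		DEVMETHOD(ifdi_queues_free, foo_if_queues_free),
		DEVMETHOD(ifdi_multi_set, foo_if_multi_set),
		DEVMETHOD(ifdi_mtu_set, foo_if_mtu_set),
		DEVMETHOD(ifdi_media_status, foo_if_media_status),
		DEVMETHOD(ifdi_media_change, foo_if_media_change),
		/* everything else falls back to the null_* defaults */
		DEVMETHOD_END
	};

	static driver_t foo_ifdi_driver = {
		"foo", foo_ifdi_methods, sizeof(struct foo_softc)
	};
]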
Added: head/sys/net/iflib.c
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ head/sys/net/iflib.c	Wed May 18 04:35:58 2016	(r300113)
@@ -0,0 +1,4786 @@
+/*-
+ * Copyright (c) 2014-2016, Matthew Macy
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ *  1. Redistributions of source code must retain the above copyright notice,
+ *     this list of conditions and the following disclaimer.
+ *
+ *  2. Neither the name of Matthew Macy nor the names of its
+ *     contributors may be used to endorse or promote products derived from
+ *     this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED.  IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include
+__FBSDID("$FreeBSD$");

*** DIFF OUTPUT TRUNCATED AT 1000 LINES ***