From owner-svn-src-all@freebsd.org Sun Jan 27 14:36:54 2019
Message-Id: <201901271436.x0REaqtv042854@repo.freebsd.org>
From: Marius Strobl <marius@FreeBSD.org>
Date: Sun, 27 Jan 2019 14:36:52 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org,
	svn-src-stable@freebsd.org, svn-src-stable-11@freebsd.org
Subject: svn commit: r343494 - in stable/11/sys/contrib/ck: include
	include/gcc/ppc include/gcc/sparcv9 include/spinlock src

Author: marius
Date: Sun Jan 27 14:36:52 2019
New Revision: 343494
URL: https://svnweb.freebsd.org/changeset/base/343494

Log:
  MFC: r333745, r333764, r337533, r339375, r341041

  - ck: add support for executing callbacks outside of the main poll loop.
    Pull in change from upstream deca119d14bfffd440770eb67cbdbeaf7b57eb7b.
  - Import CK as of commit deca119d14bfffd440770eb67cbdbeaf7b57eb7b.
    This is mostly a noop, for mergeinfo purposes, because the relevant
    changes were committed directly.
  - Import CK as of commit 08813496570879fbcc2adcdd9ddc0a054361bfde, mostly
    to avoid using lwsync on ppc32.
  - Import CK as of commit 5221ae2f3722a78c7fc41e47069ad94983d3bccb.
    This fixes two problems: one where epoch calls could occur before all
    the readers had exited the epoch section, and one where the epoch
    calls could be unnecessarily delayed.
  - Import CK as of commit 21d3e319407d19dece16ee317c757ffc54a452bc, which
    makes its sparcv9 atomics compatible with the FreeBSD kernel by using
    instructions that access the appropriate address space.
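For context on the first log item, here is a minimal sketch of the
conventional ck_epoch lifecycle that ck_epoch_poll_deferred() extends:
readers bracket accesses with an epoch section, writers retire objects
with ck_epoch_call(), and a poller reclaims them once a grace period has
passed. The three-argument ck_epoch_register() matches the ck_epoch.h
hunk below; struct node, node_container, and the helper functions are
illustrative names, not part of CK:

#include <ck_epoch.h>
#include <stdlib.h>

static ck_epoch_t epoch;
static ck_epoch_record_t record;	/* one record per thread in practice */

struct node {
	int value;
	ck_epoch_entry_t entry;
};

/* Map a ck_epoch_entry back to its enclosing node. */
CK_EPOCH_CONTAINER(struct node, entry, node_container)

static void
node_destroy(ck_epoch_entry_t *e)
{

	free(node_container(e));
}

static void
setup(void)
{

	ck_epoch_init(&epoch);
	ck_epoch_register(&epoch, &record, NULL);
}

static int
reader(struct node *shared)
{
	int v;

	ck_epoch_begin(&record, NULL);
	v = shared->value;	/* safe: node cannot be freed inside the section */
	ck_epoch_end(&record, NULL);
	return v;
}

static void
writer_retire(struct node *n)
{

	/* Defer destruction until no reader can still hold a reference. */
	ck_epoch_call(&record, &n->entry, node_destroy);
	(void)ck_epoch_poll(&record);	/* non-blocking reclamation attempt */
}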
Modified:
  stable/11/sys/contrib/ck/include/ck_epoch.h
  stable/11/sys/contrib/ck/include/gcc/ppc/ck_pr.h
  stable/11/sys/contrib/ck/include/gcc/sparcv9/ck_pr.h
  stable/11/sys/contrib/ck/include/spinlock/hclh.h
  stable/11/sys/contrib/ck/src/ck_barrier_combining.c
  stable/11/sys/contrib/ck/src/ck_epoch.c

Directory Properties:
  stable/11/   (props changed)

Modified: stable/11/sys/contrib/ck/include/ck_epoch.h
==============================================================================
--- stable/11/sys/contrib/ck/include/ck_epoch.h	Sun Jan 27 14:27:53 2019	(r343493)
+++ stable/11/sys/contrib/ck/include/ck_epoch.h	Sun Jan 27 14:36:52 2019	(r343494)
@@ -266,6 +266,7 @@ void ck_epoch_register(ck_epoch_t *, ck_epoch_record_t
 void ck_epoch_unregister(ck_epoch_record_t *);
 bool ck_epoch_poll(ck_epoch_record_t *);
+bool ck_epoch_poll_deferred(struct ck_epoch_record *record, ck_stack_t *deferred);
 void ck_epoch_synchronize(ck_epoch_record_t *);
 void ck_epoch_synchronize_wait(ck_epoch_t *, ck_epoch_wait_cb_t *, void *);
 void ck_epoch_barrier(ck_epoch_record_t *);

Modified: stable/11/sys/contrib/ck/include/gcc/ppc/ck_pr.h
==============================================================================
--- stable/11/sys/contrib/ck/include/gcc/ppc/ck_pr.h	Sun Jan 27 14:27:53 2019	(r343493)
+++ stable/11/sys/contrib/ck/include/gcc/ppc/ck_pr.h	Sun Jan 27 14:36:52 2019	(r343494)
@@ -67,21 +67,29 @@ ck_pr_stall(void)
 	__asm__ __volatile__(I ::: "memory");	\
 }

-CK_PR_FENCE(atomic, "lwsync")
-CK_PR_FENCE(atomic_store, "lwsync")
+#ifdef CK_MD_PPC32_LWSYNC
+#define CK_PR_LWSYNCOP "lwsync"
+#else /* CK_MD_PPC32_LWSYNC_DISABLE */
+#define CK_PR_LWSYNCOP "sync"
+#endif
+
+CK_PR_FENCE(atomic, CK_PR_LWSYNCOP)
+CK_PR_FENCE(atomic_store, CK_PR_LWSYNCOP)
 CK_PR_FENCE(atomic_load, "sync")
-CK_PR_FENCE(store_atomic, "lwsync")
-CK_PR_FENCE(load_atomic, "lwsync")
-CK_PR_FENCE(store, "lwsync")
+CK_PR_FENCE(store_atomic, CK_PR_LWSYNCOP)
+CK_PR_FENCE(load_atomic, CK_PR_LWSYNCOP)
+CK_PR_FENCE(store, CK_PR_LWSYNCOP)
 CK_PR_FENCE(store_load, "sync")
-CK_PR_FENCE(load, "lwsync")
-CK_PR_FENCE(load_store, "lwsync")
+CK_PR_FENCE(load, CK_PR_LWSYNCOP)
+CK_PR_FENCE(load_store, CK_PR_LWSYNCOP)
 CK_PR_FENCE(memory, "sync")
-CK_PR_FENCE(acquire, "lwsync")
-CK_PR_FENCE(release, "lwsync")
-CK_PR_FENCE(acqrel, "lwsync")
-CK_PR_FENCE(lock, "lwsync")
-CK_PR_FENCE(unlock, "lwsync")
+CK_PR_FENCE(acquire, CK_PR_LWSYNCOP)
+CK_PR_FENCE(release, CK_PR_LWSYNCOP)
+CK_PR_FENCE(acqrel, CK_PR_LWSYNCOP)
+CK_PR_FENCE(lock, CK_PR_LWSYNCOP)
+CK_PR_FENCE(unlock, CK_PR_LWSYNCOP)
+
+#undef CK_PR_LWSYNCOP

 #undef CK_PR_FENCE
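For readers unfamiliar with the CK_PR_FENCE machinery being retargeted
above, a stand-alone reduction follows. The macro body is taken from the
hunk's context lines; the two sample instantiations are illustrative, and
the result compiles only for PowerPC targets:

/*
 * CK_MD_PPC32_LWSYNC is the configure-time knob shown in the diff:
 * lwsync is a lighter barrier that some ppc32 parts mishandle, so the
 * conservative default is a full sync.
 */
#ifdef CK_MD_PPC32_LWSYNC
#define CK_PR_LWSYNCOP "lwsync"
#else
#define CK_PR_LWSYNCOP "sync"
#endif

/* Each expansion emits one inline function wrapping one fence instruction. */
#define CK_PR_FENCE(T, I)				\
	static inline void				\
	ck_pr_fence_strict_##T(void)			\
	{						\
		__asm__ __volatile__(I ::: "memory");	\
	}

CK_PR_FENCE(release, CK_PR_LWSYNCOP)	/* emits lwsync or sync */
CK_PR_FENCE(store_load, "sync")		/* store-load always needs full sync */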
Modified: stable/11/sys/contrib/ck/include/gcc/sparcv9/ck_pr.h
==============================================================================
--- stable/11/sys/contrib/ck/include/gcc/sparcv9/ck_pr.h	Sun Jan 27 14:27:53 2019	(r343493)
+++ stable/11/sys/contrib/ck/include/gcc/sparcv9/ck_pr.h	Sun Jan 27 14:36:52 2019	(r343494)
@@ -136,11 +136,26 @@ CK_PR_STORE_S(int, int, "stsw")
 #undef CK_PR_STORE_S
 #undef CK_PR_STORE

+/* Use the appropriate address space for atomics within the FreeBSD kernel. */
+#if defined(__FreeBSD__) && defined(_KERNEL)
+#include <sys/cdefs.h>
+#include <machine/atomic.h>
+#define CK_PR_INS_CAS "casa"
+#define CK_PR_INS_CASX "casxa"
+#define CK_PR_INS_SWAP "swapa"
+#define CK_PR_ASI_ATOMIC __XSTRING(__ASI_ATOMIC)
+#else
+#define CK_PR_INS_CAS "cas"
+#define CK_PR_INS_CASX "casx"
+#define CK_PR_INS_SWAP "swap"
+#define CK_PR_ASI_ATOMIC ""
+#endif
+
 CK_CC_INLINE static bool
 ck_pr_cas_64_value(uint64_t *target, uint64_t compare, uint64_t set, uint64_t *value)
 {

-	__asm__ __volatile__("casx [%1], %2, %0"
+	__asm__ __volatile__(CK_PR_INS_CASX " [%1] " CK_PR_ASI_ATOMIC ", %2, %0"
 			     : "+&r" (set)
 			     : "r"   (target),
 			       "r"   (compare)
@@ -154,7 +169,7 @@ CK_CC_INLINE static bool
 ck_pr_cas_64(uint64_t *target, uint64_t compare, uint64_t set)
 {

-	__asm__ __volatile__("casx [%1], %2, %0"
+	__asm__ __volatile__(CK_PR_INS_CASX " [%1] " CK_PR_ASI_ATOMIC ", %2, %0"
 			     : "+&r" (set)
 			     : "r"   (target),
 			       "r"   (compare)
@@ -181,7 +196,7 @@ ck_pr_cas_ptr_value(void *target, void *compare, void 
 	CK_CC_INLINE static bool					\
 	ck_pr_cas_##N##_value(T *target, T compare, T set, T *value)	\
 	{								\
-		__asm__ __volatile__("cas [%1], %2, %0"			\
+		__asm__ __volatile__(CK_PR_INS_CAS " [%1] " CK_PR_ASI_ATOMIC ", %2, %0" \
			: "+&r" (set)					\
			: "r"   (target),				\
			  "r"   (compare)				\
@@ -192,7 +207,7 @@ ck_pr_cas_ptr_value(void *target, void *compare, void 
 	CK_CC_INLINE static bool					\
 	ck_pr_cas_##N(T *target, T compare, T set)			\
 	{								\
-		__asm__ __volatile__("cas [%1], %2, %0"			\
+		__asm__ __volatile__(CK_PR_INS_CAS " [%1] " CK_PR_ASI_ATOMIC ", %2, %0" \
			: "+&r" (set)					\
			: "r"   (target),				\
			  "r"   (compare)				\
@@ -211,7 +226,7 @@ CK_PR_CAS(int, int)
 	ck_pr_fas_##N(T *target, T update)				\
 	{								\
									\
-		__asm__ __volatile__("swap [%1], %0"			\
+		__asm__ __volatile__(CK_PR_INS_SWAP " [%1] " CK_PR_ASI_ATOMIC ", %0" \
			: "+&r" (update)				\
			: "r"   (target)				\
			: "memory");					\
@@ -223,6 +238,11 @@ CK_PR_FAS(uint, unsigned int)
 CK_PR_FAS(32, uint32_t)

 #undef CK_PR_FAS
+
+#undef CK_PR_INS_CAS
+#undef CK_PR_INS_CASX
+#undef CK_PR_INS_SWAP
+#undef CK_PR_ASI_ATOMIC

 #endif /* CK_PR_SPARCV9_H */
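The net effect of the sparcv9 change is plain preprocessor string
pasting: the mnemonic, the memory operand, and the stringified ASI fuse
into a single asm template. A hedged reduction follows; the "0x4" ASI
string is a placeholder standing in for whatever __XSTRING(__ASI_ATOMIC)
produces on a given kernel, not a claim about its actual value, and
cas_64 is an illustrative name:

#include <stdbool.h>
#include <stdint.h>

#define CK_PR_INS_CASX		"casxa"
#define CK_PR_ASI_ATOMIC	"0x4"	/* placeholder ASI for illustration */

/* The template below pastes to: "casxa [%1] 0x4, %2, %0" (sparcv9 only). */
static inline bool
cas_64(uint64_t *target, uint64_t compare, uint64_t set)
{

	__asm__ __volatile__(CK_PR_INS_CASX " [%1] " CK_PR_ASI_ATOMIC ", %2, %0"
	    : "+&r" (set)
	    : "r" (target), "r" (compare)
	    : "memory");
	return (set == compare);
}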
Modified: stable/11/sys/contrib/ck/include/spinlock/hclh.h
==============================================================================
--- stable/11/sys/contrib/ck/include/spinlock/hclh.h	Sun Jan 27 14:27:53 2019	(r343493)
+++ stable/11/sys/contrib/ck/include/spinlock/hclh.h	Sun Jan 27 14:36:52 2019	(r343494)
@@ -81,6 +81,8 @@ ck_spinlock_hclh_lock(struct ck_spinlock_hclh **glob_q
 	thread->wait = true;
 	thread->splice = false;
 	thread->cluster_id = (*local_queue)->cluster_id;
+	/* Make sure previous->previous doesn't appear to be NULL */
+	thread->previous = *local_queue;

 	/* Serialize with respect to update of local queue. */
 	ck_pr_fence_store_atomic();
@@ -91,13 +93,15 @@ ck_spinlock_hclh_lock(struct ck_spinlock_hclh **glob_q

 	/* Wait until previous thread from the local queue is done with lock. */
 	ck_pr_fence_load();
-	if (previous->previous != NULL &&
-	    previous->cluster_id == thread->cluster_id) {
-		while (ck_pr_load_uint(&previous->wait) == true)
+	if (previous->previous != NULL) {
+		while (ck_pr_load_uint(&previous->wait) == true &&
+		    ck_pr_load_int(&previous->cluster_id) == thread->cluster_id &&
+		    ck_pr_load_uint(&previous->splice) == false)
 			ck_pr_stall();

 		/* We're head of the global queue, we're done */
-		if (ck_pr_load_uint(&previous->splice) == false)
+		if (ck_pr_load_int(&previous->cluster_id) == thread->cluster_id &&
+		    ck_pr_load_uint(&previous->splice) == false)
 			return;
 	}

Modified: stable/11/sys/contrib/ck/src/ck_barrier_combining.c
==============================================================================
--- stable/11/sys/contrib/ck/src/ck_barrier_combining.c	Sun Jan 27 14:27:53 2019	(r343493)
+++ stable/11/sys/contrib/ck/src/ck_barrier_combining.c	Sun Jan 27 14:36:52 2019	(r343494)
@@ -35,7 +35,7 @@ struct ck_barrier_combining_queue {
 	struct ck_barrier_combining_group *tail;
 };

-CK_CC_INLINE static struct ck_barrier_combining_group *
+static struct ck_barrier_combining_group *
 ck_barrier_combining_queue_dequeue(struct ck_barrier_combining_queue *queue)
 {
 	struct ck_barrier_combining_group *front = NULL;
@@ -48,7 +48,7 @@ ck_barrier_combining_queue_dequeue(struct ck_barrier_c
 	return front;
 }

-CK_CC_INLINE static void
+static void
 ck_barrier_combining_insert(struct ck_barrier_combining_group *parent,
     struct ck_barrier_combining_group *tnode,
     struct ck_barrier_combining_group **child)
@@ -72,7 +72,7 @@ ck_barrier_combining_insert(struct ck_barrier_combinin
  * into the barrier's tree. We use a queue to implement this
  * traversal.
  */
-CK_CC_INLINE static void
+static void
 ck_barrier_combining_queue_enqueue(struct ck_barrier_combining_queue *queue,
     struct ck_barrier_combining_group *node_value)
 {
@@ -185,10 +185,10 @@ ck_barrier_combining_aux(struct ck_barrier_combining *
 		ck_pr_fence_store();
 		ck_pr_store_uint(&tnode->sense, ~tnode->sense);
 	} else {
-		ck_pr_fence_memory();
 		while (sense != ck_pr_load_uint(&tnode->sense))
 			ck_pr_stall();
 	}
+	ck_pr_fence_memory();

 	return;
 }
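The ck_barrier_combining_aux() hunk is the behavioral one: the full fence
previously sat before the spin loop, where it could not order anything
against the loop's exit; placed after the conditional, both the
sense-flipping winner and the spinning waiters get the fence on the way
out. A minimal stand-alone illustration of that placement (barrier_wait
and its parameters are hypothetical, not CK API):

#include <ck_pr.h>

/*
 * Spin until the sense word flips, then fence: memory operations that
 * follow the barrier cannot be reordered ahead of the observed flip,
 * matching the corrected placement above.
 */
static void
barrier_wait(unsigned int *sense_word, unsigned int my_sense)
{

	while (ck_pr_load_uint(sense_word) != my_sense)
		ck_pr_stall();
	ck_pr_fence_memory();	/* acquire-style ordering on barrier exit */
}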
Modified: stable/11/sys/contrib/ck/src/ck_epoch.c
==============================================================================
--- stable/11/sys/contrib/ck/src/ck_epoch.c	Sun Jan 27 14:27:53 2019	(r343493)
+++ stable/11/sys/contrib/ck/src/ck_epoch.c	Sun Jan 27 14:36:52 2019	(r343494)
@@ -127,6 +127,14 @@
  */
 #define CK_EPOCH_GRACE 3U

+/*
+ * CK_EPOCH_LENGTH must be a power-of-2 (because (CK_EPOCH_LENGTH - 1) is used
+ * as a mask, and it must be at least 3 (see comments above).
+ */
+#if (CK_EPOCH_LENGTH < 3 || (CK_EPOCH_LENGTH & (CK_EPOCH_LENGTH - 1)) != 0)
+#error "CK_EPOCH_LENGTH must be a power of 2 and >= 3"
+#endif
+
 enum {
 	CK_EPOCH_STATE_USED = 0,
 	CK_EPOCH_STATE_FREE = 1
@@ -348,8 +356,8 @@ ck_epoch_scan(struct ck_epoch *global,
 	return NULL;
 }

-static void
-ck_epoch_dispatch(struct ck_epoch_record *record, unsigned int e)
+static unsigned int
+ck_epoch_dispatch(struct ck_epoch_record *record, unsigned int e, ck_stack_t *deferred)
 {
 	unsigned int epoch = e & (CK_EPOCH_LENGTH - 1);
 	ck_stack_entry_t *head, *next, *cursor;
@@ -362,7 +370,11 @@ ck_epoch_dispatch(struct ck_epoch_record *record, unsi
 		    ck_epoch_entry_container(cursor);

 		next = CK_STACK_NEXT(cursor);
-		entry->function(entry);
+		if (deferred != NULL)
+			ck_stack_push_spnc(deferred, &entry->stack_entry);
+		else
+			entry->function(entry);
+
 		i++;
 	}
@@ -378,7 +390,7 @@ ck_epoch_dispatch(struct ck_epoch_record *record, unsi
 		ck_pr_sub_uint(&record->n_pending, i);
 	}

-	return;
+	return i;
 }

 /*
@@ -390,7 +402,7 @@ ck_epoch_reclaim(struct ck_epoch_record *record)
 	unsigned int epoch;

 	for (epoch = 0; epoch < CK_EPOCH_LENGTH; epoch++)
-		ck_epoch_dispatch(record, epoch);
+		ck_epoch_dispatch(record, epoch, NULL);

 	return;
 }
@@ -551,35 +563,61 @@ ck_epoch_barrier_wait(struct ck_epoch_record *record, 
 * is far from ideal too.
 */
 bool
-ck_epoch_poll(struct ck_epoch_record *record)
+ck_epoch_poll_deferred(struct ck_epoch_record *record, ck_stack_t *deferred)
 {
 	bool active;
 	unsigned int epoch;
 	struct ck_epoch_record *cr = NULL;
 	struct ck_epoch *global = record->global;
+	unsigned int n_dispatch;

 	epoch = ck_pr_load_uint(&global->epoch);

 	/* Serialize epoch snapshots with respect to global epoch. */
 	ck_pr_fence_memory();
+
+	/*
+	 * At this point, epoch is the current global epoch value.
+	 * There may or may not be active threads which observed epoch - 1.
+	 * (ck_epoch_scan() will tell us that). However, there should be
+	 * no active threads which observed epoch - 2.
+	 *
+	 * Note that checking epoch - 2 is necessary, as race conditions can
+	 * allow another thread to increment the global epoch before this
+	 * thread runs.
+	 */
+	n_dispatch = ck_epoch_dispatch(record, epoch - 2, deferred);
+
 	cr = ck_epoch_scan(global, cr, epoch, &active);
-	if (cr != NULL) {
-		record->epoch = epoch;
-		return false;
-	}
+	if (cr != NULL)
+		return (n_dispatch > 0);

 	/* We are at a grace period if all threads are inactive. */
 	if (active == false) {
 		record->epoch = epoch;
 		for (epoch = 0; epoch < CK_EPOCH_LENGTH; epoch++)
-			ck_epoch_dispatch(record, epoch);
+			ck_epoch_dispatch(record, epoch, deferred);

 		return true;
 	}

-	/* If an active thread exists, rely on epoch observation. */
+	/*
+	 * If an active thread exists, rely on epoch observation.
+	 *
+	 * All the active threads entered the epoch section during
+	 * the current epoch. Therefore, we can now run the handlers
+	 * for the immediately preceding epoch and attempt to
+	 * advance the epoch if it hasn't been already.
+	 */
 	(void)ck_pr_cas_uint(&global->epoch, epoch, epoch + 1);

-	ck_epoch_dispatch(record, epoch + 1);
+	ck_epoch_dispatch(record, epoch - 1, deferred);
 	return true;
+}
+
+bool
+ck_epoch_poll(struct ck_epoch_record *record)
+{
+
+	return ck_epoch_poll_deferred(record, NULL);
 }
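Putting the new entry point to work: a caller passes a private stack to
ck_epoch_poll_deferred(), then runs the harvested callbacks outside the
poll loop. This follows ck_stack.h and the declarations above; note that
ck_epoch_dispatch() pushes entries by their stack_entry member, which is
what the container macro unwinds here. drain_deferred and
epoch_entry_container are illustrative names, and the overall shape is a
sketch of the pattern rather than a definitive consumer:

#include <ck_epoch.h>
#include <ck_stack.h>

/* Recover the enclosing ck_epoch_entry from its stack_entry linkage. */
CK_STACK_CONTAINER(struct ck_epoch_entry, stack_entry, epoch_entry_container)

static void
drain_deferred(ck_epoch_record_t *record)
{
	ck_stack_t pending = CK_STACK_INITIALIZER;
	ck_stack_entry_t *cursor, *next;

	/* Harvest ready callbacks instead of running them in the poll loop. */
	if (ck_epoch_poll_deferred(record, &pending) == false)
		return;

	/* Execute them in caller-chosen context, e.g. with locks dropped. */
	for (cursor = ck_stack_batch_pop_npsc(&pending); cursor != NULL;
	    cursor = next) {
		ck_epoch_entry_t *entry = epoch_entry_container(cursor);

		next = CK_STACK_NEXT(cursor);
		entry->function(entry);
	}
}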