Date: Thu, 7 Aug 2003 10:32:58 -0400 (EDT)
From: Andrew Gallatin <gallatin@cs.duke.edu>
To: deischen@freebsd.org
Cc: alpha@freebsd.org
Subject: Re: Atomic swap
Message-ID: <16178.25370.731486.809755@grasshopper.cs.duke.edu>
In-Reply-To: <Pine.GSO.4.10.10308070941260.2511-100000@pcnet5.pcnet.com>
References: <Pine.GSO.4.10.10308070941260.2511-100000@pcnet5.pcnet.com>
Daniel Eischen writes:
 > [ I'm not subscribed to alpha@; please keep me on the CC ]
 >
 > I need an atomic swap function for libpthread.  Here's my hack
 > of an implementation:
 >
 > /*
 >  * Atomic swap:
 >  *   Atomic (tmp = *dst, *dst = val), then *res = tmp
 >  *
 >  *   void atomic_swap_long(long *dst, long val, long *res);
 >  */
 > static __inline
 > void atomic_swap_long(volatile long *dst, long val, long *res)
 > {
 > 	u_int64_t result;
 >
 > 	__asm __volatile (
 > 		"1:\tldq_l %0,%1\n\t"
 > 		"stq_c %2,%1\n\t"
 > 		"beq %2,2f\n\t"	/* Why is this beq instead of bne 1b? */
 > 		"br 3f\n"
 > 		"2:\tbr 1b\n"
 > 		"3:\n"
 > 		: "=&r" (result)
 > 		: "m" (*dst), "r" (val)
 > 		: "memory");
 >
 > 	*res = result;
 > }
 >
 > As annotated above, there seems to be one more branch than
 > necessary.

It's actually an optimization.  Alphas predict that backward branches
will always be taken (think loops).  If you were to branch directly
back to 1:, then when the store succeeds (which it nearly always
should), the CPU would have been betting on taking the branch, and
that would slow things down.

 > Can someone look this over for me?  I really don't quite
 > know what I'm doing when it comes to inline assembly.

I think it looks OK, but I'm also terrible at inline asm.

Drew