Date: Thu, 7 Aug 2003 10:41:34 -0400 (EDT)
From: Daniel Eischen <eischen@vigrid.com>
To: Andrew Gallatin <gallatin@cs.duke.edu>
Cc: alpha@freebsd.org
Subject: Re: Atomic swap
Message-ID: <Pine.GSO.4.10.10308071039270.12201-100000@pcnet5.pcnet.com>
In-Reply-To: <16178.25370.731486.809755@grasshopper.cs.duke.edu>
On Thu, 7 Aug 2003, Andrew Gallatin wrote:
> 
> Daniel Eischen writes:
> > [ I'm not subscribed to alpha@; please keep me on the CC ]
> > 
> > I need an atomic swap function for libpthread.  Here's my hack
> > of an implementation:
> > 
> > 	/*
> > 	 * Atomic swap:
> > 	 *	Atomic (tmp = *dst, *dst = val), then *res = tmp
> > 	 *
> > 	 *	void atomic_swap_long(long *dst, long val, long *res);
> > 	 */
> > 	static __inline
> > 	void atomic_swap_long(volatile long *dst, long val, long *res)
> > 	{
> > 		u_int64_t result;
> > 
> > 		__asm __volatile (
> > 			"1:\tldq_l %0,%1\n\t"
> > 			"stq_c %2,%1\n\t"
> > 			"beq %2,2f\n\t"	/* Why is this beq instead of bne 1b? */
> > 			"br 3f\n"
> > 			"2:\tbr 1b\n"
> > 			"3:\n"
> > 			: "=&r" (result)
> > 			: "m" (*dst), "r" (val)
> > 			: "memory");
> > 
> > 		*res = result;
> > 	}
> > 
> > As annotated above, there seems to be one more branch than
> > necessary.
> 
> It's actually an optimization.  Alphas predict that backward branches
> will always be taken (think loops).  If you were to branch directly
> back to 1:, then if the store succeeds (which it nearly always
> should), the CPU would have been betting on taking the branch,
> and that would slow things down.

OK.

> 
> > Can someone look this over for me?  I really don't quite
> > know what I'm doing when it comes to inline assembly.
> 
> I think it looks OK, but I'm also terrible at inline asm.

Yeah, me too.  It took me quite a few tries to hit upon something
that seemed to work.

-- 
Dan Eischen
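[Editor's note: for comparison only, here is the same (tmp = *dst, *dst = val, *res = tmp) operation expressed with C11 <stdatomic.h>, which postdates this thread. On LL/SC machines like the Alpha, atomic_exchange() compiles down to essentially the ldq_l/stq_c retry loop discussed above, without hand-written constraints. The function name and signature below mirror the one in the thread but are illustrative, not part of any FreeBSD API.]

```c
#include <stdatomic.h>

/*
 * Sketch of a portable analogue of the thread's atomic_swap_long().
 * atomic_exchange() atomically stores val into *dst and returns the
 * previous value; on LL/SC architectures the compiler generates the
 * load-locked/store-conditional retry loop itself.
 */
static inline void
atomic_swap_long_c11(volatile atomic_long *dst, long val, long *res)
{
	*res = atomic_exchange(dst, val);
}
```

One difference from the inline-asm version: the compiler, not the programmer, tracks which registers the store-conditional clobbers, which removes a whole class of constraint mistakes.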