Date: Thu, 10 Apr 2014 18:58:02 -0400
From: Julio Merino <jmmv@freebsd.org>
To: freebsd-ppc@freebsd.org
Subject: Fix db48 in powerpc64
Message-ID: <13D61493-60D9-480A-A0ED-DC29C3795C73@freebsd.org>
Hello,

The databases/db48 port does not build under FreeBSD powerpc64 due to a
return type mismatch in an assembly routine.  The patch below fixes the
*build*, but I have zero clue if it's correct or not; I'm just following
the structure of all the other routines.  I'm also pasting below
everything related to powerpc in the relevant files, hoping that the
comments give enough context.

Could someone that understands powerpc assembly please review this?

Thanks!

----- existing contents -----

/*********************************************************************
 * PowerPC/gcc assembly.
 *********************************************************************/
#if defined(HAVE_MUTEX_PPC_GCC_ASSEMBLY)
typedef u_int32_t tsl_t;

#ifdef LOAD_ACTUAL_MUTEX_CODE
/*
 * The PowerPC does a sort of pseudo-atomic locking.  You set up a
 * 'reservation' on a chunk of memory containing a mutex by loading the
 * mutex value with LWARX.  If the mutex has an 'unlocked' (arbitrary)
 * value, you then try storing into it with STWCX.  If no other process or
 * thread broke your 'reservation' by modifying the memory containing the
 * mutex, then the STWCX succeeds; otherwise it fails and you try to get
 * a reservation again.
 *
 * While mutexes are explicitly 4 bytes, a 'reservation' applies to an
 * entire cache line, normally 32 bytes, aligned naturally.  If the mutex
 * lives near data that gets changed a lot, there's a chance that you'll
 * see more broken reservations than you might otherwise.  The only
 * situation in which this might be a problem is if one processor is
 * beating on a variable in the same cache block as the mutex while another
 * processor tries to acquire the mutex.  That's bad news regardless
 * because of the way it bashes caches, but if you can't guarantee that a
 * mutex will reside in a relatively quiescent cache line, you might
 * consider padding the mutex to force it to live in a cache line by
 * itself.  No, you aren't guaranteed that cache lines are 32 bytes.
 * Some embedded processors use 16-byte cache lines, while some 64-bit
 * processors use 128-byte cache lines.  But assuming a 32-byte cache line
 * won't get you into trouble for now.
 *
 * If mutex locking is a bottleneck, then you can speed it up by adding a
 * regular LWZ load before the LWARX load, so that you can test for the
 * common case of a locked mutex without wasting cycles making a
 * reservation.
 *
 * gcc/ppc: 0 is clear, 1 is set.
 */
static inline int
MUTEX_SET(int *tsl) {
	int __r;
	__asm__ volatile (
"0:				\n\t"
"	lwarx	%0,0,%1	\n\t"
"	cmpwi	%0,0		\n\t"
"	bne-	1f		\n\t"
"	stwcx.	%1,0,%1	\n\t"
"	isync			\n\t"
"	beq+	2f		\n\t"
"	b	0b		\n\t"
"1:				\n\t"
"	li	%1,0		\n\t"
"2:				\n\t"
	 : "=&r" (__r), "+r" (tsl)
	 :
	 : "cr0", "memory");
	 return (int)tsl;
}

static inline int
MUTEX_UNSET(tsl_t *tsl) {
	__asm__ volatile("sync" : : : "memory");
	return *tsl = 0;
}
#define	MUTEX_INIT(tsl)	MUTEX_UNSET(tsl)
#endif
#endif

----- diff -----

Index: files/patch-dbinc_mutex_int.h
===================================================================
--- files/patch-dbinc_mutex_int.h	(revision 0)
+++ files/patch-dbinc_mutex_int.h	(working copy)
@@ -0,0 +1,11 @@
+--- ../dbinc/mutex_int.h.orig	2014-04-04 12:52:15.255739375 -0400
++++ ../dbinc/mutex_int.h	2014-04-04 12:52:27.454733578 -0400
+@@ -596,7 +596,7 @@ MUTEX_SET(int *tsl) {
+ 	 : "=&r" (__r), "+r" (tsl)
+ 	 :
+ 	 : "cr0", "memory");
+-	 return (int)tsl;
++	 return !__r;
+ }
+ 
+ static inline int