The memory ordering in the rwlock was all wrong. I am copying excerpts from
https://en.cppreference.com/w/c/atomic/memory_order#Relaxed_ordering for the
convenience of the reader:

  Relaxed ordering

  Atomic operations tagged memory_order_relaxed are not synchronization
  operations; they do not impose an order among concurrent memory accesses.
  They only guarantee atomicity and modification order consistency.

  Sequentially-consistent ordering

  Atomic operations tagged memory_order_seq_cst not only order memory the
  same way as release/acquire ordering (everything that happened-before a
  store in one thread becomes a visible side effect in the thread that did
  the load), but also establish a single total modification order of all
  atomic operations that are so tagged.

This basically means that we had no (or only weak) synchronization between
threads using the same variables in the rwlock structure. There should not be
a significant performance drop, because the critical sections were already
protected by:

    while (1) {
        if (relaxed_atomic_operation) {
            break;
        }
        LOCK(lock);
        if (!relaxed_atomic_operation) {
            WAIT(sem, lock);
        }
        UNLOCK(lock);
    }

I would add one more thing to "Don't do your own crypto, folks.":

  - Also don't do your own locking, folks.

As part of this commit, I have also cleaned up the #ifdef spaghetti and fixed
the isc_atomic API usage.
f71a8d11