From my understanding:
- A read barrier ensures that all read operations after the barrier happen after the read operations before it.
- A write barrier ensures that all write operations after the barrier happen after the write operations before it.
It seems to me that memory barriers are simply a way of keeping reads and writes in the right order.
However, I've seen text online implying that a barrier replaces what the volatile keyword does. From my understanding:
- Applying volatile to a variable makes the generated code read it from "main memory" every time it is read (acquire semantics, so a read barrier?) and write it to "main memory" every time it is written (release semantics, so a write barrier?).
However, I haven't read anywhere that a read barrier can force the compiler to emit code that re-reads from "main memory" every time, yet I see it being implied:
http://www.mjmwired.net/kernel/Document ... armful.txt :
Another situation where one might be tempted to use volatile is
when the processor is busy-waiting on the value of a variable. The right
way to perform a busy wait is:

    while (my_variable != what_i_want)
        cpu_relax();
(c) if you spin on a value [that's] changing, you should use "cpu_relax()" or
"barrier()" anyway, which will force gcc to re-load any values from
memory over the loop.
http://msdn.microsoft.com/en-us/library/f20w0x5e.aspx
Marking memory with a memory barrier is similar to marking memory with the volatile (C++) keyword. However, a memory barrier is more efficient because reads and writes are forced to complete at specific points in the program rather than globally. The optimizations that can occur if a memory barrier is used cannot occur if the variable is declared volatile.
Would appreciate any clarification on my confused understanding of memory barriers and the volatile keyword.