29 Atomic operations library [atomics]

29.3 Order and consistency [atomics.order]

namespace std {
  typedef enum memory_order {
    memory_order_relaxed, memory_order_consume, memory_order_acquire, 
    memory_order_release, memory_order_acq_rel, memory_order_seq_cst
  } memory_order;
}

The enumeration memory_order specifies the detailed regular (non-atomic) memory synchronization order as defined in [intro.multithread] and may provide for operation ordering. Its enumerated values and their meanings are as follows:

  • memory_order_relaxed: no operation orders memory.

  • memory_order_release, memory_order_acq_rel, and memory_order_seq_cst: a store operation performs a release operation on the affected memory location.

  • memory_order_consume: a load operation performs a consume operation on the affected memory location.

  • memory_order_acquire, memory_order_acq_rel, and memory_order_seq_cst: a load operation performs an acquire operation on the affected memory location.

[ Note: Atomic operations specifying memory_order_relaxed are relaxed with respect to memory ordering. Implementations must still guarantee that any given atomic access to a particular atomic object be indivisible with respect to all other atomic accesses to that object. — end note ]
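[ Example: A minimal, illustrative sketch (the counter object and the particular operations chosen are assumptions, not part of the synopsis above) of passing these enumerators to atomic operations:

#include <atomic>

std::atomic<int> counter{0};

void touch() {
  // memory_order_relaxed: the increment is indivisible but orders no
  // other memory operations.
  counter.fetch_add(1, std::memory_order_relaxed);

  // memory_order_seq_cst: the load additionally participates in the
  // single total order S described below.
  int observed = counter.load(std::memory_order_seq_cst);
  (void)observed;
}

 — end example ]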

An atomic operation A that performs a release operation on an atomic object M synchronizes with an atomic operation B that performs an acquire operation on M and takes its value from any side effect in the release sequence headed by A.
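[ Example: A minimal sketch of the release/acquire pairing described above; the objects data and ready and the two functions are illustrative assumptions, not part of this clause:

#include <atomic>
#include <cassert>
#include <thread>

int data = 0;
std::atomic<bool> ready{false};

void producer() {
  data = 42;                                     // ordinary write
  ready.store(true, std::memory_order_release);  // release operation A on ready
}

void consumer() {
  while (!ready.load(std::memory_order_acquire)) // acquire operation B on ready
    ;
  // A synchronizes with B, so the write to data happens before this read.
  assert(data == 42);
}

int main() {
  std::thread t1(producer), t2(consumer);
  t1.join();
  t2.join();
}

 — end example ]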

There shall be a single total order S on all memory_order_seq_cst operations, consistent with the “happens before” order and modification orders for all affected locations, such that each memory_order_seq_cst operation B that loads a value from an atomic object M observes one of the following values:

  • the result of the last modification A of M that precedes B in S, if it exists, or

  • if A exists, the result of some modification of M that is not memory_order_seq_cst and that does not happen before A, or

  • if A does not exist, the result of some modification of M that is not memory_order_seq_cst.

[ Note: Although it is not explicitly required that S include locks, it can always be extended to an order that does include lock and unlock operations, since the ordering between those is already included in the “happens before” ordering. — end note ]
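[ Example: A sketch of the guarantee provided by the total order S, using the store-buffering pattern; the objects x, y, r1, and r2 and the two functions are illustrative:

#include <atomic>

std::atomic<int> x{0}, y{0};
int r1, r2;

// thread1 and thread2 are meant to run concurrently, e.g. via std::thread.
void thread1() {
  x.store(1, std::memory_order_seq_cst);
  r1 = y.load(std::memory_order_seq_cst);
}

void thread2() {
  y.store(1, std::memory_order_seq_cst);
  r2 = x.load(std::memory_order_seq_cst);
}

// After both threads finish, r1 == 0 && r2 == 0 is impossible: whichever
// store is earlier in S precedes the other thread's load in S, so that
// load observes the stored value (or a later modification).

 — end example ]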

For an atomic operation B that reads the value of an atomic object M, if there is a memory_order_seq_cst fence X sequenced before B, then B observes either the last memory_order_seq_cst modification of M preceding X in the total order S or a later modification of M in its modification order.

For atomic operations A and B on an atomic object M, where A modifies M and B takes its value, if there is a memory_order_seq_cst fence X such that A is sequenced before X and B follows X in S, then B observes either the effects of A or a later modification of M in its modification order.

For atomic operations A and B on an atomic object M, where A modifies M and B takes its value, if there are memory_order_seq_cst fences X and Y such that A is sequenced before X, Y is sequenced before B, and X precedes Y in S, then B observes either the effects of A or a later modification of M in its modification order.
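[ Example: A sketch of the preceding fence rules, assuming the two functions below run in different threads; the names writer, reader, and r are illustrative:

#include <atomic>

std::atomic<int> x{0};
int r = -1;

void writer() {
  x.store(1, std::memory_order_relaxed);               // A
  std::atomic_thread_fence(std::memory_order_seq_cst); // X
}

void reader() {
  std::atomic_thread_fence(std::memory_order_seq_cst); // Y
  r = x.load(std::memory_order_relaxed);               // B
}

// If X precedes Y in S, then B observes either the value stored by A or a
// later modification of x, so r == 1 in that execution.

 — end example ]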

For atomic modifications A and B of an atomic object M, B occurs later than A in the modification order of M if:

  • there is a memory_order_seq_cst fence X such that A is sequenced before X, and X precedes B in S, or

  • there is a memory_order_seq_cst fence Y such that Y is sequenced before B, and A precedes Y in S, or

  • there are memory_order_seq_cst fences X and Y such that A is sequenced before X, Y is sequenced before B, and X precedes Y in S.
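[ Example: A sketch of the third bullet above, with two relaxed stores to the same object separated by memory_order_seq_cst fences; the function names are illustrative, and the two functions are assumed to run in different threads:

#include <atomic>

std::atomic<int> m{0};

void first_writer() {
  m.store(1, std::memory_order_relaxed);               // A
  std::atomic_thread_fence(std::memory_order_seq_cst); // X
}

void second_writer() {
  std::atomic_thread_fence(std::memory_order_seq_cst); // Y
  m.store(2, std::memory_order_relaxed);                // B
}

// If X precedes Y in S, then B occurs later than A in the modification
// order of m; that is, the modification order of m is 0, then 1, then 2.

 — end example ]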

[ Note: memory_order_seq_cst ensures sequential consistency only for a program that is free of data races and uses exclusively memory_order_seq_cst operations. Any use of weaker ordering will invalidate this guarantee unless extreme care is used. In particular, memory_order_seq_cst fences ensure a total order only for the fences themselves. Fences cannot, in general, be used to restore sequential consistency for atomic operations with weaker ordering specifications. — end note ]

Implementations should ensure that no “out-of-thin-air” values are computed that circularly depend on their own computation.

[ Note: For example, with x and y initially zero,

// Thread 1:
r1 = y.load(memory_order_relaxed);
x.store(r1, memory_order_relaxed);
// Thread 2:
r2 = x.load(memory_order_relaxed);
y.store(r2, memory_order_relaxed);

should not produce r1 == r2 == 42, since the store of 42 to y is only possible if the store to x stores 42, which circularly depends on the store to y storing 42. Note that without this restriction, such an execution is possible.  — end note ]

[ Note: The recommendation similarly disallows r1 == r2 == 42 in the following example, with x and y again initially zero:

// Thread 1:
r1 = x.load(memory_order_relaxed);
if (r1 == 42) y.store(42, memory_order_relaxed);
// Thread 2:
r2 = y.load(memory_order_relaxed);
if (r2 == 42) x.store(42, memory_order_relaxed);

 — end note ]

Atomic read-modify-write operations shall always read the last value (in the modification order) written before the write associated with the read-modify-write operation.
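[ Example: A sketch of the read-modify-write guarantee: concurrent fetch_add calls each read the latest value in the modification order, so no increment is lost. The counter name and the thread and iteration counts are illustrative:

#include <atomic>
#include <cassert>
#include <thread>
#include <vector>

std::atomic<int> counter{0};

int main() {
  std::vector<std::thread> threads;
  for (int i = 0; i != 4; ++i)
    threads.emplace_back([] {
      for (int j = 0; j != 1000; ++j)
        // Each fetch_add reads the last value in the modification order of
        // counter and writes that value plus one.
        counter.fetch_add(1, std::memory_order_relaxed);
    });
  for (auto& t : threads)
    t.join();
  assert(counter.load() == 4000);
}

 — end example ]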

Implementations should make atomic stores visible to atomic loads within a reasonable amount of time.

template <class T> T kill_dependency(T y) noexcept;

Effects: The argument does not carry a dependency to the return value ([intro.multithread]).

Returns: y.
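[ Example: A sketch of kill_dependency used with a memory_order_consume load; the objects table and idx and the reader function are illustrative assumptions:

#include <atomic>

extern int table[64];
std::atomic<int> idx{0};

int read_entry() {
  int i = idx.load(std::memory_order_consume);
  // i carries a dependency from the load; kill_dependency ends that
  // dependency chain, so the indexing below need not be dependency-ordered
  // after the load.
  return table[std::kill_dependency(i)];
}

 — end example ]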