```
namespace std {
  typedef enum memory_order {
    memory_order_relaxed, memory_order_consume, memory_order_acquire,
    memory_order_release, memory_order_acq_rel, memory_order_seq_cst
  } memory_order;
}
```

The enumeration memory_order specifies the detailed regular (non-atomic) memory synchronization order as defined in [intro.multithread] and may provide for operation ordering. Its enumerated values and their meanings are as follows:

memory_order_relaxed: no operation orders memory.

memory_order_release, memory_order_acq_rel, and memory_order_seq_cst: a store operation performs a release operation on the affected memory location.

memory_order_consume: a load operation performs a consume operation on the affected memory location.

memory_order_acquire, memory_order_acq_rel, and memory_order_seq_cst: a load operation performs an acquire operation on the affected memory location.

[ *Note:* Atomic operations specifying memory_order_relaxed are relaxed
with respect to memory ordering. Implementations must still guarantee that any
given atomic access to a particular atomic object be indivisible with respect
to all other atomic accesses to that object. * — end note* ]
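As an illustrative (non-normative) sketch of this indivisibility guarantee, a counter incremented with memory_order_relaxed never loses updates even though the increments impose no ordering on other memory; the helper below is hypothetical and not part of the standard text:

```
#include <atomic>
#include <thread>
#include <vector>

// Sketch: memory_order_relaxed imposes no ordering, but each fetch_add is
// still indivisible, so concurrent increments are never lost.
int count_relaxed(int threads, int per_thread) {
    std::atomic<int> counter{0};
    std::vector<std::thread> pool;
    for (int t = 0; t < threads; ++t)
        pool.emplace_back([&] {
            for (int i = 0; i < per_thread; ++i)
                counter.fetch_add(1, std::memory_order_relaxed);
        });
    for (auto& th : pool)
        th.join();
    return counter.load(std::memory_order_relaxed);
}
```

The final value always equals the total number of increments; only the atomicity of each access, not any cross-thread ordering, is relied upon.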

An atomic operation *A* that performs a release operation on an atomic
object *M* synchronizes with an atomic operation *B* that performs
an acquire operation on *M* and takes its value from any side effect in the
release sequence headed by *A*.
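A non-normative sketch of this synchronizes-with relationship is the classic message-passing idiom: the release store (*A*) to a flag pairs with the acquire load (*B*) that reads its value, making the preceding non-atomic write visible. The names below are illustrative only:

```
#include <atomic>
#include <thread>

// Sketch: the release store to `ready` (A) synchronizes with the acquire
// load (B) that observes it, so the plain write to `payload` happens
// before the read after the spin loop.
int receive() {
    int payload = 0;
    std::atomic<bool> ready{false};
    std::thread sender([&] {
        payload = 42;                                  // plain (non-atomic) write
        ready.store(true, std::memory_order_release);  // release operation A
    });
    while (!ready.load(std::memory_order_acquire))     // acquire operation B
        ;                                              // spin until A's value is seen
    int seen = payload;                                // guaranteed to observe 42
    sender.join();
    return seen;
}
```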

There shall be a single total order *S* on all memory_order_seq_cst
operations, consistent with the “happens before” order and modification orders for all
affected locations, such that each memory_order_seq_cst operation
*B* that loads a
value from an atomic object *M*
observes one of the following values:

the result of the last modification *A* of *M* that precedes *B* in *S*, if it exists, or

if *A* exists, the result of some modification of *M* in the visible sequence of side effects with respect to *B* that is not memory_order_seq_cst and that does not happen before *A*, or

if *A* does not exist, the result of some modification of *M* in the visible sequence of side effects with respect to *B* that is not memory_order_seq_cst.
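A non-normative consequence of the single total order *S* can be sketched with a Dekker-style test: when every operation is memory_order_seq_cst, at least one of the two loads must observe the other thread's store, so both loads returning zero is impossible. The function below is illustrative only:

```
#include <atomic>
#include <thread>

// Sketch: under the single total order S on seq_cst operations, one of the
// two stores must precede both loads, so r1 == 0 && r2 == 0 cannot occur.
bool both_zero_once() {
    std::atomic<int> x{0}, y{0};
    int r1 = -1, r2 = -1;
    std::thread t1([&] {
        x.store(1, std::memory_order_seq_cst);
        r1 = y.load(std::memory_order_seq_cst);
    });
    std::thread t2([&] {
        y.store(1, std::memory_order_seq_cst);
        r2 = x.load(std::memory_order_seq_cst);
    });
    t1.join();
    t2.join();
    return r1 == 0 && r2 == 0;  // forbidden outcome under seq_cst
}
```

With any weaker ordering on the stores and loads, this outcome becomes permitted.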

[ *Note:* Although it is not explicitly required that *S* include locks, it can
always be extended to an order that does include lock and unlock operations, since the
ordering between those is already included in the “happens before” ordering. * — end note* ]

For an atomic operation *B* that reads the value of an atomic object *M*,
if there is a memory_order_seq_cst fence *X* sequenced before *B*,
then *B* observes either the last memory_order_seq_cst modification of
*M* preceding *X* in the total order *S* or a later modification of
*M* in its modification order.

For atomic operations *A* and *B* on an atomic object *M*, where
*A* modifies *M* and *B* takes its value, if there is a
memory_order_seq_cst fence *X* such that *A* is sequenced before
*X* and *B* follows *X* in *S*, then *B* observes
either the effects of *A* or a later modification of *M* in its
modification order.

For atomic operations *A* and *B* on an atomic object *M*, where
*A* modifies *M* and *B* takes its value, if there are
memory_order_seq_cst fences *X* and *Y* such that *A* is
sequenced before *X*, *Y* is sequenced before *B*, and *X*
precedes *Y* in *S*, then *B* observes either the effects of
*A* or a later modification of *M* in its modification order.
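The two-fence rule above can be sketched non-normatively as follows: the writer's seq_cst fence *X* follows the store *A*, the reader's seq_cst fence *Y* precedes the load *B*, and once the reader has seen the relaxed flag, *X* precedes *Y* in *S*, so *B* must observe *A* (here the only modification of *M*). The helper is hypothetical:

```
#include <atomic>
#include <thread>

// Sketch: A (relaxed store to payload) is sequenced before fence X;
// fence Y is sequenced before B (relaxed load of payload). Seeing the
// flag forces X to precede Y in S, so B observes A's value.
int fence_message_pass() {
    std::atomic<int> payload{0};   // the atomic object M
    std::atomic<bool> flag{false};
    std::thread writer([&] {
        payload.store(7, std::memory_order_relaxed);          // A modifies M
        std::atomic_thread_fence(std::memory_order_seq_cst);  // fence X
        flag.store(true, std::memory_order_relaxed);
    });
    while (!flag.load(std::memory_order_relaxed))
        ;                                                     // spin until the flag is seen
    std::atomic_thread_fence(std::memory_order_seq_cst);      // fence Y
    int seen = payload.load(std::memory_order_relaxed);       // B takes M's value
    writer.join();
    return seen;
}
```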

For atomic operations *A* and *B* on an atomic object *M*, if there
are memory_order_seq_cst fences *X* and *Y* such that *A* is
sequenced before *X*, *Y* is sequenced before *B*, and *X*
precedes *Y* in *S*, then *B* occurs later than *A* in the
modification order of *M*.

[ *Note:* memory_order_seq_cst ensures sequential consistency only for a
program that is free of data races and uses exclusively memory_order_seq_cst
operations. Any use of weaker ordering will invalidate this guarantee unless extreme
care is used. In particular, memory_order_seq_cst fences ensure a total order
only for the fences themselves. Fences cannot, in general, be used to restore sequential
consistency for atomic operations with weaker ordering specifications. * — end note* ]

An atomic store shall only store a value that has been computed from constants and program input values by a finite sequence of program evaluations, such that each evaluation observes the values of variables as computed by the last prior assignment in the sequence. The ordering of evaluations in this sequence shall be such that:

if an evaluation *B* observes a value computed by *A* in a different thread, then *B* does not happen before *A*, and

if an evaluation *A* is included in the sequence, then every evaluation that assigns to the same variable and happens before *A* is included.

[ *Note:* The second requirement disallows “out-of-thin-air” or “speculative” stores of atomics when relaxed atomics are used. Since unordered operations are involved, evaluations may appear in this sequence out of thread order. For example, with x and y initially zero,

```
// Thread 1:
r1 = y.load(memory_order_relaxed);
x.store(r1, memory_order_relaxed);
```

```
// Thread 2:
r2 = x.load(memory_order_relaxed);
y.store(42, memory_order_relaxed);
```

is allowed to produce r1 = r2 = 42. The sequence of evaluations justifying this consists of:

```
y.store(42, memory_order_relaxed);
r1 = y.load(memory_order_relaxed);
x.store(r1, memory_order_relaxed);
r2 = x.load(memory_order_relaxed);
```

On the other hand,

```
// Thread 1:
r1 = y.load(memory_order_relaxed);
x.store(r1, memory_order_relaxed);
```

```
// Thread 2:
r2 = x.load(memory_order_relaxed);
y.store(r2, memory_order_relaxed);
```

may not produce r1 = r2 = 42, since there is no sequence of evaluations that
results in the computation of 42. In the absence of “relaxed” operations and
read-modify-write operations with weaker than memory_order_acq_rel ordering, the
second requirement has no impact.* — end note* ]

[ *Note:* The requirements do allow r1 == r2 == 42 in the following example,
with x and y initially zero:

```
// Thread 1:
r1 = x.load(memory_order_relaxed);
if (r1 == 42) y.store(r1, memory_order_relaxed);
```

```
// Thread 2:
r2 = y.load(memory_order_relaxed);
if (r2 == 42) x.store(42, memory_order_relaxed);
```

However, implementations should not allow such behavior.* — end note* ]

Atomic read-modify-write operations shall always read the last value (in the modification order) written before the write associated with the read-modify-write operation.
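A non-normative consequence of this rule is that concurrent read-modify-write operations never observe a stale value: each fetch_add reads the value immediately preceding its own write in the modification order, so a shared counter hands out distinct, gap-free values. The ticket-dispenser helper below is illustrative only:

```
#include <atomic>
#include <set>
#include <thread>
#include <vector>

// Sketch: each fetch_add reads the last value in the modification order
// before its own write, so every returned ticket number is distinct.
std::set<int> take_tickets(int threads, int per_thread) {
    std::atomic<int> next{0};
    std::vector<std::set<int>> local(threads);
    std::vector<std::thread> pool;
    for (int t = 0; t < threads; ++t)
        pool.emplace_back([&, t] {
            for (int i = 0; i < per_thread; ++i)
                local[t].insert(next.fetch_add(1, std::memory_order_relaxed));
        });
    for (auto& th : pool)
        th.join();
    std::set<int> all;                       // union of all tickets handed out
    for (auto& s : local)
        all.insert(s.begin(), s.end());
    return all;
}
```

If an RMW could read an out-of-date value, two threads could receive the same ticket; the rule above forbids that even with relaxed ordering.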

Implementations should make atomic stores visible to atomic loads within a reasonable amount of time.

template &lt;class T&gt; T kill_dependency(T y) noexcept;

*Effects:* The argument does not carry a dependency to the return
value ([intro.multithread]).

*Returns:* y.
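A non-normative usage sketch: kill_dependency returns its argument unchanged; its only effect is to terminate the dependency chain that memory_order_consume would otherwise propagate through the value:

```
#include <atomic>

// Sketch: the returned value equals the argument; only the
// consume-ordering dependency chain is broken.
int strip_dependency(int v) {
    return std::kill_dependency(v);
}
```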