namespace std {
  template <class T> struct atomic {
    using value_type = T;

    static constexpr bool is_always_lock_free = implementation-defined;
    bool is_lock_free() const volatile noexcept;
    bool is_lock_free() const noexcept;
    void store(T, memory_order = memory_order_seq_cst) volatile noexcept;
    void store(T, memory_order = memory_order_seq_cst) noexcept;
    T load(memory_order = memory_order_seq_cst) const volatile noexcept;
    T load(memory_order = memory_order_seq_cst) const noexcept;
    operator T() const volatile noexcept;
    operator T() const noexcept;
    T exchange(T, memory_order = memory_order_seq_cst) volatile noexcept;
    T exchange(T, memory_order = memory_order_seq_cst) noexcept;
    bool compare_exchange_weak(T&, T, memory_order, memory_order) volatile noexcept;
    bool compare_exchange_weak(T&, T, memory_order, memory_order) noexcept;
    bool compare_exchange_strong(T&, T, memory_order, memory_order) volatile noexcept;
    bool compare_exchange_strong(T&, T, memory_order, memory_order) noexcept;
    bool compare_exchange_weak(T&, T, memory_order = memory_order_seq_cst) volatile noexcept;
    bool compare_exchange_weak(T&, T, memory_order = memory_order_seq_cst) noexcept;
    bool compare_exchange_strong(T&, T, memory_order = memory_order_seq_cst) volatile noexcept;
    bool compare_exchange_strong(T&, T, memory_order = memory_order_seq_cst) noexcept;

    atomic() noexcept = default;
    constexpr atomic(T) noexcept;
    atomic(const atomic&) = delete;
    atomic& operator=(const atomic&) = delete;
    atomic& operator=(const atomic&) volatile = delete;
    T operator=(T) volatile noexcept;
    T operator=(T) noexcept;
  };
}
The template argument for T shall be trivially copyable ([basic.types]). [ Note: Type arguments that are not also statically initializable may be difficult to use. — end note ]
[ Note: The representation of an atomic specialization need not have the same size as its corresponding argument type. Specializations should have the same size whenever possible, as this reduces the effort required to port existing code. — end note ]
[ Note: Many operations are volatile-qualified. The “volatile as device register” semantics have not changed in the standard. This qualification means that volatility is preserved when applying these operations to volatile objects. It does not mean that operations on non-volatile objects become volatile. — end note ]
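The following sketch is illustrative only and not part of the specification; it shows what the volatile-qualified overloads permit: the same member functions may be invoked on volatile and non-volatile atomic objects, without adding device-register semantics to the latter.

#include <atomic>

std::atomic<int> a{0};            // non-volatile atomic object
volatile std::atomic<int> va{0};  // volatile atomic object (e.g., shared with a signal handler)

void demo() {
  a.store(1);        // selects the non-volatile overload
  va.store(1);       // selects the volatile-qualified overload
  int x = va.load(); // volatility of the object is preserved by the operation
  (void)x;
}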
atomic() noexcept = default;
Effects: Leaves the atomic object in an uninitialized state. [ Note: These semantics ensure compatibility with C. — end note ]
constexpr atomic(T desired) noexcept;
Effects: Initializes the object with the value desired. Initialization is not an atomic operation ([intro.multithread]). [ Note: It is possible to have an access to an atomic object A race with its construction, for example by communicating the address of the just-constructed object A to another thread via memory_order_relaxed operations on a suitable atomic pointer variable, and then immediately accessing A in the receiving thread. This results in undefined behavior. — end note ]
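The hazard in the note above can be contrasted with a safe publication pattern. The following sketch is illustrative only and not normative: publishing the address with a release store and reading it with an acquire load makes the construction happen before the access in the receiving thread, avoiding the race that relaxed publication would create.

#include <atomic>
#include <thread>

std::atomic<std::atomic<int>*> ptr{nullptr};   // publishes the address of the new object

void producer() {
  auto* a = new std::atomic<int>(42);          // construction itself is not an atomic operation
  ptr.store(a, std::memory_order_release);     // release: construction happens before any acquire load that sees a
}

void consumer() {
  std::atomic<int>* p = nullptr;
  while ((p = ptr.load(std::memory_order_acquire)) == nullptr) {
    // spin until the address has been published
  }
  int v = p->load(std::memory_order_relaxed);  // safe: *p is fully constructed here
  (void)v;
}

int main() {
  std::thread t1(producer), t2(consumer);
  t1.join();
  t2.join();
  delete ptr.load();                           // reclaim the published object
}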
#define ATOMIC_VAR_INIT(value) see below
The macro expands to a token sequence suitable for constant initialization of an atomic variable of static storage duration of a type that is initialization-compatible with value. [ Note: This operation may need to initialize locks. — end note ] Concurrent access to the variable being initialized, even via an atomic operation, constitutes a data race. [ Example:
atomic<int> v = ATOMIC_VAR_INIT(5);
— end example ]
static constexpr bool is_always_lock_free = implementation-defined;
The static data member is_always_lock_free is true if the atomic type's operations are always lock-free, and false otherwise. [ Note: The value of is_always_lock_free is consistent with the value of the corresponding ATOMIC_..._LOCK_FREE macro, if defined. — end note ]
bool is_lock_free() const volatile noexcept;
bool is_lock_free() const noexcept;
Returns: true if the object's operations are lock-free, false otherwise. [ Note: The return value of the is_lock_free member function is consistent with the value of is_always_lock_free for the same type. — end note ]
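As a non-normative illustration, both queries can be used together; whether any particular specialization is lock-free is implementation-defined, and the Pair type below is a hypothetical trivially copyable type, not something defined by this clause.

#include <atomic>
#include <cstdio>

struct Pair { int a; int b; };   // hypothetical trivially copyable type

int main() {
  // Compile-time property of the type.
  std::printf("atomic<int> always lock-free: %d\n",
              static_cast<int>(std::atomic<int>::is_always_lock_free));

  // Run-time property of a particular object.
  std::atomic<Pair> p{{1, 2}};
  std::printf("atomic<Pair> lock-free: %d\n", static_cast<int>(p.is_lock_free()));
}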
void store(T desired, memory_order order = memory_order_seq_cst) volatile noexcept;
void store(T desired, memory_order order = memory_order_seq_cst) noexcept;
Requires: The order argument shall not be memory_order_consume, memory_order_acquire, nor memory_order_acq_rel.
Effects: Atomically replaces the value pointed to by this with the value of desired. Memory is affected according to the value of order.
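A brief, non-normative sketch of the restriction above: stores may use relaxed, release, or sequentially consistent ordering; the acquire-flavoured orderings apply only to loads and read-modify-write operations.

#include <atomic>

std::atomic<int> data{0};
std::atomic<bool> ready{false};

void publish() {
  data.store(42, std::memory_order_relaxed);       // permitted
  ready.store(true, std::memory_order_release);    // permitted; pairs with an acquire load elsewhere
  // ready.store(true, std::memory_order_acquire); // violates the Requires clause above
}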
T operator=(T desired) volatile noexcept;
T operator=(T desired) noexcept;
Effects: Equivalent to store(desired).
Returns: desired.
T load(memory_order order = memory_order_seq_cst) const volatile noexcept;
T load(memory_order order = memory_order_seq_cst) const noexcept;
Requires: The order argument shall not be memory_order_release nor memory_order_acq_rel.
Effects: Memory is affected according to the value of order.
Returns: Atomically returns the value pointed to by this.
operator T() const volatile noexcept;
operator T() const noexcept;
Effects: Equivalent to: return load();
T exchange(T desired, memory_order order = memory_order_seq_cst) volatile noexcept;
T exchange(T desired, memory_order order = memory_order_seq_cst) noexcept;
Effects: Atomically replaces the value pointed to by this with desired. Memory is affected according to the value of order. These operations are atomic read-modify-write operations ([intro.multithread]).
Returns: Atomically returns the value pointed to by this immediately before the effects.
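A non-normative example of exchange used as a read-modify-write operation: a minimal test-and-set spin lock (std::atomic_flag would normally be preferred; the SpinLock class here is purely illustrative).

#include <atomic>

class SpinLock {                                   // illustrative helper, not part of the standard
  std::atomic<bool> locked{false};
public:
  void lock() {
    // exchange atomically sets the flag and returns its previous value.
    while (locked.exchange(true, std::memory_order_acquire)) {
      // spin while the previous value was true (another thread holds the lock)
    }
  }
  void unlock() {
    locked.store(false, std::memory_order_release);
  }
};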
bool compare_exchange_weak(T& expected, T desired,
memory_order success, memory_order failure) volatile noexcept;
bool compare_exchange_weak(T& expected, T desired,
memory_order success, memory_order failure) noexcept;
bool compare_exchange_strong(T& expected, T desired,
memory_order success, memory_order failure) volatile noexcept;
bool compare_exchange_strong(T& expected, T desired,
memory_order success, memory_order failure) noexcept;
bool compare_exchange_weak(T& expected, T desired,
memory_order order = memory_order_seq_cst) volatile noexcept;
bool compare_exchange_weak(T& expected, T desired,
memory_order order = memory_order_seq_cst) noexcept;
bool compare_exchange_strong(T& expected, T desired,
memory_order order = memory_order_seq_cst) volatile noexcept;
bool compare_exchange_strong(T& expected, T desired,
memory_order order = memory_order_seq_cst) noexcept;
Effects: Retrieves the value in expected. It then atomically compares the contents of the memory pointed to by this for equality with that previously retrieved from expected, and if true, replaces the contents of the memory pointed to by this with that in desired. If and only if the comparison is true, memory is affected according to the value of success, and if the comparison is false, memory is affected according to the value of failure. When only one memory_order argument is supplied, the value of success is order, and the value of failure is order except that a value of memory_order_acq_rel shall be replaced by the value memory_order_acquire and a value of memory_order_release shall be replaced by the value memory_order_relaxed. If and only if the comparison is false then, after the atomic operation, the contents of the memory in expected are replaced by the value read from the memory pointed to by this during the atomic comparison. If the operation returns true, these operations are atomic read-modify-write operations ([intro.multithread]) on the memory pointed to by this. Otherwise, these operations are atomic load operations on that memory.
Returns: The result of the comparison.
[ Note: For example, the effect of compare_exchange_strong is
if (memcmp(this, &expected, sizeof(*this)) == 0)
  memcpy(this, &desired, sizeof(*this));
else
  memcpy(&expected, this, sizeof(*this));
— end note ] [ Example: The expected use of the compare-and-exchange operations is as follows. The compare-and-exchange operations will update expected when another iteration of the loop is needed.
expected = current.load();
do {
  desired = function(expected);
} while (!current.compare_exchange_weak(expected, desired));
— end example ] [ Example: Because the expected value is updated only on failure, code releasing the memory containing the expected value on success will work. E.g. list head insertion will act atomically and would not introduce a data race in the following code:
do {
  p->next = head;                                   // make new list node point to the current head
} while (!head.compare_exchange_weak(p->next, p));  // try to insert
— end example ]
Implementations should ensure that weak compare-and-exchange operations do not consistently return false unless either the atomic object has a value different from expected or there are concurrent modifications to the atomic object.
Remarks: A weak compare-and-exchange operation may fail spuriously. That is, even when the contents of memory referred to by expected and this are equal, it may return false and store back to expected the same memory contents that were originally there. [ Note: This spurious failure enables implementation of compare-and-exchange on a broader class of machines, e.g., load-locked store-conditional machines. A consequence of spurious failure is that nearly all uses of weak compare-and-exchange will be in a loop.
When a compare-and-exchange is in a loop, the weak version will yield better performance on some platforms. When a weak compare-and-exchange would require a loop and a strong one would not, the strong one is preferable. — end note ]
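As a non-normative sketch of this guidance, a one-shot claim is a case where the strong form is preferable: no loop is required, so the spurious failures tolerated by the weak form would be pure overhead. The owner variable and try_claim function are hypothetical.

#include <atomic>

std::atomic<int> owner{0};        // 0 means "unclaimed" in this hypothetical protocol

bool try_claim(int id) {
  int expected = 0;
  // Single attempt: compare_exchange_strong cannot fail spuriously,
  // so a false result really means another thread already claimed ownership.
  return owner.compare_exchange_strong(expected, id,
                                       std::memory_order_acq_rel,
                                       std::memory_order_relaxed);
}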
[ Note: The memcpy and memcmp semantics of the compare-and-exchange operations may result in failed comparisons for values that compare equal with operator== if the underlying type has padding bits, trap bits, or alternate representations of the same value. Thus, compare_exchange_strong should be used with extreme care. On the other hand, compare_exchange_weak should converge rapidly. — end note ]
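The padding hazard can be illustrated with a small, non-normative sketch; the Padded type is hypothetical, and whether padding is actually present is implementation-defined.

#include <atomic>
#include <cstdint>

struct Padded {                   // hypothetical type; most ABIs insert 3 padding bytes after a
  std::uint8_t  a;
  std::uint32_t b;
};

std::atomic<Padded> ap{Padded{1, 2}};

bool try_update(Padded desired) {
  Padded expected{1, 2};
  // May fail even if the members of expected match those stored in ap,
  // because the byte-wise comparison also inspects the padding bytes.
  return ap.compare_exchange_strong(expected, desired);
}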
There are specializations of the atomic template for the integral types char, signed char, unsigned char, short, unsigned short, int, unsigned int, long, unsigned long, long long, unsigned long long, char16_t, char32_t, wchar_t, and any other types needed by the typedefs in the header <cstdint>. For each such integral type integral, the specialization atomic<integral> provides additional atomic operations appropriate to integral types. [ Note: For the specialization atomic<bool>, see [atomics.types.generic]. — end note ]
namespace std {
  template <> struct atomic<integral> {
    using value_type = integral;
    using difference_type = value_type;

    static constexpr bool is_always_lock_free = implementation-defined;
    bool is_lock_free() const volatile noexcept;
    bool is_lock_free() const noexcept;
    void store(integral, memory_order = memory_order_seq_cst) volatile noexcept;
    void store(integral, memory_order = memory_order_seq_cst) noexcept;
    integral load(memory_order = memory_order_seq_cst) const volatile noexcept;
    integral load(memory_order = memory_order_seq_cst) const noexcept;
    operator integral() const volatile noexcept;
    operator integral() const noexcept;
    integral exchange(integral, memory_order = memory_order_seq_cst) volatile noexcept;
    integral exchange(integral, memory_order = memory_order_seq_cst) noexcept;
    bool compare_exchange_weak(integral&, integral, memory_order, memory_order) volatile noexcept;
    bool compare_exchange_weak(integral&, integral, memory_order, memory_order) noexcept;
    bool compare_exchange_strong(integral&, integral, memory_order, memory_order) volatile noexcept;
    bool compare_exchange_strong(integral&, integral, memory_order, memory_order) noexcept;
    bool compare_exchange_weak(integral&, integral, memory_order = memory_order_seq_cst) volatile noexcept;
    bool compare_exchange_weak(integral&, integral, memory_order = memory_order_seq_cst) noexcept;
    bool compare_exchange_strong(integral&, integral, memory_order = memory_order_seq_cst) volatile noexcept;
    bool compare_exchange_strong(integral&, integral, memory_order = memory_order_seq_cst) noexcept;
    integral fetch_add(integral, memory_order = memory_order_seq_cst) volatile noexcept;
    integral fetch_add(integral, memory_order = memory_order_seq_cst) noexcept;
    integral fetch_sub(integral, memory_order = memory_order_seq_cst) volatile noexcept;
    integral fetch_sub(integral, memory_order = memory_order_seq_cst) noexcept;
    integral fetch_and(integral, memory_order = memory_order_seq_cst) volatile noexcept;
    integral fetch_and(integral, memory_order = memory_order_seq_cst) noexcept;
    integral fetch_or(integral, memory_order = memory_order_seq_cst) volatile noexcept;
    integral fetch_or(integral, memory_order = memory_order_seq_cst) noexcept;
    integral fetch_xor(integral, memory_order = memory_order_seq_cst) volatile noexcept;
    integral fetch_xor(integral, memory_order = memory_order_seq_cst) noexcept;

    atomic() noexcept = default;
    constexpr atomic(integral) noexcept;
    atomic(const atomic&) = delete;
    atomic& operator=(const atomic&) = delete;
    atomic& operator=(const atomic&) volatile = delete;
    integral operator=(integral) volatile noexcept;
    integral operator=(integral) noexcept;

    integral operator++(int) volatile noexcept;
    integral operator++(int) noexcept;
    integral operator--(int) volatile noexcept;
    integral operator--(int) noexcept;
    integral operator++() volatile noexcept;
    integral operator++() noexcept;
    integral operator--() volatile noexcept;
    integral operator--() noexcept;
    integral operator+=(integral) volatile noexcept;
    integral operator+=(integral) noexcept;
    integral operator-=(integral) volatile noexcept;
    integral operator-=(integral) noexcept;
    integral operator&=(integral) volatile noexcept;
    integral operator&=(integral) noexcept;
    integral operator|=(integral) volatile noexcept;
    integral operator|=(integral) noexcept;
    integral operator^=(integral) volatile noexcept;
    integral operator^=(integral) noexcept;
  };
}
The atomic integral specializations are standard-layout structs. They each have a trivial default constructor and a trivial destructor.
The following operations perform arithmetic computations. The key, operator, and computation correspondence is:
| key | Op | Computation          | key | Op | Computation          |
| add | +  | addition             | sub | -  | subtraction          |
| or  | |  | bitwise inclusive or | xor | ^  | bitwise exclusive or |
| and | &  | bitwise and          |     |    |                      |
T fetch_key(T operand, memory_order order = memory_order_seq_cst) volatile noexcept;
T fetch_key(T operand, memory_order order = memory_order_seq_cst) noexcept;
Effects: Atomically replaces the value pointed to by this with the result of the computation applied to the value pointed to by this and the given operand. Memory is affected according to the value of order. These operations are atomic read-modify-write operations ([intro.multithread]).
Returns: Atomically, the value pointed to by this immediately before the effects.
Remarks: For signed integer types, arithmetic is defined to use two's complement representation. There are no undefined results.
T operator op=(T operand) volatile noexcept;
T operator op=(T operand) noexcept;
Effects: Equivalent to: return fetch_key(operand) op operand;
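A non-normative example of the integral fetch and compound-assignment operations: the fetch_ functions return the value before the computation, whereas the compound-assignment operators yield the value after it.

#include <atomic>
#include <cassert>

int main() {
  std::atomic<unsigned> counter{0};

  unsigned before = counter.fetch_add(5);   // returns the old value (0)
  assert(before == 0 && counter.load() == 5);

  unsigned after = (counter += 3);          // returns the new value (8)
  assert(after == 8);

  counter.fetch_or(0x10u);                  // bitwise read-modify-write
  assert(counter.load() == 0x18u);
}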
namespace std {
  template <class T> struct atomic<T*> {
    using value_type = T*;
    using difference_type = ptrdiff_t;

    static constexpr bool is_always_lock_free = implementation-defined;
    bool is_lock_free() const volatile noexcept;
    bool is_lock_free() const noexcept;
    void store(T*, memory_order = memory_order_seq_cst) volatile noexcept;
    void store(T*, memory_order = memory_order_seq_cst) noexcept;
    T* load(memory_order = memory_order_seq_cst) const volatile noexcept;
    T* load(memory_order = memory_order_seq_cst) const noexcept;
    operator T*() const volatile noexcept;
    operator T*() const noexcept;
    T* exchange(T*, memory_order = memory_order_seq_cst) volatile noexcept;
    T* exchange(T*, memory_order = memory_order_seq_cst) noexcept;
    bool compare_exchange_weak(T*&, T*, memory_order, memory_order) volatile noexcept;
    bool compare_exchange_weak(T*&, T*, memory_order, memory_order) noexcept;
    bool compare_exchange_strong(T*&, T*, memory_order, memory_order) volatile noexcept;
    bool compare_exchange_strong(T*&, T*, memory_order, memory_order) noexcept;
    bool compare_exchange_weak(T*&, T*, memory_order = memory_order_seq_cst) volatile noexcept;
    bool compare_exchange_weak(T*&, T*, memory_order = memory_order_seq_cst) noexcept;
    bool compare_exchange_strong(T*&, T*, memory_order = memory_order_seq_cst) volatile noexcept;
    bool compare_exchange_strong(T*&, T*, memory_order = memory_order_seq_cst) noexcept;
    T* fetch_add(ptrdiff_t, memory_order = memory_order_seq_cst) volatile noexcept;
    T* fetch_add(ptrdiff_t, memory_order = memory_order_seq_cst) noexcept;
    T* fetch_sub(ptrdiff_t, memory_order = memory_order_seq_cst) volatile noexcept;
    T* fetch_sub(ptrdiff_t, memory_order = memory_order_seq_cst) noexcept;

    atomic() noexcept = default;
    constexpr atomic(T*) noexcept;
    atomic(const atomic&) = delete;
    atomic& operator=(const atomic&) = delete;
    atomic& operator=(const atomic&) volatile = delete;
    T* operator=(T*) volatile noexcept;
    T* operator=(T*) noexcept;

    T* operator++(int) volatile noexcept;
    T* operator++(int) noexcept;
    T* operator--(int) volatile noexcept;
    T* operator--(int) noexcept;
    T* operator++() volatile noexcept;
    T* operator++() noexcept;
    T* operator--() volatile noexcept;
    T* operator--() noexcept;
    T* operator+=(ptrdiff_t) volatile noexcept;
    T* operator+=(ptrdiff_t) noexcept;
    T* operator-=(ptrdiff_t) volatile noexcept;
    T* operator-=(ptrdiff_t) noexcept;
  };
}
There is a partial specialization of the atomic class template for pointers. Specializations of this partial specialization are standard-layout structs. They each have a trivial default constructor and a trivial destructor.
The following operations perform pointer arithmetic. The key, operator, and computation correspondence is:
| key | Op | Computation | key | Op | Computation |
| add | +  | addition    | sub | -  | subtraction |
T* fetch_key(ptrdiff_t operand, memory_order order = memory_order_seq_cst) volatile noexcept;
T* fetch_key(ptrdiff_t operand, memory_order order = memory_order_seq_cst) noexcept;
Requires: T shall be an object type, otherwise the program is ill-formed. [ Note: Pointer arithmetic on void* or function pointers is ill-formed. — end note ]
Effects: Atomically replaces the value pointed to by this with the result of the computation applied to the value pointed to by this and the given operand. Memory is affected according to the value of order. These operations are atomic read-modify-write operations ([intro.multithread]).
Returns: Atomically, the value pointed to by this immediately before the effects.
Remarks: The result may be an undefined address, but the operations otherwise have no undefined behavior.
T* operator op=(ptrdiff_t operand) volatile noexcept;
T* operator op=(ptrdiff_t operand) noexcept;
Effects: Equivalent to: return fetch_key(operand) op operand;
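A non-normative example of atomic pointer arithmetic: arithmetic is in units of the pointee type, fetch_add returns the previous pointer, and the compound-assignment operators return the new one. The arr and cursor names are hypothetical.

#include <atomic>
#include <cassert>

int arr[8] = {};                   // hypothetical array the pointer walks over
std::atomic<int*> cursor{arr};

int main() {
  int* prev = cursor.fetch_add(2);             // old value; cursor now points at arr + 2
  assert(prev == arr && cursor.load() == arr + 2);

  int* now = (cursor -= 1);                    // compound assignment yields the new value
  assert(now == arr + 1);
}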
T operator++(int) volatile noexcept;
T operator++(int) noexcept;
Effects: Equivalent to: return fetch_add(1);
T operator--(int) volatile noexcept;
T operator--(int) noexcept;
Effects: Equivalent to: return fetch_sub(1);
T operator++() volatile noexcept;
T operator++() noexcept;
Effects: Equivalent to: return fetch_add(1) + 1;
T operator--() volatile noexcept;
T operator--() noexcept;
Effects: Equivalent to: return fetch_sub(1) - 1;
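A non-normative illustration of the increment operators on an atomic integer: the postfix form yields the old value, the prefix form the new one.

#include <atomic>
#include <cassert>

int main() {
  std::atomic<int> n{0};
  int old_value = n++;    // fetch_add(1): yields 0, n becomes 1
  int new_value = ++n;    // fetch_add(1) + 1: yields 2, n becomes 2
  assert(old_value == 0 && new_value == 2 && n.load() == 2);
}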