FWIW, the easiest way to understand atomic operations is to think about interrupts.
An atomic operation cannot be interrupted, generally because it is a single CPU instruction. Some processors support atomic read/modify/write operations, while others limit you to atomic reads and writes. So the typical use for an atomic is as a control flag, either between threads, or between foreground code and an interrupt handler.
Atomics tend to work best when there is one writer and one or more readers. Think about it this way: imagine I have a UI module that sets boolean option flags. With just one writer, there is no reason to lock. The writer can assemble all the boolean flags into a single value and then write it to the atomic. The flags always get updated as a group, so they stay in sync with each other.
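A minimal sketch of the single-writer pattern, assuming C++ `std::atomic`; the flag names are made up for illustration:

```cpp
#include <atomic>
#include <cstdint>

// Hypothetical option flags -- the names are illustrative only.
constexpr std::uint32_t OPT_DARK_MODE = 1u << 0;
constexpr std::uint32_t OPT_AUTOSAVE  = 1u << 1;
constexpr std::uint32_t OPT_VERBOSE   = 1u << 2;

std::atomic<std::uint32_t> g_options{0};

// The single writer (e.g. the UI module) assembles all the flags into
// one local value, then publishes them in a single atomic store, so
// readers can never observe a half-updated set.
void publish_options(bool dark, bool autosave, bool verbose) {
    std::uint32_t flags = 0;
    if (dark)     flags |= OPT_DARK_MODE;
    if (autosave) flags |= OPT_AUTOSAVE;
    if (verbose)  flags |= OPT_VERBOSE;
    g_options.store(flags, std::memory_order_release);
}
```

The key point is that all the bit twiddling happens on a local variable; the atomic only ever sees one store with the complete, consistent group of flags.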
On the other end, say I have a bunch of threads that periodically read the atomic. Their local copy might go stale between polls, but the read will never be corrupt; it will always see the flags exactly as last updated.
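The reader side can be sketched the same way (again assuming `std::atomic` and illustrative flag names):

```cpp
#include <atomic>
#include <cstdint>

// Illustrative flag bits and shared atomic; the names are assumptions.
constexpr std::uint32_t OPT_PAUSED  = 1u << 0;
constexpr std::uint32_t OPT_LOGGING = 1u << 1;

std::atomic<std::uint32_t> g_flags{0};

// Each reader takes one atomic snapshot per poll. The local copy may go
// stale before the next poll, but it can never be torn: it is exactly
// the flag set the writer last stored, as a group.
std::uint32_t poll_flags() {
    return g_flags.load(std::memory_order_acquire);
}

bool is_paused(std::uint32_t snapshot)  { return snapshot & OPT_PAUSED; }
bool is_logging(std::uint32_t snapshot) { return snapshot & OPT_LOGGING; }
```

Note that a reader tests all its flags against the one snapshot, not against repeated loads, so the flags it acts on are mutually consistent.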
This can have very low processing overhead, and wrapping things up like Vinn suggests doesn’t really have to cost you anything either. A configuration class could hide the atomic and the bit flags internally, and judicious use of inline functions could keep it screaming along…
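A rough sketch of what such a wrapper might look like; every name here is hypothetical, and the accessors are small enough that a compiler will normally inline them:

```cpp
#include <atomic>
#include <cstdint>

// Hypothetical configuration class hiding the atomic and the bit
// packing behind a plain boolean interface.
class Config {
public:
    // Single writer: pack all flags locally, publish in one store.
    void set(bool logging, bool tracing) {
        std::uint32_t f = 0;
        if (logging) f |= kLogging;
        if (tracing) f |= kTracing;
        flags_.store(f, std::memory_order_release);
    }

    // Readers: each call tests a bit in a fresh atomic snapshot.
    bool logging() const { return snapshot() & kLogging; }
    bool tracing() const { return snapshot() & kTracing; }

private:
    static constexpr std::uint32_t kLogging = 1u << 0;
    static constexpr std::uint32_t kTracing = 1u << 1;

    std::uint32_t snapshot() const {
        return flags_.load(std::memory_order_acquire);
    }

    std::atomic<std::uint32_t> flags_{0};
};
```

Callers just ask `cfg.logging()` and never know an atomic or a bit mask is involved.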
All that said, it always worries me when I hear concerns about ‘too much locking and unlocking’. If a bunch of threads are contending heavily for shared resources, the stage is set for performance and threading problems. Race conditions can creep in, and you are already ensuring that everyone bottlenecks on that shared information. If such a situation can’t be avoided, atomics can be a performance boost. But, typically, the better solution is to rethink how the work is divided up and how and why information is shared.