Atomic int and bit operations


#1

I need to store a lot of bool options, and those options are accessed by different threads. I wanted to implement this without locking. I was wondering if bit operations on an Atomic would be possible; I saw some articles on the web saying that there are interfaces for this, and I was wondering if it’s possible with JUCE. I don’t see any of the bit operators overloaded in the Atomic class, so I was wondering if I can implement them myself using the public member of the Atomic class:

    /** The raw value that this class operates on.
        This is exposed publically in case you need to manipulate it directly
        for performance reasons.
    */
    volatile Type value;

I was also wondering: will it be faster than normal bit operations on a normal int with locking? I’m aware that this might be a weird question, but I’m trying to wrap my head around the Atomic stuff and I was just wondering whether it can be used for such purposes efficiently.
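Something like this is roughly what I had in mind, purely as a sketch: it assumes JUCE’s Atomic<int> with its get() and compareAndSetBool() members, and setBits / clearBits are just names I made up for illustration.

    #include "JuceHeader.h"  // assuming the usual project-generated JUCE header

    // Sketch only: atomically OR some bits into an Atomic<int> without locking,
    // by retrying a compare-and-set until no other thread has changed the value
    // in between. (setBits / clearBits are invented names, not part of Atomic.)
    inline void setBits (juce::Atomic<int>& flags, const int bitsToSet)
    {
        for (;;)
        {
            const int oldValue = flags.get();
            const int newValue = oldValue | bitsToSet;

            // compareAndSetBool() only stores newValue if the value is still
            // oldValue, so if another thread got in first, we just try again.
            if (flags.compareAndSetBool (newValue, oldValue))
                break;
        }
    }

    inline void clearBits (juce::Atomic<int>& flags, const int bitsToClear)
    {
        for (;;)
        {
            const int oldValue = flags.get();
            const int newValue = oldValue & ~bitsToClear;

            if (flags.compareAndSetBool (newValue, oldValue))
                break;
        }
    }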


#2

If you’re having performance problems because your app spends too much time checking booleans... well, hmm, you might have bigger issues than Atomics can fix!


#3

I don’t have a problem yet (I’m trying to implement this the best way possible). I just ended up with loads of bools in different places, and I’d rather have a clean interface with bit options and learn something new in the process.


#4

You should be encapsulating all your queries to settings in member functions anyway; that’s the cleanest interface. Checking a data member destroys the principle of information hiding - it makes your calling code dependent on the method used to store the information (which is bad).


#5

This is what I tried, but I ended up with loads of get/set methods per class that are just one line of code each (two with locking). I thought it would be cleaner to have one set/get that passes a single int, and then I just extract the bit I need at any place in the code of any class. The bit extraction is simple and can be wrapped in some clean macros, roughly like the sketch below.
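Just as a sketch of what I’m picturing (the flag names here are invented):

    // One int holds all the options; each option is a single bit, and
    // extraction is just a mask test. (Flag names are invented examples.)
    enum SettingsFlags
    {
        optionLoop      = 1 << 0,
        optionShuffle   = 1 << 1,
        optionNormalise = 1 << 2
    };

    inline bool hasFlag (const int flags, const int flagToTest)
    {
        return (flags & flagToTest) != 0;
    }

    // e.g.  if (hasFlag (currentSettings, optionShuffle)) ...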

Like I wrote, I just want to learn something, that’s all.


#6

FWIW, the easiest way to understand atomic operations is to think about interrupts.

An atomic operation cannot be interrupted, generally because it is a single CPU instruction. Now, some processors support read/modify/write as an atomic operation, but you can be limited to read/write. So the typical use for an atomic is as a control flag, either between threads, or between foreground and an interrupt handler.

Atomics tend to work best when there is one writer and one or more readers. Think about it this way: imagine I have a UI module that sets boolean option flags. If I have just one writer, there is no reason to lock. The writer can assemble all the boolean flags into a single value and then write it to the atomic. The flags will always get updated as a group, so the values stay in sync with each other.

On the other end, say I have a bunch of threads that periodically read the atomic. Their local copy might get stale between polls, but the read will never be corrupt. It will get the flags as last updated.
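As a rough sketch of that pattern (the flag names are invented, and I’m assuming JUCE’s Atomic<int> with its get() and set() members):

    #include "JuceHeader.h"  // assuming the usual project-generated JUCE header

    // Invented flag bits, just for illustration.
    enum { flagPlaying = 1 << 0, flagMuted = 1 << 1, flagDirty = 1 << 2 };

    juce::Atomic<int> sharedFlags;  // the single shared value

    // Writer side (e.g. the UI/message thread): assemble all the booleans into
    // one int and publish them in a single atomic store, so readers always see
    // a consistent group of flags.
    void publishFlags (bool playing, bool muted, bool dirty)
    {
        int packed = 0;
        if (playing)  packed |= flagPlaying;
        if (muted)    packed |= flagMuted;
        if (dirty)    packed |= flagDirty;

        sharedFlags.set (packed);  // one atomic write, no lock needed
    }

    // Reader side (any other thread): one atomic read gives the flags as last
    // published; the local copy may go stale, but it is never corrupt.
    bool isCurrentlyMuted()
    {
        return (sharedFlags.get() & flagMuted) != 0;
    }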

This can have very low processing overhead, but wrapping things up like Vinn suggests doesn’t really have to cost you anything either. A configuration class could hide the use of the Atomic and the bit flags internally. Judicious use of inline functions could keep it screaming along…
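For example, something along these lines, purely as a sketch (class and method names are invented, again assuming JUCE’s Atomic<int>):

    #include "JuceHeader.h"  // assuming the usual project-generated JUCE header

    // Sketch of a configuration class that hides the Atomic and the bit masks
    // behind small inline member functions (all names invented here).
    class OptionFlags
    {
    public:
        enum Option { loop = 1 << 0, shuffle = 1 << 1, normalise = 1 << 2 };

        bool isSet (Option o) const            { return (flags.get() & o) != 0; }

        void set (Option o, bool shouldBeOn)
        {
            // Compare-and-set retry loop so concurrent writers cannot lose updates.
            for (;;)
            {
                const int oldValue = flags.get();
                const int newValue = shouldBeOn ? (oldValue | o) : (oldValue & ~o);

                if (flags.compareAndSetBool (newValue, oldValue))
                    return;
            }
        }

    private:
        juce::Atomic<int> flags;  // callers never see how the options are stored
    };

The calling code only ever sees isSet() and set(), so the storage could later be swapped for a lock-protected int (or anything else) without touching any of the callers.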

All this said, it always worries me when I hear concerns about ‘too much locking and unlocking’. If you have a bunch of threads that are contending for certain shared resources a lot, then the stage is set for performance and threading problems. Things like race conditions can occur, and you are already ensuring that everyone is going to bottleneck getting certain information. If such a situation can’t be avoided, atomics can be a performance boost. But, typically, the better solution is to rethink how work is divided up and how and why information is shared.