Using a SpinLock when accessing a vector

In our system we have a vector that can be modified and read on the message thread, and is read-only on the audio thread.
The vector is stored in a shared_ptr. In the getData function, a SpinLock is taken before copying out the shared_ptr, like this:

SpinLock spinLock;
std::shared_ptr<DataType> originalData;

std::shared_ptr<DataType> getData() const
{
    // Copy the shared_ptr while holding the lock, so updateData()
    // can't swap it out from under us mid-copy
    SpinLock::ScopedLockType lock (spinLock);
    std::shared_ptr<DataType> vectorCopy = originalData;
    return vectorCopy;
}

void updateData()
{
    std::shared_ptr<DataType> newData = generateNewData();
    {
        SpinLock::ScopedLockType lock (spinLock);
        std::swap (originalData, newData);
    }
    // newData now points at the old vector; that reference is dropped here
}
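
For context, the audio thread would grab the data once at the top of each block, roughly like this (the processBlock below is just illustrative, not our actual code):

void processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer&) override
{
    // Take a single reference for the whole block; getData() only holds
    // the spin lock for the duration of the shared_ptr copy
    auto data = getData();

    if (data != nullptr)
    {
        // ... read from *data while rendering into buffer ...
    }
}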

Given that getData() can be called on both the audio thread and the message thread, is using a spinlock like this an acceptable way to prevent a data race without risking audio dropouts? And is there any way to measure the performance cost of those locks?

If you’re only holding a shared_ptr copy under the spin lock, the performance cost would probably be pretty low: at worst the audio thread would wait for the duration of the pointer swap, which is probably fine even under heavy contention if you’re only grabbing it once per buffer.
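
If you want actual numbers, one crude way to measure it is to time the lock acquisition on the audio thread and keep the worst case seen. This is just a sketch with names I made up (getDataTimed, worstLockWaitNanos), and std::chrono calls on the audio thread aren’t strictly real-time safe, so treat it as debug-build instrumentation:

#include <atomic>
#include <chrono>
#include <cstdint>

// Hypothetical instrumentation: record the longest time the audio thread has
// spent waiting for the lock; the message thread can read and log it later.
mutable std::atomic<std::int64_t> worstLockWaitNanos { 0 };

std::shared_ptr<DataType> getDataTimed() const
{
    const auto start = std::chrono::steady_clock::now();
    SpinLock::ScopedLockType lock (spinLock);
    const std::int64_t waited = std::chrono::duration_cast<std::chrono::nanoseconds>
                                    (std::chrono::steady_clock::now() - start).count();

    // Atomically update the running maximum
    auto previous = worstLockWaitNanos.load();
    while (waited > previous
            && ! worstLockWaitNanos.compare_exchange_weak (previous, waited))
        {}

    return originalData;
}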

Having said that, you have two other problems there:

  1. Your lock isn’t really protecting anything that newData points to, so you need to make sure that generateNewData() creates a deep copy of all the data, and not something that contains pointers/references that will create more data races.

  2. When the audio thread finishes using the object it received from getData(), it will decrement the ref count, and if by then the message thread has already swapped in new data, that ref count will drop to 0 and the deallocation will happen on the audio thread (one common workaround is sketched below).
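
One way around that second point, sketched here with made-up names (ReleasePool, poolLock) and the DataType from your snippet, is the usual “release pool” idea: every shared_ptr that might end up on the audio thread is also kept alive in a container owned by the message thread, and a timer discards entries once the pool is the only remaining owner, so the audio thread can never drop the last reference.

#include <algorithm>
#include <memory>
#include <vector>

class ReleasePool : private juce::Timer
{
public:
    ReleasePool()            { startTimer (1000); }
    ~ReleasePool() override  { stopTimer(); }

    // Call this for every buffer that will ever be handed to the audio thread
    void add (std::shared_ptr<DataType> ptr)
    {
        const juce::ScopedLock sl (poolLock);
        pool.push_back (std::move (ptr));
    }

private:
    // Runs on the message thread: drop entries nobody else references any more
    void timerCallback() override
    {
        const juce::ScopedLock sl (poolLock);
        pool.erase (std::remove_if (pool.begin(), pool.end(),
                                    [] (const auto& p) { return p.use_count() <= 1; }),
                    pool.end());
    }

    std::vector<std::shared_ptr<DataType>> pool;
    juce::CriticalSection poolLock;
};

updateData() would then call something like releasePool.add (newData) before the swap, so that when the audio thread later drops its copy, the pool still holds a reference and the actual deallocation happens on the message thread’s timer.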


Thank you so much for your feedback. Totally makes sense!
generateNewData in reality just creates a vector inside updateData and uses make_shared to create a shared_ptr out of it. But I’ll surely run into the deallocation issue from point 2.