The juce::ReadWriteLock is great as a general-purpose lock and certainly has its place in a library. But the usage pattern of the ReadWriteLock in GlyphCache is perfectly suited to the implementation I provided in VFLib, which has a lot of restrictions but incredible performance. It's optimized for many readers and infrequent writers, and it doesn't have to maintain any sort of thread list. In fact, compare the code for enterRead():
vf::ReadWriteMutex::enterRead()

void ReadWriteMutex::enterRead () const noexcept
{
    for (;;)
    {
        ++m_readers;

        if (m_writers > 0)
        {
            --m_readers;
            CriticalSection::ScopedLockType lock (m_mutex);
        }
        else
            break;
    }
}
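To make the pattern concrete, here is a minimal, self-contained sketch of the same optimistic fast-path idea using std::atomic and std::mutex. This is not VFLib's actual code; the class name FastReadWriteMutex, the readerCount() accessor, and the busy-wait in enterWrite() are all illustrative simplifications:

```cpp
#include <atomic>
#include <mutex>

// Hypothetical sketch of the fast-path read/write lock pattern.
// Readers optimistically increment a counter; only when a writer is
// active do they back out and block on the mutex.
class FastReadWriteMutex
{
public:
    void enterRead () const noexcept
    {
        for (;;)
        {
            ++m_readers;                 // optimistic: assume no writer

            if (m_writers.load() > 0)
            {
                --m_readers;             // back out and wait for the writer
                std::lock_guard<std::mutex> lock (m_mutex);
            }
            else
                break;                   // fast path: one increment, one compare
        }
    }

    void exitRead () const noexcept
    {
        --m_readers;
    }

    void enterWrite () const
    {
        m_mutex.lock();                  // serialise writers, block backed-out readers
        ++m_writers;

        while (m_readers.load() > 0)     // crude spin until in-flight readers drain;
        { }                              // a real implementation would yield or wait
    }

    void exitWrite () const
    {
        --m_writers;
        m_mutex.unlock();
    }

    int readerCount () const noexcept    // exposed here only for demonstration
    {
        return m_readers.load();
    }

private:
    mutable std::atomic<int> m_readers { 0 };
    mutable std::atomic<int> m_writers { 0 };
    mutable std::mutex m_mutex;
};
```

Note there is no per-thread bookkeeping at all, which is exactly why the fast path stays so cheap; the trade-off is that this lock is not recursive and cannot promote a read lock to a write lock.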
Acquiring a read lock in the fast path is one atomic increment and one integer comparison. Now compare:
juce::ReadWriteLock::enterRead()

void ReadWriteLock::enterRead() const noexcept
{
    const Thread::ThreadID threadId = Thread::getCurrentThreadId();
    const SpinLock::ScopedLockType sl (accessLock);

    for (;;)
    {
        jassert (readerThreads.size() % 2 == 0);

        int i;
        for (i = 0; i < readerThreads.size(); i += 2)
            if (readerThreads.getUnchecked (i) == threadId)
                break;

        if (i < readerThreads.size()
              || numWriters + numWaitingWriters == 0
              || (threadId == writerThreadId && numWriters > 0))
        {
            if (i < readerThreads.size())
            {
                readerThreads.set (i + 1, (Thread::ThreadID) (1 + (pointer_sized_int) readerThreads.getUnchecked (i + 1)));
            }
            else
            {
                readerThreads.add (threadId);
                readerThreads.add ((Thread::ThreadID) 1);
            }

            return;
        }

        const SpinLock::ScopedUnlockType ul (accessLock);
        waitEvent.wait (100);
    }
}
Also, correct me if I'm wrong, but when a juce::Array goes from numUsed == 1 to numUsed == 0, the removal code always calls minimiseStorageOverheads(), which frees the array's storage:
    if ((numUsed << 1) < data.numAllocated)
        minimiseStorageOverheads();
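The arithmetic behind that claim is easy to check. Here is a tiny sketch of the shrink condition quoted above (shouldMinimise is a hypothetical name for the check, not a JUCE function): when numUsed drops to 0, (0 << 1) == 0 is less than any non-zero allocation, so the storage is always released.

```cpp
// Hypothetical extraction of the shrink check from Array's removal path.
// Returns true when the array would call minimiseStorageOverheads().
bool shouldMinimise (int numUsed, int numAllocated) noexcept
{
    // Shrink whenever less than half of the allocated slots are in use.
    return (numUsed << 1) < numAllocated;
}
```

With numUsed == 0 and any numAllocated > 0 this is always true, which is why removing the last element frees the buffer, and the next add() must allocate again.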
I'm not complaining - the Juce code works, and I haven't profiled to determine whether this is even a problem. I just wanted to point out that every call to drawGlyph causes an extra memory allocation and deallocation that it doesn't have to.