Using ReadWriteLock in low latency situations


#1

Currently I use a normal CriticalSection to protect concurrent access to the same data structures from multiple threads. Because of priority inversion and other issues, I'm thinking of optimising the behaviour by using a ReadWriteLock, since most accesses are read-only, which in theory should benefit from a ReadWriteLock.

A potential problem I see is the readerThreads array, which allocates memory when new elements are added.
The memory should be pre-allocated; this could be done with ensureStorageAllocated in the constructor.

The other issue is that deleting an element can reallocate memory. The Array/ArrayAllocationBase should store the minNumElements (which was set by ensureStorageAllocated), and should never reallocate while the number of currently used elements is smaller than minNumElements!

// not checked yet, pseudo code
void ensureAllocatedSize (const int _minNumElements)
{
    minNumElements = _minNumElements;   // store minNumElements
    if (minNumElements > numAllocated)
        setAllocatedSize ((minNumElements + minNumElements / 2 + 8) & ~7);
}

void shrinkToNoMoreThan (const int maxNumElements)
{
    // the new allocated size has to stay bigger than minNumElements
    if (maxNumElements < numAllocated && minNumElements < maxNumElements)
        setAllocatedSize (maxNumElements);
}
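To check that the idea holds together, here is a self-contained sketch of the proposed behaviour in standard C++ (this is not JUCE's actual ArrayAllocationBase; the MinSizeArray class and its std::vector backing store are invented for the example, but the size arithmetic matches the pseudo code above):

```cpp
#include <cstddef>
#include <vector>

class MinSizeArray
{
public:
    void ensureAllocatedSize (const int _minNumElements)
    {
        minNumElements = _minNumElements;   // remember the floor
        if (minNumElements > numAllocated)
            setAllocatedSize ((minNumElements + minNumElements / 2 + 8) & ~7);
    }

    void shrinkToNoMoreThan (const int maxNumElements)
    {
        // never shrink below the stored minimum
        if (maxNumElements < numAllocated && minNumElements < maxNumElements)
            setAllocatedSize (maxNumElements);
    }

    int allocatedSize() const noexcept   { return numAllocated; }

private:
    void setAllocatedSize (const int n)
    {
        storage.resize ((std::size_t) n);   // stand-in for the real reallocation
        numAllocated = n;
    }

    std::vector<int> storage;
    int numAllocated = 0;
    int minNumElements = 0;
};
```

With this, ensureAllocatedSize (20) allocates 32 slots ((20 + 10 + 8) rounded down to a multiple of 8), and a later shrinkToNoMoreThan (10) is ignored because 10 is below the stored minimum of 20, so an oscillating array size no longer causes reallocation.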

The WaitableEvents that are used — do they perform as well as a normal CriticalSection?
Are there other performance related issues I don’t see?

PS:
I know the best solution is to avoid locks in low-latency situations altogether, but in the current situation it would be enough to use a ReadWriteLock, if it is at least (nearly) as fast as a normal CriticalSection.


#2

Yeah, that class was never designed for low-latency situations.

But TBH if you’re in a situation where doing an allocation would cause problems for you, then you really shouldn’t be using any kind of lock at all!


#3

hehe, I knew you would say something like that. Currently I use a CriticalSection and hold the lock as briefly as possible, which works fine so far (memory allocations I do outside the locks).
The only exception is when I read the whole project data at once (when saving); that holds the lock for a relatively long time. Because this is a read-only access, and the low-latency access is also read-only, a ReadWriteLock — if it is at least (nearly) as fast as a normal CriticalSection — would improve the current behaviour.

Apart from that, what do you think about a minimum size in the ArrayAllocationBase (as mentioned in the first post) to avoid reallocation when you know the size of an array is oscillating?


#4

I'm just wondering whether it would be possible to build a RWLock without any Events or Arrays, using just CriticalSections?
If somebody finds a deadlock or another wrong state in the code, please tell me!

[code]
// Pseudo code
class ReadWriteLock
{
public:
    ReadWriteLock() : numberReaders (0) {}

    void enterRead()
    {
        const ScopedLock sl (accessCS);

        if (numberReaders == 0)
            writeCS.enter();    // first reader blocks out writers

        ++numberReaders;
    }

    void exitRead()
    {
        const ScopedLock sl (accessCS);

        --numberReaders;

        if (numberReaders == 0)
            writeCS.exit();     // last reader lets writers in again
    }

    void enterWrite()
    {
        writeCS.enter();
    }

    void exitWrite()
    {
        writeCS.exit();
    }

private:
    CriticalSection accessCS;
    CriticalSection writeCS;
    int numberReaders;
};
[/code]


#5

oops, the writeCS access in enterRead/exitRead would need to be thread-independent — the first reader enters it, but a different reader thread may be the last one to exit it. Hmm, have to rethink …
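One way around that ownership problem is to drop the second lock entirely and block writers on a condition variable instead, so no thread ever has to release a lock it didn't acquire. A minimal sketch in standard C++ (std::mutex + std::condition_variable rather than JUCE classes; the SimpleReadWriteLock name is made up, and there is no writer-priority handling, so writers can starve under a constant stream of readers):

```cpp
#include <condition_variable>
#include <mutex>

class SimpleReadWriteLock
{
public:
    void enterRead()
    {
        std::unique_lock<std::mutex> lock (mutex);
        condition.wait (lock, [this] { return ! writerActive; });
        ++numberReaders;
    }

    void exitRead()
    {
        std::lock_guard<std::mutex> lock (mutex);
        if (--numberReaders == 0)
            condition.notify_all();   // wake any waiting writer
    }

    void enterWrite()
    {
        std::unique_lock<std::mutex> lock (mutex);
        condition.wait (lock, [this] { return numberReaders == 0 && ! writerActive; });
        writerActive = true;
    }

    void exitWrite()
    {
        std::lock_guard<std::mutex> lock (mutex);
        writerActive = false;
        condition.notify_all();       // wake readers and writers
    }

private:
    std::mutex mutex;
    std::condition_variable condition;
    int numberReaders = 0;
    bool writerActive = false;
};
```

The state that tracks "who is inside" is just a counter and a flag guarded by one mutex, so any reader can be the last one out without violating any lock-ownership rule.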


#6

No, it’d mean increasing the footprint of every Array object to hold that size, and it’s not useful enough to warrant that.

I might add your suggestion about an initial size, but probably less than 32 — it seems unlikely that many people would ever have that many simultaneous readers.


#7

not always, only if you define it

something like this: void ensureAllocatedSize (const int _minNumElements, bool ensureMinimumSizeAfterReallocating = false)


#8

And where would that value get stored?


#9

…as a member of the Array or ArrayAllocationBase class? (or use a more complicated mechanism like the DummyCriticalSection, if you worry about the additional memory)


#10

I don’t see the feature as important enough to justify the extra size or complexity, sorry.