Discuss: CircularSingleWriterSingleReaderLockFreePointerFIFO

I’m sure that’s true; I’ll have a look at your library in the future. I’m just fiddling around, trying to learn how things work :slight_smile:

There is a bug in it, in the calculation of the free space. I will have a look later. :frowning:

Ahh yes, there should be no writing when freeSpace is smaller than or equal to 1, because if the start and end index are the same, the current state is ambiguous. Will check this later.
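For reference, a common way to remove that ambiguity is to always keep one slot empty, so that start == end can only ever mean "empty". Here is a minimal sketch of the index arithmetic (names and layout are illustrative, not taken from the code under discussion):

```cpp
#include <cassert>

// Circular buffer of 'capacity' slots where one slot is always kept
// empty, so start == end unambiguously means "empty" and never "full".
int usedSpace (int start, int end, int capacity)
{
    return (end >= start) ? (end - start)
                          : (capacity - (start - end));
}

int freeSpace (int start, int end, int capacity)
{
    // One slot is sacrificed so the empty and full states stay distinct.
    return capacity - 1 - usedSpace (start, end, capacity);
}
```

With this convention the writer simply refuses to write when freeSpace() is 0, and the equal-index state is never ambiguous.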

[quote]if (freeSpace <= 1)
	{
		return false;
	}
[/quote]

This is your problem right here. There will never be anything simple about writing correct, concurrent code. Herb Sutter expresses this rather well:

This is why I wrote the concurrent code in VFLib and published it under the MIT License. It is thoroughly debugged, optimized, and correct, so that there is a reliable building block for developing concurrent systems.

Vinnie, I don’t have a problem, you don’t have a problem, we all have no problems. I have fewer requirements here (single writer/single reader), and you have a nice library. That’s fine and nice, but you don’t have to tell me so in every second post. Let me be so stupid as to create a solution which may fit my requirements perfectly. And if you see a race condition or something else wrong with my code, I would be happy if you could tell me what and why.

(The thing with the free space I still have to correct first, but no time yet.)

I have a small portion of code (not a library with hundreds of files); the only thing I wanted was for somebody on the forum to discuss it with me. Instead I got a pseudo-discussion…

[quote=“TheVinn”]

http://en.wikipedia.org/wiki/Not_invented_here[/quote]
Vinnie, believe me or not, you really put me in a bad mood this evening! :x
Posting links about syndromes, telling me I have a problem... sorry, I take this personally, really. How many posts from you in this topic refer to my code example? None.

[quote=“chkn”]The circular buffer works exactly the way it does in AbstractFifo; do you notice the atomic indexes for the start and the end?
And about the array, I'm just using the Array template; it's just a more comfortable version of a dynamic array allocated with "new". There is really nothing fancy about it. (I could also use setUnchecked to make it more similar to a plain C++ dynamic array.)[/quote]

Only as long as Jules doesn’t change the way the Array class works internally, since your code makes assumptions about those internals.

Just having Atomic markers is not enough. Consider this in your write function:

[code]const int vs = validStart.get();
const int ve = validEnd.value;
const int freeSpace = ve >= vs ? (arrayOfPointers.size() - (ve - vs)) : (vs - ve);[/code]

So, imagine this. After you load vs, this thread could be scheduled out. Now the reader could read from the queue. Is vs still valid? Then the thread runs for a bit, you load ve, and it gets pre-empted again. Are vs and ve still valid? Do they bear any relation to each other any more? And so it goes on: your free-space calculation will be effectively random. Continue this reasoning throughout your code. You need to consider the thread being pre-empted at any point.
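To make the point concrete, here is a single-threaded simulation of that interleaving (the names validStart/validEnd mirror the snippet above, but the reader's action is faked inline rather than run on a real second thread):

```cpp
#include <atomic>
#include <cassert>

// Simulated shared indices (names hypothetical, mirroring the snippet).
std::atomic<int> validStart { 0 };
std::atomic<int> validEnd   { 5 };

int snapshotFreeSpace (int capacity)
{
    const int vs = validStart.load();    // writer reads the start index...

    // ...and is pre-empted here; simulate the reader consuming everything:
    validStart.store (validEnd.load());

    const int ve = validEnd.load();      // writer resumes, reads the end index
    // vs and ve never coexisted: this "snapshot" describes a state
    // that no longer exists by the time it is computed.
    return (ve >= vs) ? capacity - (ve - vs) : (vs - ve);
}
```

In the strict single-writer/single-reader case this particular staleness happens to be harmless, because the writer can only underestimate the free space; with any additional reader or writer, no such guarantee holds.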

It’s just logic, yes, as long as you know exactly what assembly instructions will be generated on every platform and compiler the code might run on.

On a side note, checking arrayOfPointers.size() is just wonky: you’ve built a fundamentally fixed-size system, and you’re checking it like a dynamic one. Oddly enough, that function call has probably injected a memory barrier that prevents the compiler from re-ordering the statement, for instance by folding the validStart.get() call right into your free-space calculation.

I can tell you why I’m not going to post exact code examples that work the exact way you want (without the helpful tools you refuse to make use of): I’m not qualified. I have done enough research, and enough troubleshooting of complex multi-threaded systems, to know when something is outside my area. I’d rather ‘stand on the shoulders of giants’ than spend all that time getting my head around the current research and hoping I didn’t accidentally put one statement in the wrong order.

If you really want to do it from scratch, do it from scratch: don’t use the Juce atomic classes or Arrays, use the platform primitives. Your insistence on not using the provided high-level AbstractFifo, while using the inappropriate high-level Array class, is illogical.

Bruce

[quote=“chkn”] I don’t want to write

[code] prepareToWrite (1, start1, size1, start2, size2);

if (size1 > 0)
{
timesBuffer [start1] = inMsg;
written = 1;
}
else if (size2 > 0)
{
timesBuffer [start2] = inMsg;
written = 1;
}

//jassert (written > 0);

finishedWrite (written);[/code]
instead I wanna write

write(inMsg) [/quote]

I’m sorry, I missed this Straw Man gem, or I wouldn’t have wasted my time trying to help you. Obviously, the only way to get that code down to one line is to do it your way. Continue.

Bruce

It is certainly not my intention to upset you. But let’s start with your original post:

[quote=“chkn”]Please Discuss:
I missed a simple fifo to pass pointers between threads[/quote]

What you’ve been hearing from me and Bruce is that a “simple fifo to pass pointers between threads” does not exist. Concurrent objects are difficult to code, difficult to debug, and difficult to maintain. Herb Sutter said it best in his quote (does his opinion count for anything?)

You said that you don’t need memory management, or a signaling mechanism, but then you contradicted yourself:

Deleting an object pulled from the queue is memory management.

Now in your pseudocode you write:

So, it turns out you don’t really want a FIFO, you only care about the “latest” version of some object, presumably a state. Let me point you to my post from two years ago:

Best practice for threading and locking.

The code from the post:

    #include <algorithm> // for std::swap (Vf::Mutex comes from VFLib)

    // Simple Functor to call update()
    template<class Object>
    struct call_update
    {
      call_update( Object& object )
        : m_object( object )
      {
      }

      void operator()()
      {
        m_object.update();
      }

    private:
      Object& m_object;
    };

    //--------------------------------------------------------------------------

    // Allows data produced by one thread to be safely accessed
    // by another thread without using a lock, by making copies.
    // To guarantee that the reader always makes progress towards
    // a new state during an update, and to provide assurances
    // that the duration of the lock is constant, three copies
    // of the data are required.

    // Data must be copy constructible and support assignment.
    template<class Data>
    class shared_data
    {
    public:
      // Create the SharedData object from the initial state.
      // - Data must be copy constructible and support assignment.
      // - Execution time is O(n) on copying Data.
      explicit shared_data( const Data& data )
        : m_copy1( data )
        , m_copy2( data ) // so that Data does not require a default ctor
        , m_copy3( data )
        , m_read( &m_copy1 )
        , m_next( &m_copy2 )
        , m_write( &m_copy3 )
        , m_bUpdate( false )
      {
      }

      // Place new information in the SharedData. The producing
      // thread usually calls this.
      // - The lock is held for O(1)
      // - Execution time is O(n) on copying Data.
      shared_data& operator=( const Data& data )
      {
        // The copy is performed outside the lock
        *m_write = data;

        m_mutex.Lock();

        // Tell the reader to pick it up
        std::swap( m_next, m_write );
        m_bUpdate = true;

        m_mutex.Unlock();

        return *this;
      }

      // Boost convention
      void reset( const Data& data )
      {
        this->operator=( data );
      }

      // Retrieve new information. The method used for signaling
      // the reading thread that new data is available is up to caller.
      // - The lock is held for constant time.
      // - Execution time is O(1)
      bool update()
      {
        bool bChanged = false;

        if( m_bUpdate )
        {
          m_mutex.Lock();
          if( m_bUpdate )
          {
            // Pick up the next state. This guarantees progress.
            std::swap( m_read, m_next );
            m_bUpdate = false;
            bChanged = true;
          }
          m_mutex.Unlock();
        }

        return bChanged;
      }

      // Return a Functor that calls update()
      call_update<shared_data<Data> > deferred_update()
      {
        return call_update<shared_data<Data> >( *this );
      }

      // Accessors for the reader's view of the Data.
      // The object remains valid until the next call to update().
      // - Execution time is O(1)

      Data& operator*()
      {
        return *m_read;
      }

      Data const& operator*() const
      {
        return *m_read;
      }

      Data* operator->()
      {
        return m_read;
      }

      const Data* operator->() const
      {
        return m_read;
      }

    private:
      Data m_copy1;
      Data m_copy2;
      Data m_copy3;
      Data* m_read;  // reader's current view of the data
      Data* m_next;  // what the reader should select on the next Update()
      Data* m_write; // safe place for the writer to copy the data
      bool m_bUpdate;
      Vf::Mutex m_mutex;
    };

This will achieve your goal without any FIFO at all. As written it is not lock-free but the lock is held for a fixed time independent on the size of the object.
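For illustration, here is a compact, self-contained sketch of the same triple-buffer idea, using std::mutex in place of Vf::Mutex and a stand-in State struct; it drops the functor helper and the double-checked flag test for brevity:

```cpp
#include <cassert>
#include <mutex>
#include <utility>

// Stand-in for any copyable data type.
struct State { int value; };

class SharedState
{
public:
    explicit SharedState (const State& s)
        : copy1 (s), copy2 (s), copy3 (s),
          readPtr (&copy1), nextPtr (&copy2), writePtr (&copy3) {}

    // Producer: copy the new state outside the lock, swap inside it.
    void set (const State& s)
    {
        *writePtr = s;
        std::lock_guard<std::mutex> guard (mutex);
        std::swap (nextPtr, writePtr);
        pending = true;
    }

    // Consumer: swap in the latest published state, if any.
    bool update()
    {
        std::lock_guard<std::mutex> guard (mutex);
        if (! pending)
            return false;
        std::swap (readPtr, nextPtr);
        pending = false;
        return true;
    }

    // Valid until the next call to update().
    const State& get() const { return *readPtr; }

private:
    State copy1, copy2, copy3;
    State* readPtr;   // reader's current view
    State* nextPtr;   // what the reader picks up next
    State* writePtr;  // safe place for the writer to copy into
    bool pending = false;
    std::mutex mutex;
};
```

The producer calls set() as often as it likes; the consumer calls update() and, if it returns true, reads the new state via get(). The lock is only ever held for two pointer swaps.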

You asked people to discuss this with you a few times, and your request was fulfilled. There’s no reason to get upset because it isn’t what you wanted to hear!

By simple I also mean correctly working! You’re taking the word “simple” too seriously; by simple I don’t mean broken or inaccurate, of course. I mean as simple as possible!

That is exactly the solution I had before. And if you read my posts earlier in this thread, you may notice I don’t want any CriticalSections/Locks. And there IS a reason for it.

Believe me, that’s a real issue, not fantasy! Of course you can say that’s not true, but it’s real! (at least on Windows)

There is good reason to be upset if you have the feeling that people don’t really go into the details of the implementation, and when you try to clarify, you get “if you don’t accept that someone knows better”, “I have a solution which is perfect”, or are told you have a “problem” or a “not invented here” syndrome. I think that’s a good reason to be upset!

[quote=“TheVinn”]chkn wrote:
if the queue is full, additional objects will be deleted…a second fifo, to free them

Deleting an object pulled from the queue is memory management.
[/quote]
Yes, the FIFO has no memory management; the second class, “ConsumerProducerManager”, owns the pointer (as I stated before), and yes, that’s memory management. (It’s a demonstration of how two FIFOs can be used to ensure that all memory allocation and freeing happens only on the producer thread.)

Sorry, I read Vinnie’s response first; now to Bruce :slight_smile:

First I want to say THANKS, that’s exactly the kind of response this thread was intended for.
We need vs to calculate the free space (and because we have only one writer by definition, the free space cannot shrink while this function runs).
So if the free space is bigger than 1 (because we need to prohibit the ambiguous state freeSpace == 0), there will be enough space to write, because for writing we only need the reliable end position.
So the worst thing that can happen, if the queue is nearly full and the read thread reads at the same time, is that the write method needs a second attempt (because the atomic start position /* edit 2012/11/13 */ is updated only at the end of the read operation). But that’s perfectly OK, because if the queue is full, it is full; that’s the exceptional case.
If we find a mistake here, the chances are high that AbstractFifo has a similar problem.
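For comparison, here is roughly what such a single-producer/single-consumer pointer FIFO can look like when one slot is kept empty and the start index is advanced only after the slot has been consumed. This is a sketch under those assumptions, not the code from this thread:

```cpp
#include <atomic>
#include <cassert>

// Minimal single-producer/single-consumer pointer FIFO. One slot is
// always left empty so start == end can only mean "empty". Names are
// illustrative, not from the original code.
template <typename T, int Capacity>
class SpscPointerFifo
{
public:
    bool push (T* item)   // producer thread only
    {
        const int e = end.load (std::memory_order_relaxed);
        const int next = (e + 1) % Capacity;
        if (next == start.load (std::memory_order_acquire))
            return false; // full (one slot kept empty)
        slots[e] = item;
        end.store (next, std::memory_order_release);
        return true;
    }

    T* pop()              // consumer thread only
    {
        const int s = start.load (std::memory_order_relaxed);
        if (s == end.load (std::memory_order_acquire))
            return nullptr; // empty
        T* item = slots[s];
        // The start index is advanced only after the slot has been
        // read, matching the "updated at the end of the read" point.
        start.store ((s + 1) % Capacity, std::memory_order_release);
        return item;
    }

private:
    T* slots[Capacity] = {};
    std::atomic<int> start { 0 };
    std::atomic<int> end   { 0 };
};
```

Because each index is written by exactly one thread and read by the other, a stale value can only make push() or pop() conservatively report full or empty; it can never corrupt a slot.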