What's the simplest way to make a circular buffer using JUCE?

#1

Hey everyone, I’m trying to create a nice and simple circular buffer.

This is how it’s done using Boost:

#include <queue>
#include <boost/circular_buffer.hpp>

typedef std::queue<my_type, boost::circular_buffer<my_type>> my_queue;
const int n = 3;  // this sets the buffer size
...
my_queue q(boost::circular_buffer<my_type>(n));
q.push(1);   // (my_type is just int in this example)
q.push(2);
q.push(3);
q.push(4);   // queue now contains 2,3,4 - the oldest value was overwritten

Is there an equally simple way to achieve this outcome using JUCE?

I had a poke around at stuff like AbstractFifo, but that seemed overly complicated for a relatively simple procedure.
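
For context, this is roughly the shape AbstractFifo usage seems to take (it only manages the read/write indices; you supply the storage yourself, and unlike the Boost example it won’t overwrite old data when full), so treat this as a sketch rather than gospel:

#include <vector>
// (assumes the JUCE headers are included for juce::AbstractFifo)

juce::AbstractFifo fifo { 1024 };
std::vector<float> storage (1024);

void push (float value)
{
    int start1, size1, start2, size2;
    fifo.prepareToWrite (1, start1, size1, start2, size2);

    if (size1 > 0)
        storage[(size_t) start1] = value;   // nothing is written if the FIFO is full

    fifo.finishedWrite (size1 + size2);
}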

#2

Here’s the old-school method with a running sum… C-style… you can make a nice class of it of course (see the sketch after the code), but I thought it would be nice to show the bare bones…

#include <math.h>   // for fabs()

#define bsize 256
float buffer[bsize];
int   bufferP = 0;   // current write position
float running = 0;   // running sum of the buffer contents

void pushFIFO(float in_newV)
{
    const float old = buffer[bufferP];   // value about to be overwritten
    running -= old;
    const float newV = fabs(in_newV);    // fabs for magnitude kind of stuff of course
    buffer[bufferP] = newV;
    running += newV;
    bufferP++;
    bufferP %= bsize;                    // wrap back to the start of the buffer
}

void resetFIFO()
{
    for (int i = 0; i < bsize; i++) buffer[i] = 0;  // or use memset()
    running = 0;
    bufferP = 0;
}
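
If you do want it as a class, here is a minimal sketch of the same idea (the name RunningSumFifo is just for illustration):

#include <math.h>

class RunningSumFifo
{
public:
    void push(float in_newV)
    {
        const float newV = fabs(in_newV);
        running -= buffer[pos];          // subtract the value being overwritten
        buffer[pos] = newV;
        running += newV;
        pos = (pos + 1) % bsize;
    }

    float sum() const     { return running; }
    float average() const { return running / bsize; }

    void reset()
    {
        for (int i = 0; i < bsize; i++) buffer[i] = 0;
        running = 0;
        pos = 0;
    }

private:
    static constexpr int bsize = 256;
    float buffer[bsize] = {};            // zero-initialised
    int   pos = 0;
    float running = 0;
};
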
#3

Thanks for the reply, that’s a really nice solution. So I guess there’s nothing within JUCE that can condense that down into smaller objects?

I was previously doing this:

std::queue<int> sampleQueue;
const std::size_t sampleQueueSize = 2100;

...

// sampleValue is being set at the callback to this function

if (sampleQueue.size() < sampleQueueSize)   // '<' keeps the queue capped at sampleQueueSize values
{
    sampleQueue.push(sampleValue);
}
else
{
    sampleQueue.pop();                      // discard the oldest value first
    sampleQueue.push(sampleValue);
}

But yeah your way is a lot more robust. Cheers :slight_smile:

#4

std::queue is not a good choice for audio DSP code. (It is too complicated internally.)

#5

What are you using it for? If it’s for audio samples, you most likely want to write/read in blocks of samples rather than one at a time.

#6

@Anima @Xenakios I agree it would be cumbersome for audio but it’s actually not audio - it’s sampling a slider value and retaining the data in a queue.

However, I’ve ended up swapping queue for deque because it has far more options in terms of data access.
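
For what it’s worth, the bounded-history version with std::deque looks roughly like this (sliderHistory and maxHistory are just illustrative names):

#include <cstddef>
#include <deque>

std::deque<double> sliderHistory;
const std::size_t maxHistory = 2100;

void pushSliderValue (double value)
{
    sliderHistory.push_back (value);     // newest value goes at the back
    if (sliderHistory.size() > maxHistory)
        sliderHistory.pop_front();       // drop the oldest value

    // random access is available, e.g. sliderHistory[0] is the oldest retained value
}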

#7

std::deque isn’t really an improvement. std::queue sucks partly because by default it internally uses std::deque. Anyway, if you don’t need really high real-time performance, pretty much anything will work.

#8

Ah ok, thanks for the info. I’m reading/writing to the deque at around 4000 Hz - is that gonna bring up problems?

If so, what would a high-performance real-time solution be?

#9

Something that doesn’t do surprising memory allocations and has the data elements contiguously in memory. So, a circular buffer (that doesn’t allow the buffer to grow)… :slight_smile:
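
As a minimal sketch of what I mean (all the names here are just illustrative):

#include <cstddef>
#include <vector>

class FixedCircularBuffer
{
public:
    explicit FixedCircularBuffer (std::size_t capacity)
        : storage (capacity, 0.0f) {}        // one allocation up front; never grows

    void push (float value)
    {
        storage[writePos] = value;           // overwrite the oldest slot
        writePos = (writePos + 1) % storage.size();
    }

    // index 0 = the slot that will be overwritten next (zero until the buffer has wrapped once)
    float operator[] (std::size_t i) const
    {
        return storage[(writePos + i) % storage.size()];
    }

    std::size_t size() const { return storage.size(); }

private:
    std::vector<float> storage;
    std::size_t writePos = 0;
};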

#10

There are more questions to be asked about the use case:

  • do you want the events to be collated, or does each change matter?
  • if each change matters, do you need any synchronisation, e.g. attaching a timestamp?
  • wouldn’t it be best to use the MIDI controller mechanisms that are already available?

#11

If your buffer size is 256 or 65536, you can avoid the expensive % operation by using uint8 or uint16 and letting it overflow back to 0.

(Edit: This isn’t in reply to Xenakios, but to your original post)
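
A tiny sketch of that trick, assuming a buffer of exactly 256 elements:

#include <cstdint>

float buffer[256];
uint8_t writePos = 0;          // wraps from 255 back to 0 automatically

void push (float value)
{
    buffer[writePos] = value;
    ++writePos;                // no % needed: unsigned wrap-around is well defined
}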

#12

I would advise against any micro-optimisation in the design stage, but while we’re at it, you can do a modulo for any power of two number in one cycle:

e.g. uint8 modulo 16:

uint8_t counter = 0;
uint8_t size = 16;                  // must be a power of two
auto cmod = counter & (size - 1);   // same as counter % size

#13

I needed this for a project. It’s a circular fixed-delay buffer with a couple of extra features (sum, absolute max).

#include <cmath>          // std::abs
#include <type_traits>    // std::is_floating_point
// Array, jmax and JUCE_DECLARE_NON_COPYABLE_WITH_LEAK_DETECTOR come from the JUCE headers

template<typename FloatType>
struct FixedDelayBuffer
{
    static_assert(std::is_floating_point<FloatType>::value,
                  "template type must be a float or double");

    explicit FixedDelayBuffer(int size)
    {
        size = size + 1;                    // one extra slot keeps read and write apart
        array.ensureStorageAllocated(size);
        array.insertMultiple(0, {}, size);

        write = array.size() - 1;
    }
    FloatType readSample() const noexcept { return array[read]; }
    FloatType writeSample(FloatType sample)
    {
        auto discarded = array[write];      // the value being pushed out
        array.setUnchecked(write, sample);
        ++write;
        if( write > array.size() - 1 )
            write = 0;
        ++read;
        if( read > array.size() - 1 )
            read = 0;
        updateMax();
        updateSum();
        return discarded;
    }
    FloatType findMax() const { return max; }
    FloatType findSum() const { return sum; }
private:
    Array<FloatType> array;
    int read = 0;
    int write;
    FloatType max = 0;
    FloatType sum = 0;

    void updateMax()
    {
        max = 0;                            // tracking the absolute max, so 0 is a safe floor
        for( auto x : array )
            max = jmax(max, std::abs(x));
    }
    void updateSum()
    {
        sum = 0;
        for( auto x : array )
            sum += x;
    }
    JUCE_DECLARE_NON_COPYABLE_WITH_LEAK_DETECTOR(FixedDelayBuffer)
};
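
A quick (hypothetical) usage example, using the slider-history length mentioned earlier:

void example()
{
    FixedDelayBuffer<float> history (2100);

    float newValue  = 0.5f;                            // whatever the latest value is
    float discarded = history.writeSample (newValue);  // push a value in, get the displaced one back
    float delayed   = history.readSample();            // the sample from the far end of the buffer
    float peak      = history.findMax();               // largest absolute value currently held
    float total     = history.findSum();               // sum of everything currently in the buffer
}
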
#14

There’s nothing better than wasting a good day unnecessarily micro-optimising something.

This is ours, using the handy juce::nextPowerOfTwo function, where buf is any kind of raw memory-style buffer (juce::Array’s square-bracket operator includes an additional safety check…)

// members (not shown in the post) are assumed to be something like:
//     std::vector<float> buf;   int mask, ptr;

SingleDelay (int bsize)
    :
    buf (nextPowerOfTwo (bsize)), // size
    mask (nextPowerOfTwo (bsize) - 1),
    ptr (mask) {}

void put (float inSignal)
{
    jassert (mask > -1);
    buf[mask & --ptr] = inSignal;   // ptr just counts down; the mask keeps the index in range
}

float get (int delay) const
{
    jassert (mask > -1);
    return buf[(ptr + delay) & mask];
}

#15

Nice one!
For trivial types like float that is a nice solution, but always be wary with Array::operator[](int): it returns by value and not by reference (i.e. it returns a copy), which bit me a few times when I was starting with JUCE.

#16

I’d be very surprised if the compiler didn’t already do this optimisation for you when doing a power-of-two modulo, so it’s better just to write x % 32768 for readability’s sake.
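
As a small illustration (assuming an unsigned index, where the constant modulo and the mask are interchangeable for the compiler):

#include <cstdint>

uint32_t wrap (uint32_t x)
{
    return x % 32768;   // with an unsigned operand and a constant power of two,
                        // this compiles to the same thing as x & 32767
}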

Something I miss from my 68k assembler days is hand optimising algorithms to get the smallest number of CPU cycles possible, but I used to be shockingly bad at commenting and my code would be unreadable after a time away from a particular piece :joy:

#17

I guess it depends on whether the modulus is a compile-time constant. Otherwise, how could the compiler do that?

#18

I don’t know if it’s fixed, but it also uses memcpy rather than move constructors when expanding the array, which can lead to some fun behaviour :slight_smile:

I’d say prefer std::vector<> unless you are certain you want Array<>.

#19

+1

although, IIRC there is a check for “is trivially copyable”, and if that is false, it doesn’t use memcpy any longer.

#20

@matkatmusic thanks so much, this kind of example is exactly what I was after :slight_smile: