I have fixed the issue now.
However, my delay time slider is not effecting the actual delay output. I’m not sure if this is an issue with int and float value calculations equalling 0?
I’m not too sure what I’m missing, but there must be some gaps in my knowledge.
Here is my understanding of the concepts as it stands:
- I need to be copying the samples from one buffer into a delay buffer of size MAX_DELAY * SR.
MAX_DELAY * SR = Delay buffer size
my delay buffer is 2 * SR (eg 44100) = 88200 samples (I want it to only be 150/ms worth of samples, to make a short textural delay, but I thought instead I could create a buffer of 2 seconds and then simply have the parameter value go from 0-150m/s due to the issues in the thread above)
I’m then filling this with 64 samples at a time, at the end of each buffer the write pos goes to buffer index+1 so, for example the second buffer starts writing into the delay buffer at sample no 64 (starting at index 0).
In the constructor I have initialised the buffers to a nullptr so I can test that they’re empty in the destructor and clear them if not before the delay begins.
I have also set initial arguments for my Feedback, Write head, Circular buffer length, DT, DT smoothed and added my desired parameters:
{
mCircularBufferLeft = nullptr;
mCircularBufferRight = nullptr;
mFeedbackLeft = 0; // instantiate the values to 0
mFeedbackLeft = 0;
mDelayWriteHead = 0; // MD default delay time to 0 samples
mCircularBufferLength = 0; // MD until further notice
mDelayTimeInSamples = 0;
mDelayPlayHead = 0;
mDelayTimeSmoothed = 0;
// mDryWet = 0.3;
// mFeedbackAmount = 0.8; // used these before creating the parameters to test the code
addParameter(mDelayTimeParameter = new AudioParameterFloat("delaytime",
"Delay Time",
0,
150.0f,
20.0f));
addParameter(mFeedbackParameter = new AudioParameterFloat("feedback",
"Feedback",
0.0f,
0.99f,
0.1f));
addParameter(mDryWetParameter = new AudioParameterFloat("drywet",
"Dry / Wet",
0.0f,
1.0f,
0.5f));
addParameter(mDelayOutputLevelParameter = new AudioParameterFloat("output",
"Output Level",
0.0f,
1.0f,
1.0f));
}
In the constructor I have assigned these to a nullptr, so I can test that there is nothing in the buffer upon loading the plugin.
test in destructor:
if (mCircularBufferLeft != nullptr)
{
delete [] mCircularBufferLeft;
}
if (mCircularBufferRight != nullptr)
{
delete [] mCircularBufferRight;
}
-
I have a read head, which is the sample at [index] in the main buffer, which I am copying into my delay buffer at [index].
-
The delay time is the distance in samples [index] between the read head and the write head positions set by:
mDelayTimeInSamples = mDelayTimeSmoothed * sampleRate;
in my prepare to play
-
Other than in ptp I create a variable mLastSampleRate so I can call the sample rate in the process block later to calculate DT smoothed.
-
Then initialise the length of each buffer (after Daniels help I’m not sure cast to int where I’m running into issues at my DT parameter values are float)? also, I don’t fully understand the latter section in the code:
zeromem(mCircularBufferLeft, mCircularBufferLength * sizeof(float));
- Test that the buffers point to the empty pointer then create them at set length
void AsdbasicDelayAudioProcessor::prepareToPlay (double sampleRate, int samplesPerBlock)
{
float mLastSampleRate = getSampleRate();
// MD create the object and its memory space
if (mCircularBufferLeft == nullptr)
{
mCircularBufferLeft = new float [static_cast<int>(sampleRate * MAX_DELAY_TIME)];
zeromem(mCircularBufferLeft, mCircularBufferLength * sizeof(float));
}
if (mCircularBufferRight == nullptr)
{
mCircularBufferRight = new float [static_cast<int>(sampleRate * MAX_DELAY_TIME)];
zeromem(mCircularBufferRight, mCircularBufferLength * sizeof(float));
}
//mDelayWriteHead = 0; MD just in case things break
mCircularBufferLength = (int)sampleRate * MAX_DELAY_TIME;
float mDelayTimeSmoothed = *mDelayTimeParameter;
mDelayTimeInSamples = mDelayTimeSmoothed * sampleRate;
}
And finally here is my process block:
I have seen implementations creating a AudioBuffermDelayBuffer in the .h and then just using channel as an argument, but my teacher showed us the concept using buffers for each channel.
void AsdbasicDelayAudioProcessor::processBlock (AudioBuffer<float>& buffer, MidiBuffer& midiMessages)
{
ScopedNoDenormals noDenormals;
auto totalNumInputChannels = getTotalNumInputChannels();
auto totalNumOutputChannels = getTotalNumOutputChannels();
for (auto i = totalNumInputChannels; i < totalNumOutputChannels; ++i)
buffer.clear (i, 0, buffer.getNumSamples());
float * leftChannel = buffer.getWritePointer(0); //gets the sample position (index) of the buffer for each channel
float * rightChannel = buffer.getWritePointer(1);
for (auto i = 0; i < buffer.getNumSamples(); i++)
{
mDelayTimeSmoothed = mDelayTimeSmoothed - 0.2 * (mDelayTimeSmoothed - *mDelayTimeParameter);
mDelayTimeInSamples = mDelayTimeSmoothed * mLastSampleRate;
mDelayPlayHead = mDelayWriteHead - mDelayTimeInSamples;
if (mDelayPlayHead < 0)
{
mDelayPlayHead += mCircularBufferLength;
}
float delay_sample_left = mCircularBufferLeft[(int)mDelayPlayHead]; // MD store the delay values
float delay_sample_right = mCircularBufferRight[(int)mDelayPlayHead]; // MD store the delay values
mFeedbackLeft = delay_sample_left * *mFeedbackParameter; // 0.8 is feedback coefficient
mFeedbackRight = delay_sample_right * *mFeedbackParameter;
mCircularBufferLeft [mDelayWriteHead] = leftChannel[i] + mFeedbackLeft;
mCircularBufferRight[mDelayWriteHead] = rightChannel[i] + mFeedbackRight;
buffer.setSample(0, i, leftChannel[i] * ((1 - *mDryWetParameter) + mFeedbackLeft * *mDryWetParameter) * *mDelayOutputLevelParameter);
buffer.setSample(1, i, rightChannel[i] * ((1 - *mDryWetParameter) + mFeedbackRight * *mDryWetParameter) * *mDelayOutputLevelParameter);
mDelayWriteHead++; //Moves the position of the right head along to the next sample
if (mDelayWriteHead > mCircularBufferLength) // Ensures we wrap around the buffer
{
mDelayWriteHead = 0; // wrap around
}
}
}
-
Here I create a pointer to the buffers for each channel
-
Then a for loop to process each sample in the channels
-
I set my ST smoothed which is the value from the parameter -0.2 * (DT smoothed - *pointer to param value)
-
Initialise my DT in samples to DTsmoothed * SR variable in the prepare to play (not sure If this is callable here) but calling getSampleRate can’t be done in processblock as far as I understand.
-
I set the playhead position in the delay buffer to be the write head - the Delaytime in samples so its 6,615 samples behind at a DT of 150m/s and 44100 SR.
-
I’m not 100% sure on the += test but I believe it means when the playhead reaches < 0 is adds the buffer length to the playhead so we maintain the distance between the write and playhead?
-
Then I process the delay buffers with feedback, dry/wet, and outputlevel for each sample
-
increment the play head
-
wrap around if the play head exceeds the circ buffer length
Sorry for the long afternoon read, but I really want to figure out where I’m going wrong!! plus cement my understanding of the principles before my hand in and presentation.
Take care,
Zander