Problem with multiple input processing AudioSources [solved]


#1

Hi everybody, I’ve got a problem with some AudioSources that process input data and return it to the same device’s outputs.

I have several (mono) audio inputs which I want to process (gain, delay and so on) according to parameters received via MIDI, and return every mono input on every active output.

So I create several AudioSources according to the number of active inputs in my AudioCallback::audioDeviceAboutToStart and add them to a MixerAudioSource. The mixer feeds the audio device through an AudioSourcePlayer.

When, for testing, I generate sine waves in my AudioSources (to all outputs), everything works fine. One source copying a specified input to all outputs works as well. But with two sources copying different inputs to all outputs, the audio card’s output mixer shows an overrun on all outputs and I hear nothing.
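
For reference, the sine test source looked roughly like the sketch below - reconstructed from memory, so the class name, the fixed 440 Hz tone and the phase handling are placeholders rather than my actual test code:

[code]// sketch of the sine test (hypothetical names; 'phase' and 'sampleRate' are members)
void SineTestSource::getNextAudioBlock (const AudioSourceChannelInfo& bufferToFill)
{
    const double phasePerSample = 2.0 * double_Pi * 440.0 / sampleRate;

    // write the same sine wave to every channel of the output buffer
    for (int chan = 0; chan < bufferToFill.buffer->getNumChannels(); ++chan)
    {
        float* out = bufferToFill.buffer->getSampleData (chan, bufferToFill.startSample);

        for (int i = 0; i < bufferToFill.numSamples; ++i)
            out[i] = (float) (0.25 * sin (phase + i * phasePerSample));
    }

    phase += bufferToFill.numSamples * phasePerSample;
}[/code]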

This may be because I got something utterly wrong, but I don’t see where. So I’d appreciate any hint…

Thx, Ingo


#2

Can you give more details of the problem?
How many inputs do you have, and what exactly are you doing?


#3

I am testing at the moment, so the number of channels is quite limited (2 ins, 4 outs).

My AudioSources are created and added to the mixer in AudioIODeviceCallback::audioDeviceAboutToStart according to the number of active input channels. The constructor of each AudioSource takes a parameter specifying which channel it should process (one source for every input).
In AudioSource::getNextAudioBlock I copy this channel into a local AudioSampleBuffer (where the data will be processed later) and then copy it back to all channels of the buffer passed to getNextAudioBlock. There are more active outs than ins, so I assume that all channels in the buffer I hand back are treated as output data.
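
In short (using the names from the code I post in #4 below), the body of getNextAudioBlock boils down to this:

[code]// copy the assigned input channel into a local buffer, then fan it out to every channel
localAudioBuffer->copyFrom (0, 0, *bufferToFill.buffer, channelToProcess,
                            bufferToFill.startSample, bufferToFill.numSamples);

bufferToFill.clearActiveBufferRegion();

for (int ch = 0; ch < bufferToFill.buffer->getNumChannels(); ++ch)
    bufferToFill.buffer->copyFrom (ch, bufferToFill.startSample, *localAudioBuffer,
                                   0, 0, bufferToFill.numSamples);[/code]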

At the moment this should simply loop back all active inputs on all active outputs simultaneously. And it works for one input (no matter which one), but it shows the mentioned overrun and makes no sound when more than one input is active. It’s not just an ordinary overload from summing the signals; I have different signals on each input and their input level is at -15 dB…

In the end it should apply EQ, level and delay (perhaps even a room simulation) to as many inputs as possible and route them to 128 outputs. I know that JUCE is limited to 64 channels at a time, but Jules said he would change this, and if I need it earlier I hope that hacking around this limitation isn’t too complex.


#4

Hi everybody.

I have not yet found the problem, so perhaps it’s best if I post my code:

LoopbackSource.h

[code]#include "juce.h"

class LoopbackSource : public AudioSource
{
private:
    int channelToProcess;
    int samplesPerBlockExpected;
    double sampleRate;
    AudioSampleBuffer* localAudioBuffer;

public:
    LoopbackSource (int channelIndexToProcess);
    ~LoopbackSource();

    void prepareToPlay (int samplesPerBlock, double sampleR);
    void releaseResources();
    void getNextAudioBlock (const AudioSourceChannelInfo& bufferToFill);
};[/code]

LoopbackSource.cpp

[code]#include "LoopbackSource.h"

LoopbackSource::LoopbackSource (int channelIndexToProcess)
{
    channelToProcess = channelIndexToProcess;
    localAudioBuffer = new AudioSampleBuffer (1, 100);
}

LoopbackSource::~LoopbackSource()
{
    delete localAudioBuffer;
}

void LoopbackSource::prepareToPlay (int samplesPerBlock, double sampleR)
{
    samplesPerBlockExpected = samplesPerBlock;
    sampleRate = sampleR;

    localAudioBuffer->setSize (1, samplesPerBlockExpected);
}

void LoopbackSource::releaseResources()
{
}

void LoopbackSource::getNextAudioBlock (const AudioSourceChannelInfo& bufferToFill)
{
    // clear local buffer
    localAudioBuffer->clear();

    // copy the channel to be processed into our local buffer
    localAudioBuffer->copyFrom (0, 0, *bufferToFill.buffer, channelToProcess,
                                bufferToFill.startSample, bufferToFill.numSamples);

    // clear the buffer we were handed
    bufferToFill.clearActiveBufferRegion();

    // copy our audio channel to all channels for playback
    for (int i = 0; i < bufferToFill.buffer->getNumChannels(); i++)
        bufferToFill.buffer->copyFrom (i, bufferToFill.startSample, *localAudioBuffer,
                                       0, 0, bufferToFill.numSamples);
}[/code]

My AudioCallback has an AudioSourcePlayer and a MixerAudioSource. In its constructor I initialize the audio device as in the demo. The callback just forwards to the source player’s callback, and in audioDeviceAboutToStart I create my LoopbackSources and add them to the mixer.

[code]AudioCallback::AudioCallback()
{
    // some button stuff and so on…

    const String error (audioDeviceManager.initialise (0,      // no input channels
                                                        0,      // no output channels
                                                        0,      // no XML settings
                                                        true,   // select default device on failure
                                                        String ("*Hammerfall*")));  // use Hammerfall as default device
    if (error.isNotEmpty())
    {
        AlertWindow::showMessageBox (AlertWindow::WarningIcon,
                                     T("WaveFieldSynthesis"),
                                     T("Could not open an audio device!\n\n") + error);
    }
    else
    {
        // connect the mixer to the source player
        audioSourcePlayer.setSource (&mixerSource);

        // start the IO device pulling its data from our callback
        audioDeviceManager.setAudioCallback (this);
    }
}

void AudioCallback::audioDeviceIOCallback (const float** inputChannelData,
                                           int totalNumInputChannels,
                                           float** outputChannelData,
                                           int totalNumOutputChannels,
                                           int numSamples)
{
    // pass the audio callback on to our source player
    audioSourcePlayer.audioDeviceIOCallback (inputChannelData, totalNumInputChannels,
                                             outputChannelData, totalNumOutputChannels,
                                             numSamples);
}

void AudioCallback::audioDeviceAboutToStart (AudioIODevice* device)
{
    for (int i = 0; i < device->getActiveInputChannels().countNumberOfSetBits(); i++)
    {
        // create a LoopbackSource for this input channel and add it to the mixer
        mixerSource.addInputSource (new LoopbackSource (i), true);
    }
}

void AudioCallback::audioDeviceStopped()
{
    audioSourcePlayer.audioDeviceStopped();
    mixerSource.removeAllInputs();
}[/code]

If anybody can see my mistake, please tell me - I’m really stuck here. I had some problems before using AudioProcessors and, after finding no solution (see my latest post here), switched back to this approach.

As I said, I need to create multiple processors, each processing one mono input according to some fixed data read from an XML file and surround-panning information received via MIDI, and sending it to all active outputs.
Can somebody at least give me a hint whether my approach sounds promising? As I understand it, AudioSources should be capable of doing what I want.

Thanks for your help,

Ingo


#5

I can’t see where you’re calling the source’s prepareToPlay() method - maybe it’s just not been prepared properly?


#6

The documentation says the AudioSourcePlayer will call the prepareToPlay() of its source - here the MixerAudioSource.
The mixer’s prepareToPlay(), it says, will call its sources’ prepareToPlay() if the mixer is not running. Since I’m adding my sources in AudioIODeviceCallback::audioDeviceAboutToStart, the mixer is stopped at that point, isn’t it?

I just added a DBG message to the AudioSource’s prepareToPlay() - it gets called and all the variables are as expected.
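
The check was just something along these lines (the message format here is made up, but it shows which values arrive):

[code]void LoopbackSource::prepareToPlay (int samplesPerBlock, double sampleR)
{
    samplesPerBlockExpected = samplesPerBlock;
    sampleRate = sampleR;

    localAudioBuffer->setSize (1, samplesPerBlockExpected);

    // debug output added for this test
    DBG (String ("LoopbackSource ") + String (channelToProcess)
         + String (" prepared: blockSize=") + String (samplesPerBlockExpected)
         + String (", sampleRate=") + String (sampleRate));
}[/code]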

And as I said, activating just one input channel works fine; the problem occurs when adding the second source…


#7

Well, it’s too complex for me to be able to see any obvious mistakes… If I were debugging this myself, I’d set some breakpoints and step through the entire audio callback process, watching what happens to the channels as each source operates on the data - that’s bound to show you what’s happening.


#8

All right, I tried to follow the callback and had a look at bufferToFill.buffer in my AudioSources by doing this before any processing:

[code]DBG (String ("\nChannel ") + String (channelToProcess)
     + String (" bufferToFill.buffer data before processing:"));

for (int i = 0; i < bufferToFill.buffer->getNumChannels(); i++)
{
    DBG (String (*bufferToFill.buffer->getSampleData (i)));
}[/code]

I got this with 4 ins and 4 outs:

Channel 0 bufferToFill.buffer data before processing:
0.0251590014
-0.0335490704
0.0419334173
-0.0993336439

Channel 1 bufferToFill.buffer data before processing:
-431602080
-431602080
-431602080
-431602080

Channel 2 bufferToFill.buffer data before processing:
-431602080
-431602080
-431602080
-431602080

Channel 3 bufferToFill.buffer data before processing:
-431602080
-431602080
-431602080
-431602080

It doesn’t depend on which input channel is active: the source for the first active input sees correct data, the second and all further ones don’t. So I think the input data arriving in the callback is OK, right?
Just to be sure: using AudioSources to process audio data is OK in principle, as I understood it?


#9

Well, when you have a mixer source, it just passes the same i/o buffer to all its inputs, so if the first one overwrites the buffer with its output, that’s the data that the second one will get - maybe that’s what you’re seeing here?
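
Roughly the gist, as a simplified illustration (this is not the actual MixerAudioSource code, just the shape of the problem):

[code]// every source is handed a buffer derived from the same i/o block, so once the
// first source has written its output, the original device input is gone
void mixAllInputs (const Array<AudioSource*>& sources, const AudioSourceChannelInfo& info)
{
    for (int i = 0; i < sources.size(); ++i)
        sources[i]->getNextAudioBlock (info);   // sources 1..n see source 0's output, not the input
}[/code]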


#10

Since the first AudioSource just copies one channel to all the others, the buffer’s channels should all contain the same data after the first one has done its getNextAudioBlock - and that should be valid sample data (ranging from -1 to 1)…

But besides that: if the mixer passes the same buffer to all sources, then my whole approach of creating multiple AudioSources to process the input is wrong. True?

So I would have to write my own mixer which creates multiple buffers to pass to its sources and sums them up later… or use another approach…

Any better suggestions? As I explained above, I’ve got several mono inputs and have to process them according to some received MIDI data. The mono inputs will then be played to multiple outputs; some inputs may be streamed to the same outputs.
My actual task was supposed to be building the algorithm for choosing the outputs, adjusting gain and delay and pitching the audio, but I’m totally stuck creating the proper environment in which to do this…

I really want to thank you for your help, Jules. Unfortunately I don’t have anyone else to bother :wink:


#11

Well, if none of your sources were producing any output, then you’d be ok. TBH the mixer should probably deal with this by keeping a cached copy of the input and passing it to each of its sources - I think I mainly designed it for mixing outputs rather than for processing input data.


#12

OK, I added an additional AudioSampleBuffer to the MixerAudioSource to cache the input data, and now it works. Just in case anybody else runs into this problem, here is my code for the InputCachingMixerAudioSource:

InputCachingMixerAudioSource.h

[code]#include "juce.h"

//==============================================================================
/**
    An AudioSource that mixes together the output of a set of other AudioSources
    and feeds them all with the input data.

    Same as MixerAudioSource, it just has an extra AudioSampleBuffer for caching
    the input data in order to provide it to all attached AudioSources.
    Input sources can be added and removed while the mixer is running, as long as
    their prepareToPlay() and releaseResources() methods are called before and
    after adding them to the mixer.
*/
class InputCachingMixerAudioSource : public AudioSource
{
public:
//==============================================================================
/** Creates an InputCachingMixerAudioSource. */
InputCachingMixerAudioSource();

/** Destructor. */
~InputCachingMixerAudioSource();

//==============================================================================
/** Adds an input source to the mixer.

    If the mixer is running you'll need to make sure that the input source
    is ready to play by calling its prepareToPlay() method before adding it.
    If the mixer is stopped, then its input sources will be automatically
    prepared when the mixer's prepareToPlay() method is called.

    @param newInput             the source to add to the mixer
    @param deleteWhenRemoved    if true, then this source will be deleted when
                                the mixer is deleted or when removeAllInputs() is
                                called (unless the source is previously removed
                                with the removeInputSource method)
*/
void addInputSource (AudioSource* newInput,
                     const bool deleteWhenRemoved);

/** Removes an input source.

    If the mixer is running, this will remove the source but not call its
    releaseResources() method, so the caller might want to do this manually.

    @param input            the source to remove
    @param deleteSource     whether to delete this source after it's been removed
*/
void removeInputSource (AudioSource* input,
                        const bool deleteSource);

/** Removes all the input sources.

    If the mixer is running, this will remove the sources but not call their
    releaseResources() method, so the caller might want to do this manually.

    Any sources which were added with the deleteWhenRemoved flag set will be
    deleted by this method.
*/
void removeAllInputs();

//==============================================================================
/** Implementation of the AudioSource method.

    This will call prepareToPlay() on all its input sources.
*/
void prepareToPlay (int samplesPerBlockExpected, double sampleRate);

/** Implementation of the AudioSource method.

    This will call releaseResources() on all its input sources.
*/
void releaseResources();

/** Implementation of the AudioSource method. */
void getNextAudioBlock (const AudioSourceChannelInfo& bufferToFill);


//==============================================================================
juce_UseDebuggingNewOperator

private:
//==============================================================================
VoidArray inputs;
BitArray inputsToDelete;
CriticalSection lock;
AudioSampleBuffer tempBuffer;
AudioSampleBuffer cacheBuffer;
double currentSampleRate;
int bufferSizeExpected;

InputCachingMixerAudioSource (const InputCachingMixerAudioSource&);
const InputCachingMixerAudioSource& operator= (const InputCachingMixerAudioSource&);

};[/code]

InputCachingMixerAudioSource.cpp

[code]#include "juce.h"
#include “InputCachingMixerAudioSource.h”

//==============================================================================
InputCachingMixerAudioSource::InputCachingMixerAudioSource()
    : tempBuffer (2, 0),
      cacheBuffer (2, 0),
      currentSampleRate (0.0),
      bufferSizeExpected (0)
{
}

InputCachingMixerAudioSource::~InputCachingMixerAudioSource()
{
    removeAllInputs();
}

//==============================================================================
void InputCachingMixerAudioSource::addInputSource (AudioSource* input, const bool deleteWhenRemoved)
{
    if (input != 0 && ! inputs.contains (input))
    {
        lock.enter();
        double localRate = currentSampleRate;
        int localBufferSize = bufferSizeExpected;
        lock.exit();

        if (localRate != 0.0)
            input->prepareToPlay (localBufferSize, localRate);

        const ScopedLock sl (lock);

        inputsToDelete.setBit (inputs.size(), deleteWhenRemoved);
        inputs.add (input);
    }
}

void InputCachingMixerAudioSource::removeInputSource (AudioSource* input, const bool deleteInput)
{
    if (input != 0)
    {
        lock.enter();
        const int index = inputs.indexOf ((void*) input);

        if (index >= 0)
        {
            inputsToDelete.shiftBits (index, 1);
            inputs.remove (index);
        }

        lock.exit();

        if (index >= 0)
        {
            input->releaseResources();

            if (deleteInput)
                delete input;
        }
    }
}

void InputCachingMixerAudioSource::removeAllInputs()
{
    lock.enter();
    VoidArray inputsCopy (inputs);
    BitArray inputsToDeleteCopy (inputsToDelete);
    inputs.clear();
    lock.exit();

    for (int i = inputsCopy.size(); --i >= 0;)
        if (inputsToDeleteCopy[i])
            delete (AudioSource*) inputsCopy[i];
}

void InputCachingMixerAudioSource::prepareToPlay (int samplesPerBlockExpected, double sampleRate)
{
    tempBuffer.setSize (2, samplesPerBlockExpected);
    cacheBuffer.setSize (2, samplesPerBlockExpected);

    const ScopedLock sl (lock);

    currentSampleRate = sampleRate;
    bufferSizeExpected = samplesPerBlockExpected;

    for (int i = inputs.size(); --i >= 0;)
        ((AudioSource*) inputs.getUnchecked(i))->prepareToPlay (samplesPerBlockExpected,
                                                                sampleRate);
}

void InputCachingMixerAudioSource::releaseResources()
{
    const ScopedLock sl (lock);

    for (int i = inputs.size(); --i >= 0;)
        ((AudioSource*) inputs.getUnchecked(i))->releaseResources();

    tempBuffer.setSize (2, 0);
    cacheBuffer.setSize (2, 0);

    currentSampleRate = 0;
    bufferSizeExpected = 0;
}

void InputCachingMixerAudioSource::getNextAudioBlock (const AudioSourceChannelInfo& info)
{
    const ScopedLock sl (lock);

    if (inputs.size() > 0)  // mixer has at least one source attached
    {
        // cache the device input so every attached source gets the original input
        // data rather than the previous source's output. The assignment operator
        // automatically resizes cacheBuffer to match info.buffer.
        cacheBuffer = *info.buffer;

        // the first source processes the i/o buffer in place
        ((AudioSource*) inputs.getUnchecked(0))->getNextAudioBlock (info);

        if (inputs.size() > 1)  // mixer has more than one source attached
        {
            AudioSourceChannelInfo info2;
            info2.buffer = &tempBuffer;
            info2.numSamples = info.numSamples;
            info2.startSample = info.startSample;

            for (int i = 1; i < inputs.size(); ++i)
            {
                // hand this source a fresh copy of the cached input (done before the
                // call, so it never sees another source's output)
                tempBuffer = cacheBuffer;

                ((AudioSource*) inputs.getUnchecked(i))->getNextAudioBlock (info2);

                // mix this source's output into the main buffer
                for (int chan = 0; chan < info.buffer->getNumChannels(); ++chan)
                    info.buffer->addFrom (chan, info.startSample, tempBuffer,
                                          chan, info2.startSample, info.numSamples);
            }
        }
    }
    else
    {
        // mixer has no inputs, so return an empty buffer to the AudioSourcePlayer
        info.clearActiveBufferRegion();
    }
}[/code]
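
To use it, the only change to the AudioCallback from post #4 is the type of the mixer member; the wiring (setSource, addInputSource and so on) stays exactly the same. A sketch of the declaration:

[code]// in the AudioCallback class declaration
InputCachingMixerAudioSource mixerSource;   // was: MixerAudioSource mixerSource;
AudioSourcePlayer audioSourcePlayer;[/code]

The trick is simply that getNextAudioBlock() takes a copy of the device input into cacheBuffer before the first source runs, and hands every further source a fresh copy of it via tempBuffer, so each source sees the original input instead of the previous source's output.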