ChannelRemappingAudioSource - prepareToPlay not called - is this intended?

Hey everybody,

I’m using a ChannelRemappingAudioSource in my audio app (not a plugin). I’m facing an issue where my output always clips when I play a 0 dB 1 kHz sine tone, which shouldn’t clip. While struggling to find the cause in my code, I stumbled upon the prepareToPlay method of my custom audio channel class, which is called Slot:

void Slot::prepareToPlay(int samplesPerBlockExpected, double sampleRate)
{
   DBG(">>>>>>> I'm in prepareToPlay in SLOT <<<<<");
   transportSource_.addChangeListener(this);
   transportSource_.prepareToPlay(samplesPerBlockExpected, sampleRate);
   outputMeterSource_.resize(2, sampleRate * 0.05 / samplesPerBlockExpected);
   DBG("This is TransportSource Gain: " << transportSource_.getGain());
}

The Slot inherits from AudioSource and can be attached to the ChannelRemappingAudioSource:

#pragma once
#include <JuceHeader.h>
#include "AudioRecorder.h"

/*
  ==============================================================================
	Class for handling recording and playback. It creates an AudioTransportSource
	for playback and a custom AudioRecorder for recording.
  ==============================================================================
*/

class Slot : public juce::AudioSource,
	public juce::ValueTree::Listener,
	public juce::ChangeListener,
	private juce::Timer
{
public:
	Slot(juce::AudioDeviceManager& deviceManager, juce::ValueTree& vT, int channelNumber);
	~Slot() override;

	void changeListenerCallback(juce::ChangeBroadcaster* source) override;
	void prepareToPlay(int samplesPerBlockExpected, double sampleRate) override;
	void getNextAudioBlock(const juce::AudioSourceChannelInfo& bufferToFill) override;
	void releaseResources() override;
	void valueTreePropertyChanged(juce::ValueTree& treeWhosePropertyHasChanged, const juce::Identifier& property) override;
	void valueTreeChildAdded(juce::ValueTree&, juce::ValueTree&) override {};
	void valueTreeChildRemoved(juce::ValueTree& parentTree, juce::ValueTree& childWhichHasBeenRemoved, int indexFromWhichChildWasRemoved) override;
	void valueTreeChildOrderChanged(juce::ValueTree&, int, int) override {};
	void valueTreeParentChanged(juce::ValueTree&) override {};
	void valueTreeRedirected(juce::ValueTree&) override {};
	void timerCallback() override;
	foleys::LevelMeterSource& getInputMeterSource() { return audioRecorder_->getLevelMeterSource(); }
	foleys::LevelMeterSource& getOutputMeterSource() { return outputMeterSource_; }
	

private:
	//Utility Members
	juce::CriticalSection slotLock;
	std::unique_ptr<juce::AudioFormatReaderSource> readerSource;
	juce::AudioFormatManager formatManager;
	juce::AudioDeviceManager& deviceManager_;
	
	//File Handling
	const juce::File dir_;
	std::unique_ptr<juce::File> file_;
	juce::String currentFileName_;
	
	//Playback and Recording
	juce::AudioTransportSource transportSource_;
	std::unique_ptr<AudioRecorder> audioRecorder_;
	
	//Utility Members
	int channelNumber_;

	juce::ValueTree channelNode_;

	//Utility Functions
	void setNextSource(juce::File& file);

	//LVL Meter
	foleys::LevelMeterSource inputMeterSource_;
	foleys::LevelMeterSource outputMeterSource_;
	
	//Properties for Key - Value Pair recognition
	static const juce::Identifier propertyState;
	static const juce::Identifier propertyChannel;
	static const juce::Identifier propertyStereo;
	static const juce::Identifier propertyInput;
	static const juce::Identifier propertyOutput;
	static const juce::Identifier propertyInputGain;
	static const juce::Identifier propertyOutputGain;
	static const juce::Identifier propertyTransportTime;
};

In my MainComponent I attach the Slot to the ChannelRemappingAudioSource whenever the user decides to create another Slot (audio channel).
crasArray_ is an OwnedArray of ChannelRemappingAudioSources (hence the name "cras").

void MainComponent::valueTreeChildAdded(juce::ValueTree& parentTree, juce::ValueTree& childWhichHasBeenAdded)
{
	if (childWhichHasBeenAdded == vT_Channel_)
	{
		auto slot = new Slot(otherDeviceManager, vT_Channel_, crasArray_.size());
		SlotComponent* slotComponent = new SlotComponent(otherDeviceManager, vT_Channel_, slotComponents_.size(), slot->getInputMeterSource(), slot->getOutputMeterSource());
		addAndMakeVisible(slotComponent);
		slotComponents_.isEmpty() ? slotComponent->setBounds(0, 0, slotWidth_, slotHeight_) : slotComponent->setBounds(slotComponents_.getLast()->getRight(), 0, slotWidth_, slotHeight_);
		slotComponents_.add(slotComponent);
		crasArray_.add(new juce::ChannelRemappingAudioSource(slot, true));
		if (!slotComponents_.isEmpty())
			slotComponents_.getLast()->setStereoAvailable(false);
		vT_Channel_.setProperty(propertyNumber, crasArray_.size(), nullptr);

		for (int i = 0; i < vT_ChannelList_.getChild(0).getNumChildren(); ++i)
		{
			bool selectable = vT_ChannelList_.getChild(0).getChild(i).getPropertyAsValue(propertyIsSelectable, nullptr).getValue();
			if (!slotComponents_.isEmpty())
				slotComponents_.getLast()->setInputChannelSelectable(i, selectable);
		}

		for (int j = 0; j < vT_ChannelList_.getChild(1).getNumChildren(); ++j)
		{
			bool selectable = vT_ChannelList_.getChild(1).getChild(j).getPropertyAsValue(propertyIsSelectable, nullptr).getValue();
			if (!slotComponents_.isEmpty())
				slotComponents_.getLast()->setOutputChannelSelectable(j, selectable);
		}

		if (otherDeviceManager.getCurrentAudioDevice() != nullptr)
		{
			prepareToPlay(otherDeviceManager.getCurrentAudioDevice()->getCurrentBufferSizeSamples(), otherDeviceManager.getCurrentAudioDevice()->getCurrentSampleRate());
		}
		resized();
	}
}

And here are the prepareToPlay and getNextAudioBlock methods of my MainComponent:

void MainComponent::prepareToPlay(int samplesPerBlockExpected, double sampleRate)
{
	if (!crasArray_.isEmpty())
	{
		for (juce::ChannelRemappingAudioSource* cras : crasArray_)
		{
			cras->prepareToPlay(samplesPerBlockExpected, sampleRate);
		}
	}
}

void MainComponent::getNextAudioBlock(const juce::AudioSourceChannelInfo& bufferToFill)
{
	//Fill Buffer with Audio
	for (int i = 0; i < crasArray_.size(); ++i)
	{
		juce::AudioSourceChannelInfo slotBufferInfo;
		copyBuffer.reset(new juce::AudioBuffer<float>(bufferToFill.buffer->getNumChannels(), bufferToFill.numSamples));
		copyBuffer->clear();
		inBuffer.reset(new juce::AudioBuffer<float>(bufferToFill.buffer->getArrayOfWritePointers(), bufferToFill.buffer->getNumChannels(), bufferToFill.buffer->getNumSamples()));
		slotBufferInfo.buffer = copyBuffer.get();
		slotBufferInfo.startSample = 0;
		slotBufferInfo.numSamples = bufferToFill.numSamples;
		crasArray_[i]->getNextAudioBlock(slotBufferInfo);
		rmsLvlLeft_ = juce::Decibels::decibelsToGain(slotBufferInfo.buffer->getRMSLevel(0, slotBufferInfo.startSample, slotBufferInfo.numSamples));
		rmsLvlRight_ = juce::Decibels::decibelsToGain(slotBufferInfo.buffer->getRMSLevel(1, slotBufferInfo.startSample, slotBufferInfo.numSamples));
		for (int channel = 0; channel < bufferToFill.buffer->getNumChannels(); ++channel)
		{
			bufferToFill.buffer->addFrom(channel, bufferToFill.startSample, *slotBufferInfo.buffer, channel, 0, slotBufferInfo.numSamples);
			outBuffer.reset(new juce::AudioBuffer<float>(bufferToFill.buffer->getArrayOfWritePointers(), bufferToFill.buffer->getNumChannels(), bufferToFill.buffer->getNumSamples()));
		}
	}
}

I thought the ChannelRemappingAudioSource forwards the prepareToPlay call to the attached Slots, so that it gets called there. I’ve been using it like this for some time and never thought about checking whether it really prepares to play. It works like this, and did before, but I thought AudioSources need to be prepared in order to work? Or is the Slot effectively prepared, even though its own prepareToPlay is never called, simply because the ChannelRemappingAudioSource is prepared? Can somebody explain why this works without the prepareToPlay of my Slot being called?
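For reference, when I look at juce_ChannelRemappingSource.cpp, prepareToPlay seems to simply forward the call to the wrapped source (paraphrased from memory, so please correct me if I’m misreading it):

void ChannelRemappingAudioSource::prepareToPlay (int samplesPerBlockExpected, double sampleRate)
{
    // just hands the call on to whatever source was passed into the constructor
    source->prepareToPlay (samplesPerBlockExpected, sampleRate);
}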

That also brings me to my next question, which I still can’t answer: does the ChannelRemappingAudioSource compensate for stereo/mono? My app is behaving strangely at the moment. When I start the app, add a slot, route the slot to the inputs and outputs of my interface, and load a 1 kHz sine wave at 0 dB with a 192 kHz sample rate, it always clips my output. I can’t see the exact amount, since my Fireface only reports OVL without a value, but I assume it is exactly 6 dB. So at first it doesn’t compensate, I guess. If I then switch to stereo (I have a feature to expand the ChannelRemappingAudioSource to stereo), the level is set perfectly: my Fireface reports exactly 0 dB, which is also what it reports when I open the sine file with the Microsoft Media Player. When I revert it back to mono, the level still fits and it no longer clips my output.
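To get an exact figure instead of just the OVL light, I’m thinking of logging the peak of the final mix at the end of MainComponent::getNextAudioBlock, roughly like this (just a quick debugging sketch, not meant to stay in the audio callback):

// quick check of the summed output level, placed after the crasArray_ loop
auto peak = bufferToFill.buffer->getMagnitude (bufferToFill.startSample, bufferToFill.numSamples);
DBG ("Output peak: " << juce::Decibels::gainToDecibels (peak) << " dBFS");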
In the beginning I thought: great, I just need to look at what happens when I hit the stereo button in the app and apply the same thing during initialisation, so that the level is correct from the beginning. But it turns out, after searching for days now, that I can’t find anything that differs from the initialisation. So I’m digging deeper at the moment. And the more I poke at my app, the more unnecessary code I find. Of course I test right after each change, but the fact that a mono output always clips after initialisation hasn’t disappeared.
While searching and debugging I stumbled upon the fact that the prepareToPlay method is not being called, and now I’m wondering why. Am I meant to call it manually?

When I checked the docs for the ChannelRemappingAudioSource again, I saw that the important methods (prepareToPlay, getNextAudioBlock, …) are actually virtual. That made me wonder: should I write my own class that inherits from ChannelRemappingAudioSource? Is that the reason why it doesn’t call my prepareToPlay method? Thinking about it further, if my Slot inherited from ChannelRemappingAudioSource, I could spare some lines in my MainComponent. On the other hand, I’m a bit afraid to change my app logic there. It was working fine apart from the level issue I’m facing right now, and I can’t really say that this issue would disappear if I changed the logic. But asked generally: how is the ChannelRemappingAudioSource designed to be used? Should I inherit from it, or just use the JUCE class as it is?
In general I realise that I’m still struggling to understand when JUCE wants me to inherit from a class and when not. I come from Java, where the explicit interface keyword makes it clear whether I have to implement something; the C++ approach leaves more of that decision to me. So it boils down to this one question: if a JUCE class has virtual methods, do I have to inherit from it?

I hope somebody can help me out. I’m slowly running out of ideas about what’s causing my issue, and I hope I’m in the right spot here.

When you add an AudioSource as the source of an existing AudioSource, the prepareToPlay of the existing AudioSource has most likely already happened. That’s why the prepareToPlay of your MainComponent couldn’t propagate.

What I usually do is to call prepareToPlay on the new AudioSource before adding it as source.
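Roughly like this, untested, reusing the device lookup you already have in valueTreeChildAdded:

auto* slot = new Slot (otherDeviceManager, vT_Channel_, crasArray_.size());

// prepare the slot first, so it is ready the moment the remapping source pulls audio from it
if (auto* device = otherDeviceManager.getCurrentAudioDevice())
    slot->prepareToPlay (device->getCurrentBufferSizeSamples(),
                         device->getCurrentSampleRate());

crasArray_.add (new juce::ChannelRemappingAudioSource (slot, true));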

Ideally you should only inherit from juce::AudioSource and juce::PositionableAudioSource; those are meant as interfaces.
The other AudioSources should be considered final, and if you must inherit from them, pay close attention to how you change the behaviour when overriding the virtual functions. In C++ the methods of the base class are not called automatically.
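For example, if you did subclass it (MyRemappingSource is just a made-up name here), an override would have to call through to the base class itself, otherwise the wrapped source never gets prepared:

void MyRemappingSource::prepareToPlay (int samplesPerBlockExpected, double sampleRate)
{
    // the base class forwards this to the wrapped source - skip it and nothing gets prepared
    juce::ChannelRemappingAudioSource::prepareToPlay (samplesPerBlockExpected, sampleRate);

    // ...your own preparation here...
}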

Awesome @daniel :), that made the inheritance clear to me, and also the way prepareToPlay works with AudioSources. Since I now call prepareToPlay before adding the source to the CRAS, prepareToPlay in my AudioSource is called.

The clipping is still there, but I think it has something to do with how I initialise a mono channel. My CRAS, the Slot and the approach I follow look correct to me, so I think the issue is somewhere in the details.