AudioTransportSource::getNextAudioBlock() reading an 8-channel file yields a buffer with only 6 channels

I’m hoping to use an AudioTransportSource to read an 8-channel interleaved .wav file, but the buffer it offers up in getNextAudioBlock() only has 6 channels!

How do I get the buffer to have all 8 channels?

Reader/transport code:

	auto* reader = formatManager.createReaderFor(File(start.filename));

	if (reader != nullptr)
	{
		std::unique_ptr<AudioFormatReaderSource> newSource(new AudioFormatReaderSource(reader, true));
		newSource->setLooping(start.isLooping);
		// last argument is maxNumChannels for the transport's internal buffer
		transportSource.setSource(newSource.get(), 0, nullptr, reader->sampleRate, 8);
		transportSource.start();
		readerSource.reset(newSource.release());
	}

btw, if I look at the reader’s raw object in the debugger, I can see the file has 8 channels

this is a GUI-less DLL (which works great with a 2-channel file)

How does it manifest? Does the buffer have just 6 channels, or are channels 7 and 8 silent?

The buffer (const AudioSourceChannelInfo& bufferToFill) only has 6 channels.

How many channels does the buffer you give to transportSource.getNextAudioBlock have?

6. I’m giving bufferToFill to

transportSource.getNextAudioBlock(bufferToFill);

Then the behavior seems as expected. The transport source isn’t going to resize the buffer’s channel count. It just fills the given buffer with as many channels as that buffer has, even if the file source has more.

OK, how would I resize that buffer?

Or have I hit a limit?

You need to have your own AudioBuffer with enough channels and use the TransportSource with that. (Or whatever is giving you the buffer with just 6 channels could maybe be configured to have 8 channels…? But it’s probably best you have your own AudioBuffer anyway.)
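
For example, roughly like this (just a sketch; multiBuffer here is a hypothetical AudioBuffer<float> of your own that you’ve already sized to 8 channels):

	// Let the transport fill your own 8-channel buffer instead
	AudioSourceChannelInfo multiInfo(&multiBuffer, 0, bufferToFill.numSamples);
	transportSource.getNextAudioBlock(multiInfo);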

Thanks for your help so far @Xenakios

So, at the moment, I think I’m just using a default buffer that is created by the transportSource, would that be the case? I presume when transportSource.start() is called, it sets up the buffer that is passed into getNextAudioBlock for the first time?

If so, how would I go about giving it another buffer?

The transportSource has its internal buffer that is set up separately, but when you request it to produce audio with getNextAudioBlock, it fills the buffer that is passed in. Where do you get that buffer from? What kind of a JUCE project are you using this stuff in?

This is a dynamic DLL, based on the AudioAppComponent.

I’m not certain where the original buffer is coming from. My assumption was always that transportSource.start() was populating the initial buffer and calling getNextAudioBlock(const AudioSourceChannelInfo& bufferToFill). I’m then using transportSource.getNextAudioBlock(bufferToFill); before passing into a processBlock():

void getNextAudioBlock(const AudioSourceChannelInfo& bufferToFill)
{
	if (readerSource.get() == nullptr)
	{
		bufferToFill.clearActiveBufferRegion();
		return;
	}

	transportSource.getNextAudioBlock(bufferToFill);

	// Wrap the filled region in a buffer for processing
	AudioBuffer<float> procBuf(bufferToFill.buffer->getArrayOfWritePointers(),
		bufferToFill.buffer->getNumChannels(),  // <- this is 6 too
		bufferToFill.startSample,
		bufferToFill.numSamples);
	MidiBuffer midi;
	processBlock(procBuf, midi);
}

When audio is pulled from a chain of AudioSources, the one closest to the output prepares the buffer, which is handed recursively through the chain. Usually this is an AudioSourcePlayer, which gets the number of channels from the AudioIODevice it is playing to.

If you need a different number of channels, you need some kind of multiplexer, e.g. ChannelRemappingAudioSource, or call getNextAudioBlock yourself with an appropriate pre-allocated buffer.

EDIT: To clarify, the AudioAppComponent aggregates (i.e. has a member of type) AudioSourcePlayer, which pulls audio from the component itself (i.e. calls the getNextAudioBlock() method of the component). The buffer’s channel count is determined by the number of channels requested when opening the device.


The buffer that is in the AudioAppComponent::getNextAudioBlock call has nothing to do with the internal buffer of your transportSource. (The internal buffer determines how many channels at maximum it can read from the source file, but does not affect how many channels it will actually output; that is determined by the buffer given to transportSource.getNextAudioBlock.) The AudioAppComponent::getNextAudioBlock buffer apparently gets initialized to have 6 channels “somewhere”, then you pass it into the transportSource and that fills it with 6 channels.

thanks both for your patience with me.

so this making more sense to me.

When I start my class, which inherits from the AudioAppComponent, setAudioChannels(8, 2); is being called in the constructor. That calls deviceManager.initialise(), which is returning an empty string, so it is not erroring. I’m assuming the device is being set up with 8 channels?

Clearly it isn’t, since you get a buffer with just 6 channels in the AudioAppComponent::getNextAudioBlock call. Note that the maximum channel counts that can be initialized for the AudioAppComponent depend on your audio hardware. Whether you get an error when initializing the device with incompatible channel counts probably depends on the audio subsystem used. (For me, for example, Windows WASAPI is pretty picky about it: the requested channel counts must be supported or JUCE fails to open the device.)

You should not really rely on the audio hardware IO channel counts and should instead use your own AudioBuffer for the processing. At the end of the processing, you should again adapt to the audio hardware channel counts in some way, for example by remapping the channels, by omitting them, or by outputting silence.
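
If you want to check what the device actually opened with, you could log its active channel counts (a quick diagnostic, assuming deviceManager is the AudioAppComponent’s member):

	// See how many channels the AudioIODevice really has active
	if (auto* device = deviceManager.getCurrentAudioDevice())
	{
		DBG("active inputs: " << device->getActiveInputChannels().countNumberOfSetBits());
		DBG("active outputs: " << device->getActiveOutputChannels().countNumberOfSetBits());
	}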

OK. Thanks again. I’m starting to understand more.

I’m a little confused as to where I introduce my own AudioBuffer.

You should add that as a member variable of your AudioAppComponent and set its number of channels and size in prepareToPlay. (You must not have a buffer like that as a local variable in your getNextAudioBlock, because you would be causing memory allocations to happen, which shouldn’t be done during audio processing.)

OK, and then use it in `transportSource.getNextAudioBlock(myNewBufferEtc)`?
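
Roughly, yes. A minimal sketch of the idea (multiChannelBuffer and the hard-coded channel count are placeholders; the buffer is allocated once in prepareToPlay and reused every block):

	AudioBuffer<float> multiChannelBuffer; // member of the AudioAppComponent

	void prepareToPlay(int samplesPerBlockExpected, double sampleRate) override
	{
		transportSource.prepareToPlay(samplesPerBlockExpected, sampleRate);
		multiChannelBuffer.setSize(8, samplesPerBlockExpected); // allocate once, not in the callback
	}

	void getNextAudioBlock(const AudioSourceChannelInfo& bufferToFill) override
	{
		// Pull all 8 channels into our own buffer
		AudioSourceChannelInfo multiInfo(&multiChannelBuffer, 0, bufferToFill.numSamples);
		transportSource.getNextAudioBlock(multiInfo);

		// ...process all 8 channels here...

		// Then adapt to however many channels the hardware buffer has,
		// e.g. by copying just the channels that fit
		const int numOut = jmin(bufferToFill.buffer->getNumChannels(),
		                        multiChannelBuffer.getNumChannels());

		for (int ch = 0; ch < numOut; ++ch)
			bufferToFill.buffer->copyFrom(ch, bufferToFill.startSample,
			                              multiChannelBuffer, ch, 0,
			                              bufferToFill.numSamples);
	}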

I was about to suggest the ChannelRemappingAudioSource again, which seems perfect for the case, but looking into the code, I actually think this class should be avoided: it allocates inside getNextAudioBlock :scream:

So stepping back, what is it that you need all the channels for? Granted, the built-in downmixing is not very intelligent.
At the point in the chain of AudioSources where you want all channels to be present, that is the class I would roll myself, providing a buffer with the correct number of channels and implementing my own downmix.

Let me know if you need help with that.

I’m attempting to develop a binauraliser for 7.1 and other multichannel formats, so the 8 channels do need to be there.

It would be great if you could help me with rolling my own AudioSource.
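
Something like this could be a starting point (an untested sketch: MultiChannelSource and all the names in it are made up, and the naive downmix just wraps extra channels onto the available outputs; you would replace that part with your binaural processing):

	// An AudioSource that pulls every channel of its input into its own
	// pre-allocated buffer, then mixes down into whatever buffer it is given.
	class MultiChannelSource : public AudioSource
	{
	public:
		MultiChannelSource(AudioSource& sourceToUse, int numSourceChannels)
			: source(sourceToUse), numChannels(numSourceChannels) {}

		void prepareToPlay(int samplesPerBlockExpected, double sampleRate) override
		{
			source.prepareToPlay(samplesPerBlockExpected, sampleRate);
			internalBuffer.setSize(numChannels, samplesPerBlockExpected); // allocate up front
		}

		void releaseResources() override { source.releaseResources(); }

		void getNextAudioBlock(const AudioSourceChannelInfo& bufferToFill) override
		{
			// Only reallocates if the block is larger than what we prepared for
			internalBuffer.setSize(numChannels, bufferToFill.numSamples, false, false, true);

			AudioSourceChannelInfo multiInfo(&internalBuffer, 0, bufferToFill.numSamples);
			source.getNextAudioBlock(multiInfo);

			// ...this is where the processing of all channels would go...

			// Naive downmix: wrap source channels onto the available outputs
			bufferToFill.clearActiveBufferRegion();
			const int numOut = bufferToFill.buffer->getNumChannels();

			for (int ch = 0; ch < numChannels; ++ch)
				bufferToFill.buffer->addFrom(ch % numOut, bufferToFill.startSample,
				                             internalBuffer, ch, 0,
				                             bufferToFill.numSamples);
		}

	private:
		AudioSource& source;
		int numChannels;
		AudioBuffer<float> internalBuffer;
	};

You would then create it wrapping the transport, e.g. `MultiChannelSource multiSource { transportSource, 8 };`, and pull audio from multiSource instead of from the transportSource directly.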