AudioTransportSource::getNextAudioBlock() reading 8 channel file yields a buffer with only 6 channels

Since this question came up earlier, and actually I was missing something like this earlier, I wrote an example here on GitHub, but I haven’t tested it yet:

Thanks.

And apologies for being dumb here, but I’m not sure what you’re suggesting I do with that. Use it instead of the AudioAppComponent? Or the TransportSource? Etc.

Yes, I am about to write that part too… for now just look at the MultiChannelAudioSource class.

It is an AudioSource, so you can set it as the input to your AudioTransportSource. Many of the AudioSources themselves take an AudioSource as input, so they can form a processing chain.

The difference here is that we dictate the number of channels of the read buffer ourselves, by setting the buffer’s number of channels in setSource.
This example always assumes 2 output channels so far.

And I added a simple matrix as an example that distributes the channels, the way the old backward-compatible analog systems worked.
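To show the matrix idea on its own, here is a minimal framework-free sketch: each output channel is a weighted sum of the input channels. The sizes, names and gains here are my own assumptions for illustration, not the actual code from the linked example.

```cpp
#include <array>
#include <cstddef>

// Hypothetical 8-in / 2-out gain matrix: row = output channel,
// column = input channel.
constexpr std::size_t numIns  = 8;
constexpr std::size_t numOuts = 2;

using Matrix = std::array<std::array<float, numIns>, numOuts>;

// Apply the gain matrix to one block of planar audio.
inline void applyMatrix (const Matrix& m,
                         const float* const* in,   // numIns channel pointers
                         float* const*       out,  // numOuts channel pointers
                         std::size_t numSamples)
{
    for (std::size_t o = 0; o < numOuts; ++o)
        for (std::size_t s = 0; s < numSamples; ++s)
        {
            float acc = 0.0f;
            for (std::size_t i = 0; i < numIns; ++i)
                acc += m[o][i] * in[i][s];
            out[o][s] = acc;
        }
}
```

A more elaborate binaural algorithm would replace the inner weighted sum, but the channel bookkeeping stays the same.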

While your class could be useful for other situations, I think it doesn’t necessarily provide any features needed by illagarr. He just needs to be able to read in the source file with all its channels and pass the 8 channels as they are into some binaural downmixer audio processor.

I think it would be enough for him to add an AudioBuffer instance to his AudioAppComponent subclass, initialize it in prepareToPlay and use it with his transportSource instance in getNextAudioBlock.

Yes, to the latter part.

You don’t really need a custom AudioSource, just an AudioBuffer instance of your own that can hold the number of channels you need to read from the file.

If you really wanted to, you could do a custom AudioSource that incorporates the binaural downmixing processing, but there’s no technical reason you would be required to do that. (It could keep things cleaner for future expansion, for example if you need to use several source audio files at the same time.)
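To make the “source wraps a source” chain concrete, here is a framework-free sketch. Everything in it (Source, EightChannelSource, DownmixSource, Block) is an invented stand-in; in JUCE the interface would be juce::AudioSource and the buffer juce::AudioBuffer<float>.

```cpp
#include <cstddef>
#include <vector>

using Block = std::vector<std::vector<float>>; // [channel][sample]

// Stand-in for juce::AudioSource: anything that can fill a block.
struct Source
{
    virtual ~Source() = default;
    virtual void getNextBlock (Block& b) = 0;
};

// Stand-in for a transport/file reader, producing constant dummy audio.
struct EightChannelSource : public Source
{
    void getNextBlock (Block& b) override
    {
        for (auto& ch : b)
            for (auto& s : ch)
                s = 1.0f;
    }
};

// Wraps another Source: pulls all channels from upstream, averages them
// into the first two channels and silences the rest. This is where a
// binaural algorithm would go instead of the plain average.
struct DownmixSource : public Source
{
    explicit DownmixSource (Source& in) : input (in) {}

    void getNextBlock (Block& b) override
    {
        input.getNextBlock (b); // upstream source fills the buffer first

        const auto numCh = b.size();
        for (std::size_t s = 0; s < b[0].size(); ++s)
        {
            float sum = 0.0f;
            for (const auto& ch : b)
                sum += ch[s];
            b[0][s] = b[1][s] = sum / static_cast<float> (numCh);
            for (std::size_t c = 2; c < numCh; ++c)
                b[c][s] = 0.0f;
        }
    }

    Source& input;
};
```

The point of the wrapper class is exactly the future expansion mentioned above: several file sources could each be wrapped, or swapped, without touching the component.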

Well yes, I just wrote an example downmixer. Instead of the basic matrix, the OP can do his own, probably more elaborate binaural mix algorithm. Just replace this code here:

It was meant as an example…
Btw, this works now, I fixed the bugs.

Right. Obviously also if the need to use several source files ever appears, having the binaural processing already in a separate class would be quite handy! :smiley:

“I don’t want to save the world, I’m not looking for new England…”

I have the binaural downmixing in a separate class, which is why I’d like to get the 8 channels of buffer into my processBlock.

So, if I just wanted to take a simpler approach and implement my own AudioBuffer, where does this AudioBuffer get populated?

That’s a source of confusion for me. I know how to add a buffer, set its size etc., but I’m stuck on how/where that buffer can be populated, or where it can replace the buffer that’s being passed to getNextAudioBlock.

The buffer that is given to AudioAppComponent::getNextAudioBlock is not yours to change; it is whatever it ends up as due to the audio hardware and JUCE’s AudioDeviceManager. You give your own AudioBuffer instance to your transportSource to populate. (You need to go via an AudioSourceChannelInfo instance though, because AudioSources don’t work with AudioBuffers directly.) You then do your binaural processing with your own buffer instance and finally copy the processed audio into the buffer given to AudioAppComponent::getNextAudioBlock. You might want to zero out any channels above 2, since I guess the final output should be only stereo?

Yes, I get that.

yes, I’m doing that currently.

How?

Something like this (not checked for correctness, I wrote this directly in the browser):


// myWorkBuffer would be your AudioBuffer instance as a member variable of your AudioAppComponent subclass, initialized big enough with enough channels elsewhere

AudioSourceChannelInfo myWorkBufferInfo (&myWorkBuffer, bufferToFill.startSample, bufferToFill.numSamples);
transportSource.getNextAudioBlock (myWorkBufferInfo);
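Sketched framework-free (so not JUCE-compilable as-is), the three steps described above — read into your own buffer, process, copy/zero the device buffer — would look something like this. The helper names (fillFromTransport, downmixToStereo) are invented stand-ins for transportSource.getNextAudioBlock and the binaural processor:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

using Block = std::vector<std::vector<float>>; // [channel][sample]

void fillFromTransport (Block& work)           // pretend 8-channel file read
{
    for (auto& ch : work)
        std::fill (ch.begin(), ch.end(), 0.5f);
}

void downmixToStereo (const Block& in, Block& out) // placeholder processing
{
    for (std::size_t s = 0; s < out[0].size(); ++s)
    {
        float sum = 0.0f;
        for (const auto& ch : in)
            sum += ch[s];
        out[0][s] = out[1][s] = sum / static_cast<float> (in.size());
    }
}

void getNextAudioBlock (Block& deviceBuffer, Block& work)
{
    fillFromTransport (work);             // 1) read 8 channels into own buffer
    downmixToStereo (work, deviceBuffer); // 2) binaural/downmix processing
    for (std::size_t c = 2; c < deviceBuffer.size(); ++c)
        std::fill (deviceBuffer[c].begin(), // 3) silence channels above stereo
                   deviceBuffer[c].end(), 0.0f);
}
```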

That’s exactly what I showed in my example…

I can sort that.

I would do this before I call the transportSource.start() function?

No, that code goes into your component’s getNextAudioBlock…

So best of class or minimal example now?

SCNR

I think I have to write the minimal example project, this thread is getting way too long and confusing. :grimacing: (I don’t have a binaural processing thing handy but I guess I can mock it up somehow…)

This must be exasperating for you guys! I’m sorry about that.

So, I added an AudioBuffer: AudioBuffer<float> bufferIn8chan;

In prepareToPlay, I set it to 8 channels: bufferIn8chan.setSize (8, samplesPerBlockExpected);

as per the example:
AudioSourceChannelInfo myWorkBufferInfo (&bufferIn8chan, bufferToFill.startSample, bufferToFill.numSamples);
transportSource.getNextAudioBlock (myWorkBufferInfo);

That gives me an 8-channel buffer.

Thanks and appreciation to you both.

That’s great, congrats :slight_smile: