How best to read multi-mono wave files into a single audio source

Hello all. I’ve been working with JUCE for a few months now, but so far have relied upon the AudioFormatReaderSource to handle standard mono and/or stereo wave files. I am about to dive into a project where I’ll need to load a set of six mono wave files that represent discrete audio channels in a multi-mic recording and play them back from a single transport source as if they were one multi-channel file. Ultimately I will need mixing capability as well, but for now I just want to build the source. Any advice on how best to approach this would be greatly appreciated.

P.S.

I’m guessing there is a better way than creating six separate sources and trying to play them back independently in sync with each other…

Not really, but you might want to consider using AudioFormatReaders directly in your AudioSource subclass instead of going through additional AudioSources.

There should not be anything particular about syncing them. With AudioFormatReaders (and most of the AudioSource classes in JUCE) you just tell them where in the file to get the audio from and they will predictably do it, at least with file formats like WAV.
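For example, reading the same block from several mono readers might look roughly like this (an untested sketch; `readers` is assumed to be an `OwnedArray<AudioFormatReader>` you filled from your six files, and `startSample` / `numSamples` come from your playback position):

```cpp
// Pull numSamples samples starting at startSample from each mono reader
// into one channel of a shared multi-channel buffer. Because the reads are
// position-based, the channels stay in sample-accurate sync by construction.
AudioBuffer<float> buffer (readers.size(), numSamples);

for (int ch = 0; ch < readers.size(); ++ch)
{
    float* dest[1] = { buffer.getWritePointer (ch) };
    readers.getUnchecked (ch)->read (dest, 1, startSample, numSamples);
}
```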


So you’re saying I can inherit from AudioSource and attach multiple AudioFormatReaders to it? That sounds like a good approach. I’ll give it a go. Thanks for the suggestion!

All right. So I made an attempt at a simple extension to my existing audio player, adding an additional five sources for the other mono files in the set. I attached all six positionable sources to a MixerAudioSource and then control them through the transport. I hear what I believe to be all six files playing together, but there is some odd behavior. Specifically, 1) the sources start playing before the play button is triggered, and 2) the GUI becomes very sluggish until the audio files stop playing.

I think the problem lies in my custom loadAudioAssets() method. I have pasted a simplified example of the relevant bits of the (AudioApp) Component that I’m using to load and play the files. If anybody sees an obvious cause of my problems, please do point out what I can/should/must do differently here.

As always, I greatly appreciate any advice and insights.

void loadAudioAssets(std::vector<File> &audioFileSet)
{
	int counter = 0;

	for (auto audioFileToLoad : audioFileSet)
	{
		auto* reader = formatManager.createReaderFor(audioFileToLoad);

		if (reader == nullptr)
			return;

		std::unique_ptr<AudioFormatReaderSource> newSource(new AudioFormatReaderSource(reader, true));

		switch (counter)
		{
		case 0:
			transportSource.setSource(newSource.get(), 0, nullptr, reader->sampleRate);
			mixerSource.addInputSource(newSource.get(), false);
			readerSource.reset(newSource.release());
			break;

		case 1:
			transportSource2.setSource(newSource.get(), 0, nullptr, reader->sampleRate);
			mixerSource.addInputSource(newSource.get(), false);
			readerSource2.reset(newSource.release());
			break;

			// and so on... (six mono files in the set)

		}

		counter++;
	}
}

void prepareToPlay(int samplesPerBlockExpected, double sampleRate)
{
	mixerSource.prepareToPlay(samplesPerBlockExpected, sampleRate);
}

void getNextAudioBlock(const AudioSourceChannelInfo& bufferToFill)
{
	if (readerSource.get() == nullptr ||
		readerSource2.get() == nullptr ||
		// ...
		readerSource6.get() == nullptr)
	{
		bufferToFill.clearActiveBufferRegion();
		return;
	}

	mixerSource.getNextAudioBlock(bufferToFill);
}

void releaseResources()
{
	mixerSource.releaseResources();
}

void playButtonClicked()
{
	transportSource.setPosition(0);
	transportSource2.setPosition(0);
	// ...

	transportSource.start();
	transportSource2.start();
	// ...
}

void stopButtonClicked()
{
	transportSource.stop();
	transportSource2.stop();
	// ...
}

Can’t you simply use an external tool to convert the files into 6-channel WAV files and load those into your project?
Or: load the files, convert them in your project to a (temporary) 6-track file, and do the playback on that single file?


The audio pipeline works by pulling samples from the end of the chain, which is the AudioIODevice. That is where the number of channels is defined. If you then use the AudioSourcePlayer, which is an AudioIODeviceCallback, it will call getNextAudioBlock() with an AudioBuffer sized for the requested number of channels.

All the AudioSources chained into each other are agnostic about the number of channels, i.e. if the channel counts don’t match, each getNextAudioBlock() tries to remedy that somehow, sometimes mixing down the surplus channels, sometimes just skipping them.

Once you run your AudioSourcePlayer with n channels, it will try to get n channels from the upstream source, which means one AudioTransportSource is enough to control all channels; no array needed. The MixerAudioSource is also counterproductive, since it mixes the channels of its sources together rather than multiplexing them. ChannelRemappingAudioSource comes closest, but it works with only one input source.

It is quite simple to create a MultiChannelAudioSource like @Xenakios suggested. The magic happens in getNextAudioBlock().

I’ll write down a version I believe should work, untested:

class MultiChannelAudioSource : public PositionableAudioSource
{
public:
    MultiChannelAudioSource() = default;

    void loadAudioAssets(std::vector<File> &audioFileSet)
    {
        for (auto audioFileToLoad : audioFileSet)
        {
            if (auto* reader = formatManager.createReaderFor (audioFileToLoad))
            {
                inputReaders.add (new AudioFormatReaderSource (reader, true));
            }
            else 
            {
                jassertfalse;
            }
        }
    }

    int64 getNextReadPosition() override
    {
        if (inputReaders.isEmpty()) return 0;

        return inputReaders.getUnchecked (0)->getNextReadPosition();
    }

    void setNextReadPosition (int64 newPosition) override
    {
        for (auto* reader : inputReaders)
             reader->setNextReadPosition (newPosition);
    }

    void prepareToPlay (int samplesPerBlockExpected, double sampleRate) override
    {
        for (auto* reader : inputReaders)
             reader->prepareToPlay (samplesPerBlockExpected, sampleRate);
    }

    void releaseResources() override
    {
        for (auto* reader : inputReaders)
             reader->releaseResources();
    }

    void setLooping (bool shouldLoop) override
    {
        for (auto* reader : inputReaders)
             reader->setLooping (shouldLoop);
    }

    bool isLooping() const override
    {
        if (! inputReaders.isEmpty())
            return inputReaders.getUnchecked (0)->isLooping();

        return false;
    }

    void getNextAudioBlock (const AudioSourceChannelInfo& bufferToFill) override
    {
        jassert (inputReaders.size() >= bufferToFill.buffer->getNumChannels());

        for (int i=0; i < bufferToFill.buffer->getNumChannels(); ++i)
        {
            AudioBuffer proxyBuffer (&bufferToFill.buffer->getWritePointer (i), 1, bufferToFill.buffer->getNumSamples());
            AudioSourceChannelInfo proxyInfo (&proxyBuffer, bufferToFill.startSample, bufferToFill.numSamples);
            inputReaders.getUnchecked (i)->getNextAudioBlock (proxyInfo);
        }
    }

private:
    OwnedArray<AudioFormatReaderSource> inputReaders;
    JUCE_DECLARE_NON_COPYABLE_WITH_LEAK_DETECTOR (MultiChannelAudioSource)
};

Good luck


Thanks @daniel for the detailed explanation and explicit code example! This seems to be exactly what I was hoping for, a single source with multiple channels. I’ll give it a try and let you know how it goes. Much appreciated!

Thanks also to @peter-samplicity for your reply. I will probably want to do that eventually, streaming the multi-mono file set into a multi-channel wave file format for storage and future re-use. That thought had occurred to me early on, but I have hundreds of thousands of (edited) multi-channel samples to contend with and would need to QA (or at least spot-check) the replacement files. At the moment, there are more pressing matters to attend to.

Again, I greatly appreciate everyone’s input and advice!


@daniel : I’m having trouble with one of the lines of code in your example.

AudioBuffer<float> proxyBuffer(&bufferToFill.buffer->getWritePointer(i), 1, bufferToFill.buffer->getNumSamples());

I’ve been trying for the past hour to research/correct it on my own, but so far am at a loss as to why it’s not working. My only modification was to add <float> to AudioBuffer, which seemed to be required.

As written, I get compiler error “expression must be an lvalue”.

If I remove the & before bufferToFill, or change it to *, I get compiler error “no instance of constructor matches the argument list”.

UPDATE

Actually, I am able to clear the compiler errors by modifying the code so it looks like this:

auto writePointer = bufferToFill.buffer->getWritePointer(i);
AudioBuffer<float> proxyBuffer(&writePointer, 1, bufferToFill.buffer->getNumSamples());

I have yet to put it to an actual use test, but will update the thread accordingly once I have.

Sorry, was not online…
Yes, well done. That was exactly the right thing to do: put the float pointer into a variable whose address you can take. I was wondering whether wrapping the whole expression in brackets would satisfy the address operator, but your solution is the most legible.

Hope it does what it should now 🙂


As a matter of fact, I just finished testing the fully built class. So far, so good. It loads and plays correctly, though I’ve noticed the load time is much slower than with the original AudioTransportSource object I was using.

I’ve limited the input to two channels for now because the audio device I’m currently testing with only has two-channel output, and I saw that you had built in an assertion that gets triggered when the number of output channels exceeds the file vector size. So I still need to downmix to stereo (via MixerAudioSource, I presume) or test with a multichannel output device.

I did run into a snag when trying to use the loadAudioAssets() method a second time to load in a different set of audio files. I’m guessing I need to release the resources first. Or just create a new instance for each set of files I guess.

At any rate, I’m still working through these issues. If I get stuck, I’ll post specific questions with explicit code examples.

Cheers.

There are several possible explanations for that; the most likely is that the AudioTransportSource uses a BufferingAudioSource internally.
You can get the same behaviour by feeding the AudioFormatReaderSource through a BufferingAudioSource, or by using a BufferingAudioReader. This can also be used to spread the reading over several cores.
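A rough sketch of that wrapping (untested; assumes `reader` is a freshly created `AudioFormatReader*`):

```cpp
// A background thread services the read-ahead off the audio thread.
TimeSliceThread readAheadThread ("audio read-ahead");
readAheadThread.startThread();

auto* readerSource = new AudioFormatReaderSource (reader, true);

// Buffer roughly two seconds ahead; the BufferingAudioSource takes
// ownership of readerSource here (deleteSourceWhenDeleted = true).
BufferingAudioSource buffered (readerSource, readAheadThread, true,
                               (int) (2.0 * reader->sampleRate));
```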

The stereo downmix can be done by feeding the MultiChannelAudioSource through a ChannelRemappingAudioSource, where you can define which input channel is summed into which channel of the output.
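Sketched out (untested; `multiSource` stands for the MultiChannelAudioSource from above):

```cpp
// Fold six source channels down to stereo: even channels to the left
// output, odd channels to the right. Channels mapped to the same output
// are summed by the ChannelRemappingAudioSource.
ChannelRemappingAudioSource remap (&multiSource, false);
remap.setNumberOfChannelsToProduce (2);

for (int i = 0; i < 6; ++i)
    remap.setOutputChannelMapping (i, i % 2);
```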

Probably just calling inputReaders.clear(); at the beginning of loadAudioAssets() should be enough. In theory you would call releaseResources() first, but usually the sources release their resources in their destructors anyway.


Thanks for all the helpful hints! I did indeed wind up discovering inputReaders.clear(), which let me load a new source correctly, as you predicted. I’m still chasing down other odd behaviour, though, such as being unable to play from the start of the file after it reaches EOF, despite setting the position back to 0 after stop() and before start(). I’ll look into ChannelRemappingAudioSource and BufferingAudioSource to address the utility and performance issues.

I’ve finished chasing down the bugs, most of which had to do with my calling program. I’ve pasted the complete (working) class below for anyone who is interested in seeing the final result. Much of the code is borrowed from AudioTransportSource, augmented with Daniel’s example code from the earlier post. All methods were tested except for setLooping() and isLooping().

#pragma once
#include "../JuceLibraryCode/JuceHeader.h"

class MultiChannelAudioSource : public PositionableAudioSource,
				public ChangeBroadcaster

{
public:
	//==============================================================================
	/* Constructor */
	MultiChannelAudioSource() = default;

	/* Destructor */
	~MultiChannelAudioSource() override { releaseResources(); }

	//==============================================================================
	/* Creates a set of readers for the multi-mono source data */
	void loadAudioAssets(const std::vector<File> &audioFileSet)
	{
		if (isRegistered == false)
		{
			formatManager.registerBasicFormats();
			isRegistered = true;
		}

		// This deletes any readers created by a previous call to loadAudioAssets()
		// and clears the array for a fresh load of the incoming audioFileSet
		releaseResources();

		for (auto audioFileToLoad : audioFileSet)
		{
			if (auto* reader = formatManager.createReaderFor(audioFileToLoad))
				inputReaders.add(new AudioFormatReaderSource(reader, true));

			else jassertfalse;
		}
	}

	//==============================================================================
	/* setPosition */
	void setPosition(double newPosition)
	{
		if (sampleRate > 0.0)
			setNextReadPosition((int64)(newPosition * sampleRate));
	}

	/* getCurrentPosition */
	double getCurrentPosition() const
	{
		if (this->sampleRate > 0.0)
			return (double)getNextReadPosition() / this->sampleRate;

		return 0.0;
	}

	/* getLengthInSeconds */
	double getLengthInSeconds() const
	{
		if (sampleRate > 0.0)
			return (double)getTotalLength() / sampleRate;

		return 0.0;
	};

	/* hasStreamFinished */
	bool hasStreamFinished() const noexcept { return inputStreamEOF; }

	//==============================================================================
	/* start */
	void start()
	{
		if (! playing && ! inputReaders.isEmpty())
		{
			{
				const ScopedLock sl(callbackLock);
				playing = true;
				stopped = false;
				inputStreamEOF = false;
			}

			sendChangeMessage();
		}
	}

	/* stop */
	void stop()
	{
		if (playing)
		{
			{
				const ScopedLock sl(callbackLock);
				playing = false;
			}

			int n = 500;
			while (--n >= 0 && !stopped)
				Thread::sleep(2);

			sendChangeMessage();
		}
	}

	/* isPlaying */
	bool isPlaying() const noexcept { return playing; }

	//==============================================================================
	/* setGain */
	void setGain(float newGain) noexcept { currentGain = newGain; };

	/* getGain */
	float getGain() const noexcept { return currentGain; }

	//==============================================================================
	/* prepareToPlay */
	void prepareToPlay(int samplesPerBlockExpected, double sampleRate) override
	{
		const ScopedLock sl(callbackLock);

		this->blockSize = samplesPerBlockExpected;
		this->sampleRate = sampleRate;

		for (auto* reader : inputReaders)
			reader->prepareToPlay(samplesPerBlockExpected, sampleRate);

		inputStreamEOF = false;
		isPrepared = true;
	}

	/* releaseResources */
	void releaseResources() override
	{
		const ScopedLock sl(callbackLock);

		for (auto* reader : inputReaders)
			reader->releaseResources();

		// Clear the array and delete the readers
		inputReaders.clear(true);

		isPrepared = false;
	}

	/* getNextAudioBlock */
	void getNextAudioBlock(const AudioSourceChannelInfo& bufferToFill) override
	{
		const ScopedLock sl(callbackLock);

		jassert(inputReaders.size() >= bufferToFill.buffer->getNumChannels());

		if (stopped == false)
		{
			for (int i = 0; i < bufferToFill.buffer->getNumChannels(); ++i)
			{
				auto writePointer = bufferToFill.buffer->getWritePointer(i);
				AudioBuffer<float> proxyBuffer(&writePointer, 1, bufferToFill.buffer->getNumSamples());
				AudioSourceChannelInfo proxyInfo(&proxyBuffer, bufferToFill.startSample, bufferToFill.numSamples);
				inputReaders.getUnchecked(i)->getNextAudioBlock(proxyInfo);
			}

			if (playing == false)
			{
				// just stopped playing, so fade out the last block..
				for (int i = bufferToFill.buffer->getNumChannels(); --i >= 0;)
					bufferToFill.buffer->applyGainRamp(i, bufferToFill.startSample, jmin(256, bufferToFill.numSamples), 1.0f, 0.0f);

				if (bufferToFill.numSamples > 256)
					bufferToFill.buffer->clear(bufferToFill.startSample + 256, bufferToFill.numSamples - 256);
			}

			if (inputReaders.getUnchecked(0)->getNextReadPosition() > inputReaders.getUnchecked(0)->getTotalLength() + 1
				&& inputReaders.getUnchecked(0)->isLooping() == false)
			{
				playing = false;
				inputStreamEOF = true;
				sendChangeMessage();
			}

			stopped = !playing;

			for (int i = bufferToFill.buffer->getNumChannels(); --i >= 0;)
				bufferToFill.buffer->applyGainRamp(i, bufferToFill.startSample, bufferToFill.numSamples, previousGain, currentGain);
		}

		else
		{
			bufferToFill.clearActiveBufferRegion();
			stopped = true;
		}

		previousGain = currentGain;
	}

	//==============================================================================
	/* setNextReadPosition */
	void setNextReadPosition(int64 newPosition) override
	{
		for (auto* reader : inputReaders)
			reader->setNextReadPosition(newPosition);

		inputStreamEOF = false;
	}

	/* getNextReadPosition */
	int64 getNextReadPosition() const override
	{
		if (inputReaders.isEmpty())
			return 0;

		return inputReaders.getUnchecked(0)->getNextReadPosition();
	}
	
	/* getTotalLength */
	int64 getTotalLength() const override
	{
		const ScopedLock sl(callbackLock);

		if (inputReaders.isEmpty())
			return 0;

		return inputReaders.getUnchecked(0)->getTotalLength();
	}

	//==============================================================================
	/* setLooping */
	void setLooping(bool shouldLoop) override
	{
		const ScopedLock sl(callbackLock);

		for (auto* reader : inputReaders)
			reader->setLooping(shouldLoop);
	}

	/* isLooping */
	bool isLooping() const override
	{
		const ScopedLock sl(callbackLock);

		if (inputReaders.isEmpty())
			return false;

		return inputReaders.getUnchecked(0)->isLooping();
	}

private:
	//==============================================================================
	AudioFormatManager formatManager;
	OwnedArray<AudioFormatReaderSource> inputReaders;

	//==============================================================================
	CriticalSection callbackLock;

	//==============================================================================
	bool isRegistered = false;
	bool isPrepared = false, inputStreamEOF = false;
	bool playing = false, stopped = true;
	float currentGain = 1.0f, previousGain = 1.0f;
	int blockSize = 0;
	double sampleRate = 0.0;

	//==============================================================================
	JUCE_DECLARE_NON_COPYABLE_WITH_LEAK_DETECTOR(MultiChannelAudioSource)
};

Many many thanks to @daniel for all his help with this!

This has been a really useful thread for me as a JUCE (and C++) beginner. I’ve got plenty of years of experience in Pascal (Delphi) and Python but have always avoided C++!

The example is great, but I have a file picker and I want to pass its file to loadAudioAssets(). I’m guessing that audioFileSet is an array of files? After instantiating, I can’t seem to get the right type of object/pointer to pass to loadAudioAssets.

So I’m using a file chooser and storing the file as:

auto file = chooser.getResult();

and I want to pass this to loadAudioAssets, not sure what to do!

Many thanks

Gary

Hi Gary. loadAudioAssets() is expecting a std::vector of juce::File to supply the multi-mono audio file set. So if you are just picking a single file, you should push it into a std::vector and then pass that vector to the loadAudioAssets method. It’s okay to have a vector of one element.

To make this class more abstract and reusable, I’d allow adding AudioFormatReaders instead of actual Files.

A few things to point out:

  • sample-rate conversion - there is no assertion/check for sample rate. If I remember correctly the positionable audio sources have JUCE’s hard-coded SRC, but if you have multiple files with different sample rates (e.g. 48 kHz and 44.1 kHz) you’ll end up with something that might be unexpected from a user’s point of view.

  • caching - JUCE has some really nice wrappers for memory-mapping and caching your audio; those could greatly improve performance with tons of audio files.
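On the first point, one option (assuming all files within a set share one rate that merely differs from the device rate) is to wrap the whole thing in a ResamplingAudioSource. An untested sketch, where `multiSource` and `fileSampleRate` are assumed members:

```cpp
// Resample e.g. 44.1 kHz files for a 48 kHz device. The ratio is
// input rate / output rate, and the source here is six channels wide.
ResamplingAudioSource resampler (&multiSource, false, 6);

// Inside prepareToPlay (samplesPerBlockExpected, deviceSampleRate):
resampler.setResamplingRatio (fileSampleRate / deviceSampleRate);
resampler.prepareToPlay (samplesPerBlockExpected, deviceSampleRate);
```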


Thanks for pointing me in the right direction, asi.

Ok so now I have:

        auto file = chooser.getResult();
        std::vector<File> audioFile;
        
        audioFile.push_back(file);
        mcas.loadAudioAssets(audioFile);
        mcas.start();

I can debug through the code and see that it gets the file in loadAudioAssets and runs through the code in start(), but I get no output. I’m assuming it’s because getNextAudioBlock() is never being called, but that’s a callback function?

Like I say C++ novice here!

Thanks

Gary

I’m away from the computer and it’s been a long time since I implemented the original code. However, I think you will discover you’re missing some needed infrastructure to get your source’s outputs to the audio device. Take a long look at the JUCE demos, perhaps using the audio file player demo as a starting point. Also check out the tutorials if you haven’t already done so. I found them to be very helpful when I first started working with JUCE.

Have a look at my first answer above, there I summarised a bit about the audio pipeline.

If you are implementing an app, there is the AudioIODevice, which calls the AudioIODeviceCallback regularly.
In the case of the AudioAppComponent, it already has an AudioSourcePlayer and an AudioDeviceManager aggregated.
It is started by calling setAudioChannels(). And the AudioAppComponent is the AudioSource that the AudioSourcePlayer is playing, so getNextAudioBlock() will start automatically.
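In other words, something along these lines (untested sketch, using Gary’s `mcas` member from his snippet above):

```cpp
class MainComponent : public AudioAppComponent
{
public:
    MainComponent()
    {
        // 0 inputs, 2 outputs: opens the device and starts the callbacks
        setAudioChannels (0, 2);
    }

    ~MainComponent() override { shutdownAudio(); }

    void prepareToPlay (int samplesPerBlockExpected, double sampleRate) override
    {
        mcas.prepareToPlay (samplesPerBlockExpected, sampleRate);
    }

    void getNextAudioBlock (const AudioSourceChannelInfo& bufferToFill) override
    {
        // Called automatically by the aggregated AudioSourcePlayer
        mcas.getNextAudioBlock (bufferToFill);
    }

    void releaseResources() override { mcas.releaseResources(); }

private:
    MultiChannelAudioSource mcas;
};
```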

Hope that helps
