When prepareToPlay() has been called and then AudioAppComponent::getNextAudioBlock() is called, what gets passed to it as bufferToFill? And how is that different from AudioTransportSource::getNextAudioBlock()?

The prepareToPlay() method is guaranteed to be called at least once on an ‘unprepared’ source to put it into a ‘prepared’ state before any calls will be made to getNextAudioBlock(). Where in this tutorial does the AudioBuffer fileBuffer get prepared?

https://docs.juce.com/master/tutorial_looping_audio_sample_buffer.html

The fileBuffer is “prepared” in the openButtonClicked method. Often one would do such initialisations in prepareToPlay, but in this code the audio buffer needs to be sized according to how many channels and samples are in the audio file that is opened. The prepareToPlay method has been left empty because there is nothing to do there in that code.

Which line in openButtonClicked explicitly makes fileBuffer prepared?

fileBuffer.setSize ((int) reader->numChannels, (int) reader->lengthInSamples); sets the size of the buffer so that the following reader->read call has somewhere to put the decoded audio.
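For context, the relevant part of openButtonClicked looks roughly like this (paraphrased from the tutorial, so surrounding details such as the file-chooser code are omitted):

```cpp
// Sketch of the relevant part of openButtonClicked(), paraphrased from the
// tutorial -- variable names follow the tutorial, surrounding code omitted.
std::unique_ptr<juce::AudioFormatReader> reader (formatManager.createReaderFor (file));

if (reader.get() != nullptr)
{
    // This is the "preparation": allocate the buffer to match the file.
    fileBuffer.setSize ((int) reader->numChannels, (int) reader->lengthInSamples);

    // Decode the whole file into fileBuffer in one go.
    reader->read (&fileBuffer,                      // destination buffer
                  0,                                // start sample in destination
                  (int) reader->lengthInSamples,    // number of samples to read
                  0,                                // start sample in the file
                  true,                             // read the left channel
                  true);                            // read the right channel

    position = 0;
    setAudioChannels (0, (int) reader->numChannels);
}
```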

Then AudioFormatReaderSource reads from this AudioFormatReader, but what makes the AudioFormatReaderSource prepared?

I don’t see an AudioFormatReaderSource in that tutorial code you posted the link to. The tutorial shows how you can load a file into an AudioBuffer and play from that in the getNextAudioBlock method. Maybe you need to explain further what you are trying to do.

I just want to know how AudioAppComponent::getNextAudioBlock() gets its bufferToFill. Why does just opening the file make it get the fileBuffer? Or does AudioFormatReader automatically send what it reads to AudioAppComponent::getNextAudioBlock()?

AudioAppComponent::getNextAudioBlock() gets called by Juce/the operating system. (And the buffer you need to fill there is also managed by Juce/the operating system.)

So whenever AudioFormatReader reads something, Juce just automatically runs getNextAudioBlock()?

If you are referring to the tutorial code you posted, no, that is not how it works. The getNextAudioBlock is repeatedly called by Juce/the operating system from a different thread when the audio hardware buffer needs to be filled.
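In AudioAppComponent terms, that callback is the method you override; the audio device layer calls it on the audio thread whenever the hardware wants another block. A minimal sketch (the class and member names here are assumptions, not the tutorial's exact code):

```cpp
class MainComponent : public juce::AudioAppComponent
{
public:
    MainComponent()              { setAudioChannels (0, 2); }   // 0 inputs, 2 outputs
    ~MainComponent() override    { shutdownAudio(); }

    void prepareToPlay (int samplesPerBlockExpected, double sampleRate) override {}
    void releaseResources() override {}

    // Called repeatedly by Juce from the audio thread. bufferToFill wraps a
    // buffer owned by the audio device layer -- you never allocate it yourself.
    void getNextAudioBlock (const juce::AudioSourceChannelInfo& bufferToFill) override
    {
        bufferToFill.clearActiveBufferRegion();   // output silence unless we write something
    }
};
```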

But the reader has the sample information of the file, and in this tutorial fileBuffer didn’t get that; it just got the channel count and number of samples. Then how does AudioAppComponent::getNextAudioBlock() know what should be passed to it?

Isn’t there any tutorial about AudioAppComponent::getNextAudioBlock()'s detailed procedure? None of the Audio section tutorials, not even “Build a white noise generator”, really talks about this.

The fileBuffer is filled with audio from the AudioFormatReader with the read call.

Oh, sorry. So whenever whichever AudioBuffer is filled, AudioAppComponent::getNextAudioBlock() gets it passed into itself, and it also “creates” a thread for this AudioBuffer, so that when another AudioBuffer is being filled, they won’t be in the same AudioAppComponent::getNextAudioBlock() call?

No, the AudioBuffers have nothing to do with each other. The fileBuffer is a separate one from the buffer that is given in the AudioAppComponent::getNextAudioBlock call. The data from the fileBuffer is just used to fill the output buffer in the getNextAudioBlock.

So basically, once an AudioBuffer is filled, Juce calls AudioAppComponent::getNextAudioBlock() and passes this AudioBuffer?

When the audio hardware/operating system has a new buffer to process, it ends up calling the getNextAudioBlock method of the Juce AudioAppComponent. Again, that has nothing to do with the fileBuffer in the tutorial code. The fileBuffer is only used as the source audio data for the buffer that is delivered with the AudioAppComponent::getNextAudioBlock call.

Maybe what’s confusing you a bit is that the tutorial code shuts down the audio when the openButtonClicked code starts, and resumes the audio (with setAudioChannels) after the file has been read into the fileBuffer. That isn’t actually the usual way things are done; rather, the audio is allowed to run continuously, if possible. But that can get complicated to deal with, and the tutorial’s purpose isn’t to go into those issues.

So when the new buffer comes in (when the file is loaded, or when the reader reads the file?), getNextAudioBlock() will immediately be called, but it still needs to take an AudioSource?

I am just so confused what you are actually trying to do…

The tutorial you posted the link to does a very simple thing: in the openButtonClicked method it reads the whole audio file into an AudioBuffer, and AudioAppComponent::getNextAudioBlock copies sections from that buffer to play it. The AudioFormatReader is not involved at that point at all.

If your purpose is to play directly from an audio file, it is going to require different code. AudioFormatReaderSource and AudioTransportSource will be useful for that. There’s a separate tutorial that shows how to do that:

https://docs.juce.com/master/tutorial_playing_sound_files.html
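For reference, the core of that second tutorial looks roughly like this (paraphrased; member names such as formatManager, transportSource and readerSource follow that tutorial, but treat this as a sketch rather than the exact code):

```cpp
// Once, at startup:
formatManager.registerBasicFormats();

// When a file is chosen:
auto* reader = formatManager.createReaderFor (file);

if (reader != nullptr)
{
    // AudioFormatReaderSource streams from the file's reader on demand...
    auto newSource = std::make_unique<juce::AudioFormatReaderSource> (reader, true);

    // ...and AudioTransportSource adds start/stop/position control on top,
    // plus resampling to the device sample rate.
    transportSource.setSource (newSource.get(), 0, nullptr, reader->sampleRate);
    readerSource.reset (newSource.release());
}

// And getNextAudioBlock then simply delegates:
void MainContentComponent::getNextAudioBlock (const juce::AudioSourceChannelInfo& bufferToFill)
{
    transportSource.getNextAudioBlock (bufferToFill);
}
```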

I just want to make sure I understand what really triggers getNextAudioBlock().
“reads the whole audio file into an AudioBuffer and the AudioAppComponent::getNextAudioBlock copies sections from that buffer to play it.”
So it’s the being-filled action?

No, filling the fileBuffer has nothing at all to do with that. The AudioAppComponent::getNextAudioBlock is called by Juce/the operating system whenever needed while the audio is playing. The tutorial obfuscates that because it stops the audio in the openButtonClicked method and restarts it once the audio file has been written into the buffer. In actual code the audio would likely be kept playing all the time, and the buffer being played would somehow be safely switched.