Using playSound() after button click


#1

Hello everyone,

I’ve been using/learning JUCE since April. I love it! My current goal is to make a plug-in that is basically an EZkeys-type plug-in, which uses samples (.wav files) taken from a keyboard instrument. There will be a picture of a keyboard, and the user will be able to click on any of its keys to play the desired pitch. At the moment I’m using a TextButton as a test, to make sure it is done correctly. The best way to describe the issue I’m having is that the sound played once the button is clicked seems to be at a lower pitch than the .wav file. There also seems to be a memory leak happening somewhere.

Here is what I did: I created an AudioDeviceManager object and called initialiseWithDefaultDevices(0, 2) in the PluginEditor’s constructor. Then, in the buttonStateChanged() callback, if the button isDown() I call playSound(), passing the binary data of the .wav file. For some reason the pitch of that .wav file comes out lower (there may be other issues in the audio that I haven’t noticed as well).
I notice that the memory leak warning appears only if the user clicks the button (so only if there is audio output). So I definitely feel like I’m doing something wrong. I was under the impression that playSound() handles everything required to output sound.


#2

Hi Naslash,

You might want to show the few lines of code where you call playSound, plus the declaration of the parameters, because there are several playSound methods. Without code it is pure guessing, which is most of the time disappointing both for you and for the one who tries to help…

You can also check what sample rate your device is running at. JUCE does not automatically adapt between the wav file’s sample rate and the device’s sample rate; if the file was recorded at a higher rate than the device is running at, it will play back slower and at a lower pitch. You can use a ResamplingAudioSource for that purpose.
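
Roughly something like this (an untested sketch; deviceSampleRate and blockSize are placeholders, not from your code, and the exact createReaderFor signature may differ between JUCE versions):

juce::AudioFormatManager formatManager;
formatManager.registerBasicFormats();

// Create a reader over the embedded wav data
auto* reader = formatManager.createReaderFor (
    std::make_unique<juce::MemoryInputStream> (BinaryData::D5_HARD_1_wav,
                                               BinaryData::D5_HARD_1_wavSize, false));
if (reader != nullptr)
{
    juce::AudioFormatReaderSource fileSource (reader, true); // takes ownership of the reader

    // ratio = source rate / output rate, so a 48k file on a 44.1k device gives ~1.088
    juce::ResamplingAudioSource resampler (&fileSource, false, (int) reader->numChannels);
    resampler.setResamplingRatio (reader->sampleRate / deviceSampleRate);
    resampler.prepareToPlay (blockSize, deviceSampleRate);
    // ... then pull audio with resampler.getNextAudioBlock (...) ...
}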


#3

Here it is:

void TestingAudioProcessorEditor::buttonStateChanged (Button* buttonThatWasChanged)
{
    // Play the embedded wav whenever the test button goes down
    if (buttonThatWasChanged == testButton && buttonThatWasChanged->isDown())
    {
        dm.playSound (BinaryData::D5_HARD_1_wav, BinaryData::D5_HARD_1_wavSize);
    }
}

dm is the AudioDeviceManager object.

So what exactly does the playSound() function do? I thought it handled all of that, unless I misunderstood its purpose.


#4

When I went through the tutorials and some of the JUCE videos from last year’s presentations, I got the impression that AudioSource is mainly used for standalone applications, so I generally stayed away from it since I’m creating a plug-in. I know that a plug-in uses processBlock() rather than an AudioSource when communicating with its host. What wasn’t clear to me is this: if I’m trying to output sound outside of processBlock(), such as while the button isDown(), is it recommended that I use an AudioSource? Is it even recommended to work with any audio output outside processBlock(), or am I actually supposed to do all of this in processBlock()?


#5

You should write your samples into the buffer you are given in processBlock().
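
For instance, a rough sketch of that approach (illustrative names, not from this thread; the editor would set an atomic flag when the button is pressed, and the processor is assumed to already hold the wav in an AudioBuffer at the host’s sample rate):

// Assumed members on the processor:
//   std::atomic<bool> noteTriggered { false };  // set by the editor's button
//   juce::AudioBuffer<float> sampleBuffer;      // the wav, preloaded
//   int samplePosition = -1;                    // -1 means "not playing"
void TestingAudioProcessor::processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer&)
{
    if (noteTriggered.exchange (false))
        samplePosition = 0; // restart playback from the first sample

    buffer.clear();

    if (samplePosition >= 0 && samplePosition < sampleBuffer.getNumSamples())
    {
        const int numToCopy = juce::jmin (buffer.getNumSamples(),
                                          sampleBuffer.getNumSamples() - samplePosition);

        // Copy the next chunk of the sample into the host's buffer
        for (int ch = 0; ch < buffer.getNumChannels(); ++ch)
            buffer.copyFrom (ch, 0, sampleBuffer,
                             juce::jmin (ch, sampleBuffer.getNumChannels() - 1),
                             samplePosition, numToCopy);

        samplePosition += numToCopy;
    }
}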


#6

I would recommend avoiding the AudioDeviceManager(*) within a plugin. It’s almost certainly not going to work on some other people’s systems, for various reasons and under various circumstances, even if it seems to work on your own system right now. The host application where your plugin lives should take care of dealing with the audio hardware, and your plugin should just output audio samples to the host application.

(*) It’s really meant for standalone applications, for example if you write your own DAW and need to deal with audio hardware input and output.


#7

Maybe you are confusing audio plugins and standalone applications. Xenakios is right: stay away from AudioDeviceManager in a plugin. You need a very good reason and a lot of experience to do otherwise.
An AudioSource is something that produces audio samples, e.g. from a file, or even out of nowhere. Have a look at klangfreund’s docs (they are exactly the same as on JUCE’s website, except that JUCE managed to break the JavaScript that shows the inheritance of the various AudioSource types. It’s been broken for almost a year, and I don’t know if they will fix it… issue).
AudioSources generally produce a continuous waveform, which you pull by repeatedly calling AudioSource::getNextAudioBlock(…).

An audio plugin in a DAW (aka an AudioProcessor) processes audio data that the host (your DAW, like Cubase, Live, FL Studio etc.) sends through it. However, you can discard the incoming data and simply produce new samples; that’s up to you. That’s interesting both for signal generators and for synth instruments.

So there are cases where you use AudioSources in an audio plugin, but if you process already-present audio, you use the buffer from AudioProcessor::processBlock(…); that way the user of your plugin can create a chain of processors and combine different effects.
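
As a rough sketch of combining the two (illustrative names, not from this thread; fileSource would be created from the wav and prepared in prepareToPlay()):

// Assumed member on the processor:
//   std::unique_ptr<juce::AudioFormatReaderSource> fileSource;
void TestingAudioProcessor::processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer&)
{
    buffer.clear();

    if (fileSource != nullptr)
    {
        // Wrap the host's buffer so the AudioSource can fill it
        juce::AudioSourceChannelInfo info (&buffer, 0, buffer.getNumSamples());
        fileSource->getNextAudioBlock (info); // pulls the next chunk of the wav
    }
}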


#8

I looked back into the tutorials and videos and found that I had gotten confused. I somehow remembered AudioSource being mainly for standalone applications, but it is actually AudioDeviceManager, as you guys said. Thank you for bringing that to my attention.
I will try it with an AudioSource instead and see how it goes.

Thank you for your help everyone!