Did you see that JUCE also has a great number of tutorials? One thing you could do is simply follow those to get started; if you encounter any errors with them, you’ll find the writers of the tutorials around here in the forum, and you’ll likely get better support.
I haven’t looked deeply at the code you linked, but I saw that it does a few things that are considered “bad practice”. It uses ScopedPointer, which has been deprecated for some time (the whole JUCE framework has been converted to std::unique_ptr instead, and you’ll get compiler warnings if you use ScopedPointer), and it uses a #define MAX_VOICES 16, which is old-fashioned C style — better to use a static const int member variable nowadays. Both things will work and should not lead to the misbehaviour you describe, but if you are currently learning C++, you might pick up an even better coding style by following the JUCE tutorials.
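For illustration, here is what the modern equivalents might look like — a `std::unique_ptr` member instead of `ScopedPointer`, and a typed constant instead of the macro. The class and member names are made up, not from the linked code:

```cpp
#include <array>
#include <memory>

// Hypothetical voice type, just for illustration.
struct Voice { int note = -1; };

class SimpleSampler
{
public:
    // A typed constant replaces `#define MAX_VOICES 16`:
    // it is scoped, visible in a debugger, and respects the type system.
    static constexpr int maxVoices = 16;

    SimpleSampler()
    {
        // std::unique_ptr replaces the deprecated juce::ScopedPointer;
        // the ownership semantics are the same, but it is standard C++.
        for (auto& v : voices)
            v = std::make_unique<Voice>();
    }

    int numVoices() const { return static_cast<int> (voices.size()); }

private:
    std::array<std::unique_ptr<Voice>, maxVoices> voices;
};
```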
It looks like the setup() method that is supposed to set up the objects is never called in the code. (I guess the author intended that to be called from the constructor of the AudioProcessor subclass, but that’s not shown in the posted code.)
The code looks a bit suspicious in other aspects as well.
Despite the flaws in the code, it does work here. (After doing the obvious fixes like adding a call to setup() in the constructor etc…)
Maybe your problem is that it can’t load the audio file that is hard-coded in the source code? You should put a full absolute path there, and if you are on Windows, use \\ as the path separator instead of /.
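The reason for the doubled backslashes is C++ string-literal escaping: a single backslash in a literal must be written as `\\`. A small self-contained illustration (the path itself is hypothetical):

```cpp
#include <string>

// In a C++ string literal, each backslash must be escaped as "\\".
// A raw string literal avoids the escaping entirely — both spell the
// same Windows-style path C:\Samples\kick.wav (hypothetical file).
const std::string escaped = "C:\\Samples\\kick.wav";
const std::string raw     = R"(C:\Samples\kick.wav)";
```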
OK, now that I have tested the plugin as a VST2 and VST3 in Reaper, I am seeing the same problem: MIDI notes coming from a MIDI sequence are not played back, and live MIDI input doesn’t work either. (I had only tested the project as a standalone app at first.)
The AudioTransportSource is not suitable for a plugin. Start and stop are done on the message thread, so they are asynchronous. It will not start at the right sample, and in an offline bounce it will not play at all.
Use the Synthesiser and its associated classes instead; that is optimised to handle multiple voices, starting and stopping, etc. For a sampler, look at SamplerSound.
And yes, the notes should come from the MidiBuffer in processBlock.
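A minimal sketch of that setup, assuming a sampler plugin (the file path, voice count, and envelope values below are placeholders, and error handling is omitted):

```cpp
// Members (sketch):
// juce::Synthesiser synth;

// --- In the processor's constructor ---
for (int i = 0; i < 8; ++i)                       // 8 voices of polyphony (arbitrary)
    synth.addVoice (new juce::SamplerVoice());

juce::AudioFormatManager formatManager;
formatManager.registerBasicFormats();

std::unique_ptr<juce::AudioFormatReader> reader (
    formatManager.createReaderFor (juce::File ("/full/path/to/sample.wav"))); // placeholder path

if (reader != nullptr)
{
    juce::BigInteger notes;
    notes.setRange (0, 128, true);                // respond to all MIDI notes

    synth.addSound (new juce::SamplerSound ("sample", *reader, notes,
                                            60,     // root note (middle C)
                                            0.0,    // attack time (s)
                                            0.1,    // release time (s)
                                            10.0)); // max sample length (s)
}

// --- In prepareToPlay ---
synth.setCurrentPlaybackSampleRate (sampleRate);

// --- In processBlock: the notes come straight from the MidiBuffer ---
void MyProcessor::processBlock (juce::AudioBuffer<float>& buffer,
                                juce::MidiBuffer& midiMessages)
{
    buffer.clear();
    synth.renderNextBlock (buffer, midiMessages, 0, buffer.getNumSamples());
}
```

The Synthesiser reads the note-on/note-off timestamps from the MidiBuffer itself, so voices start and stop sample-accurately within the block.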
I just created as many AudioTransportSources as I have samples and then mixed them with a MixerAudioSource. So each button has 3 different AudioTransportSources, which I mix with different gains in the MixerAudioSource. And it works.
But from your answer I understand that it’s a bad solution…
The MIDI messages are timestamped within the block where they occur, so they are not supposed to start at sample zero. But if you use AudioTransportSource, they will always start at sample zero, which gives a jitter of up to one full block — potentially more than 40 ms.
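As a rough sanity check on that number (the sample rate and block sizes here are just example values): a 2048-sample block at 44.1 kHz is already over 46 ms, so quantising every note to the start of a block can audibly smear the timing.

```cpp
// Worst-case start jitter when every note is forced to sample 0 of a block:
// one full block length, expressed in milliseconds.
double blockJitterMs (int blockSizeSamples, double sampleRate)
{
    return 1000.0 * blockSizeSamples / sampleRate;
}
```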
And another reason: there is even a Thread::sleep() in the implementation when you call stop().
Definitely something to avoid in a product, but hard to spot if you only test your plugin in isolation.
If it were known which thread is the audio thread, it would be good to have an assert here, but I don’t think that is possible…
You can’t; the Juce Synthesiser class doesn’t support that use case.
You have to use multiple Synthesiser instances or mix your multiple sound sources in your voice objects. (For the latter, you would need to write your own sampler voice class.) Or hack the Juce Synthesiser source code to allow the sound layering from a single note…
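A sketch of the multiple-instances approach (the member names `layerA`/`layerB` are assumptions, not from any posted code): both synthesisers receive the same MidiBuffer, so a single note triggers both sounds, and they render additively into the same output buffer.

```cpp
// Members (sketch): one Synthesiser per layered sound.
// juce::Synthesiser layerA, layerB;

void MyProcessor::processBlock (juce::AudioBuffer<float>& buffer,
                                juce::MidiBuffer& midiMessages)
{
    buffer.clear();

    // Both layers see the same MIDI events. renderNextBlock adds its
    // voices into the buffer rather than overwriting it, so the two
    // layers sum naturally.
    layerA.renderNextBlock (buffer, midiMessages, 0, buffer.getNumSamples());
    layerB.renderNextBlock (buffer, midiMessages, 0, buffer.getNumSamples());
}
```

If you need an independent gain per layer, one option is to render each layer into its own temporary AudioBuffer, apply a gain to it, and then add it into the output buffer.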
One more thing.
Is it possible to change the gain of each source? Each note has two sources, and each source has a gain knob that the user can change. With MixerAudioSource I just changed the gain value of the AudioTransportSource. But how can I do it now?