Synthesiser


#1

Hi, are there any good Synthesiser examples? I’m trying to get to grips with it…
it’s always tricky to get into the philosophy of a framework without relevant example projects. Anyway…

I made some derived classes, and I’m passing a dummy empty MidiBuffer into the Synthesiser audio callback and explicitly calling noteOn and noteOff. First of all, even though I added 8 voices and 8 sounds, the playback is only ever monophonic. Secondly, startNote and stopNote in the voice are called multiple times (something like 30 or so) even for a single note on, which seems pretty weird. Any ideas appreciated.


#2

OK well… I’m calculating “ticks” to put into the MIDI message timestamps, but I’ve just found that I was calculating them in a faulty way (keeping an accumulated sample count, but only updating it once per block instead of once per sample), which would account for multiple messages being generated within a single callback block! :oops:
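For reference, the corrected bookkeeping looks roughly like this. This is a plain C++ sketch with made-up names (SampleClock etc.), not the actual project code: the running count advances by the whole block size once per callback, and each event is stamped with the block start plus its offset inside the block.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical sketch of the fix described above. Updating the running
// count only once per block and using it directly as the timestamp gives
// every event generated inside a block the same stale value, which is the
// bug described; adding the per-event offset fixes it.
struct SampleClock
{
    std::int64_t blockStart = 0;

    // Timestamp (in samples) for an event at a given offset inside the current block.
    std::int64_t timestampFor (int offsetInBlock) const
    {
        return blockStart + offsetInBlock;
    }

    // Call once at the end of each audio callback.
    void advance (int blockSize)
    {
        blockStart += blockSize;
    }
};
```

So two events at offsets 0 and 10 of the same block get distinct timestamps, instead of thirty-odd copies of the same one.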
Have to dash now but will post later if I gain any further insights.


#3

OK so… I’m having a problem implementing polyphony.
I’ve added 8 voices and 1 sound in the Synthesiser subclass. The sound returns true for appliesToNote and appliesToChannel, no matter what.
The different voices get triggered as might be expected (on the next free one) but only the first voice is heard, even though the blocks are being calculated for each voice. What have I neglected to do to hear the other voices?


#4

Just a quick point, but are you definitely ADDING your sound samples to the AudioSampleBuffer passed to your renderNextBlock() callback, and not replacing them? If you overwrite the buffer you will probably just be left with either the first or last voice, depending on which way the loop counts.
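That point can be demonstrated with a tiny mixing sketch (plain C++, with invented names, just to illustrate the idea):

```cpp
#include <cassert>
#include <vector>

// Toy illustration: each "voice" renders into a shared buffer. Using +=
// mixes the voices together; writing with plain = would leave only
// whichever voice rendered last.
void renderVoice (std::vector<float>& mixBuffer, float sampleValue)
{
    for (auto& s : mixBuffer)
        s += sampleValue;   // add, don't overwrite
}
```

For example, rendering one voice at 0.25 and another at 0.5 into a cleared buffer leaves 0.75 in every sample; with `=` instead of `+=` you would only ever hear the last voice.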


#5

Hi Dave, thanks for your quick point! You are on the money - I expected that the Synthesiser would do the adding itself.

For anyone who might find this useful, in my main audio callback is the following:

[code]AudioSampleBuffer sampleBuffer (outputChannelData, numOutputChannels, numSamples);
sampleBuffer.clear();
synth->renderNextBlock (sampleBuffer, midiMessageBuffer, 0, numSamples);[/code]

I am clearing the buffer first otherwise the microphone in signal gets used (and because I’m using speakers there is feedback.)

Then in the SynthesiserVoice subclass:

[code]void MySynthesiserVoice::renderNextBlock (AudioSampleBuffer& outputBuffer, int startSample, int numSamples)
{
    const int numBufferChannels = outputBuffer.getNumChannels();
    const double sampleRate = getSampleRate();

    float** outputChannels = outputBuffer.getArrayOfChannels();

    // Only render the region the Synthesiser asks for: numSamples samples
    // beginning at startSample, not the whole buffer.
    for (int sampleCount = startSample; sampleCount < startSample + numSamples; ++sampleCount)
    {
        const float thisSampleValue = squareLevel * amp;

        // Add into the buffer rather than overwriting it, so the other
        // voices' output is preserved.
        for (int channelCount = 0; channelCount < numBufferChannels; ++channelCount)
            outputChannels[channelCount][sampleCount] += thisSampleValue;

        // Flip the level every half period to get a square wave at toneHz.
        if (++squareCount >= (int) (sampleRate / (2.0 * toneHz)))
        {
            squareCount = 0;
            squareLevel = -squareLevel;
        }

        ++accumulatedSampleCount;
    }
}
[/code]

I hope it’s clear what’s going on from this snippet. Of course it’s also best to scale the signals so they don’t distort when the voices are added together.
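One simple way to do that scaling, as a sketch (the names are hypothetical, and this assumes you know the maximum number of voices up front):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Scale the summed mix by 1/maxVoices, so that even with every voice at
// full level the result stays within [-1, 1]. A limiter or a softer gain
// curve would sound better in practice; this is just the simplest safe choice.
void scaleMix (std::vector<float>& mixBuffer, int maxVoices)
{
    const float gain = 1.0f / (float) std::max (1, maxVoices);

    for (auto& s : mixBuffer)
        s *= gain;
}
```

With 8 voices each contributing a full-scale sample, the summed value of 8.0 comes back down to 1.0.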