Noob struggling with simple metronome

Happy New Year, Jucy people!

I’m a C++ noob, moderately competent in PHP/AS3, aiming to build a drum machine (very much like Bram Bos’ Hammerhead).

I thought the first thing I should do was build a metronome that plays a note on a sampling synth every 500ms (120 bpm).

So far I have…

… three basic classes >> MainComponent, Timer, Synth.

… pulled apart the juce demo, and made a Synth class which triggers a *.wav
synth->SamplerSound->AudioFormatReader->external *.wav

… built a Timer thread, set to top priority, which uses Time::waitForMillisecondCounter() to trigger every 500ms. This class then ActionBroadcasts out to the MainComponent (see the sketch after this list)

… the MainComponent ActionListens out for the Timer, and in the callback makes noteOn/noteOff calls to the Synth (and holds the buttons/keyboard/audio IO settings etc.)
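Roughly what the timer thread looks like, trimmed right down (just a sketch of the setup described above, so don’t trust it blindly):

[code]
class TimerThread  : public Thread,
                     public ActionBroadcaster
{
public:
    TimerThread() : Thread ("metronome timer") {}

    void run()
    {
        uint32 nextTick = Time::getMillisecondCounter();

        while (! threadShouldExit())
        {
            nextTick += 500;                             // 120bpm = one tick every 500ms
            Time::waitForMillisecondCounter (nextTick);  // sleep until the target time
            sendActionMessage ("tick");                  // picked up by MainComponent's actionListenerCallback()
        }
    }
};

// started from MainComponent with:  timerThread->startThread (10);  // top priority
[/code]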

It all kinda works: it all triggers, and the tempo is rightish.
I used a closed-hat sample, ran it against Rebirth, and there was a varying phase each time it triggered. DBG(String(Time::getMillisecondCounterHiRes())) at the noteOn point showed a 10~12ms variance in timing. The tempo wasn’t slipping/drifting; the notes were just triggering early/late.

Maybe I am being pedantic, but I might as well get this much sorted before I continue. Is my approach wrong? I tried a different approach, loading a wav into an AudioTransportSource, with similar results.

In short: help!

For timing accuracy you should be counting samples in the audio thread. The 10-12ms could well be down to the audio block size: a hardware block of 512 samples at 44.1kHz is 11.6ms. By the time your timer thread fires, the audio for that moment in time has already been rendered a few ms earlier, so there’s no way the metro click can actually start until the next audio block.
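The general shape of that, as a rough sketch (the member names and triggerClick() are made up, and it isn’t tested):

[code]
void getNextAudioBlock (const AudioSourceChannelInfo& info)
{
    // samplesPerBeat = sampleRate * 60.0 / bpm, e.g. 44100 * 60 / 120 = 22050

    // does the next click fall inside this block? (it can land in it more than once)
    while (samplesUntilNextClick < info.numSamples)
    {
        triggerClick ((int) samplesUntilNextClick);   // start the click at this exact offset within the block
        samplesUntilNextClick += samplesPerBeat;
    }

    samplesUntilNextClick -= info.numSamples;         // make the count relative to the start of the next block

    // ... render the audio for this block ...
}
[/code]

That way the click lands on the right sample regardless of when the hardware asks for the block.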

HTH

yeah, yeah I suppose it does. Thanks!

I read one thread where jules says that the Timer class should just be left for UI-related timing, and another (http://www.rawmaterialsoftware.com/viewtopic.php?f=2&t=2992&start=0&hilit=waitForMillisecondCounter) where he says it’s good for setting up a MIDI clock… but then you’d be using MIDI events to trigger sounds?

I’m quite confused about which class should be used for what. I found references to ticks in the Timer class, which I incorrectly took to be MIDI ticks rather than CPU ticks… I’ll keep digging.

We’ve had similar n00b metronome confusion stories a few times before - have you searched the forum for “metronome”?

I have, and with some success… I just probably need to read it all over a few times and get the ideas in…
After learning PHP and ActionScript through the billions of (varying-quality) posts on the tubes, the JUCE documentation and examples look a bit scant. Apart from Haydxn’s tutorial, which I went through over Christmas (and which has suffered over time), there doesn’t seem to be much that makes things bleeding obvious. At least not to my nooby copying-over-your-shoulder eyes.
I think I’ve made some wrong assumptions - this is my first go at sound programming, so it’s all a bit exciting.
If I have success, I’m tempted to offer my findings with dodgy source code to prevent further confusion.

I’ll keep at it, sorry for filling up your board with nonsense and detaining you all from making awesome.


Back again. Yes.

In the demo’s SynthAudioSource class, the getNextAudioBlock() function repeats as frequently as the audio buffer empties out. By knowing the block size the buffer empties at (default seems to be 960? I suppose it depends on your settings), the playback sampling rate (44100 Hz?), and how often we want the sound to trigger (2 Hz), we can count up to the right block and then delay the MIDI noteOn signal by a sub-block sample quantity.
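Just to check my arithmetic on that:

[code]
// 44100 samples/sec / 2 triggers/sec = 22050 samples between triggers
// with 960-sample blocks, block 22 covers samples 21120..22079
// so the trigger at sample 22050 lands 22050 - 21120 = 930 samples into that block
// -> count blocks until a trigger falls inside one, then offset the noteOn by the remainder
[/code]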

:oops:

Is there a thread or resource where I should go to read up on the basics of audio programming? Not necessarily JUCE-related, although a JUCE-focused “a slots into b” explainer graph would be good.

I’ve spent quite a few hours over the last two weeks trying to iron this out with no luck, and some help would be much appreciated.

I pulled apart the demo code so I can load a file into a sampler and trigger the sampler according to the toggle state of an array of 16 buttons. It works, but the timing is still fairly miserable. I am counting samples and setting offsets for the MIDI messages, trying both DirectSound and ASIO with decent-sized buffers (default 960) on Win7, and the playback just skips, stutters and shuffles; only about two-thirds of the time does it seem to trigger cleanly.

Here’s an excerpt of the main timing chunk

[code]
void getNextAudioBlock (const AudioSourceChannelInfo& bufferToFill)
	{
	sampleCount += samplesPerBlock;  // counts +960 w/ default settings

	while (sampleCount > nextStepSampleCount - samplesPerBlock)
		{
		double offset = double (nextStepSampleCount - sampleCount);

		if (seqButtons->getShouldTrigger (currentStep % 16) && ! muteButton->getToggleState())
			{
			if (offset < 2) { offset = 2; }

			MidiMessage message = MidiMessage (MidiMessage::noteOff (1, 74), offset - 1);
			midiCollector.addMessageToQueue (message);

			message = MidiMessage (MidiMessage::noteOn (1, 74, float (velocitySlider->getValue())), offset);
			midiCollector.addMessageToQueue (message);
			}

		nextStepSampleCount += samplesPerStep;
		currentStep++;
		}

	bufferToFill.clearActiveBufferRegion();
	// etc...
[/code]

Should I be doing something with a thread and increasing its priority? I thought it might be because I was running in debug rather than release mode, but that didn’t seem to be the case. Might the sampler not like being called so often, and should I try to render to blocks further along the buffer rather than the immediate block (I don’t even know what that would entail)?

Regardless, hope you’re all having buckets of fun building other stuff!

Are you sure that your getNextAudioBlock() is getting called with samplesPerBlock samples each time? You should probably inspect the AudioSourceChannelInfo struct and increment by numSamples instead. e.g.,
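…something like this (numSamples is what that particular callback actually wants rendered):

[code]
sampleCount += bufferToFill.numSamples;   // rather than: sampleCount += samplesPerBlock;
[/code]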

But there are a few other uses of samplesPerBlock in that code which should be bufferToFill.numSamples too.

You should also find another way of ascertaining your GUI state rather than making calls to component functions in the audio thread.

I haven’t looked in detail at how you’re timestamping things but I’d start by fixing these two things first.

crikey, that was fast! I will have a look. I hadn’t realised that referencing the GUI was a bad idea; I might whip up an array somewhere else.

Back to building!

Mmmmmm… back again… I used bufferToFill.numSamples as suggested and, being lazy, instead of wiring up another array I just hardcoded it to trigger the sampler on every step, and it still seems to be a bit all over the shop.

And, for clarification, when you say to avoid reading from components, is that just GUI bits (buttons/sliders), or even arrays that sit in a class that inherits from Component? I have avoided buttons and sliders, but am still working within a component. Maybe I need to separate my classes a bit more?

Thanks martinrobinson for your help!

No components, no events. You’d probably be safe calling WaitableEvent::notify() to wake something up, but I know a lot of people would avoid even that. And avoid malloc or any class that might use malloc or new. And avoid CriticalSections. Basically, avoid doing pretty much anything at all!
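One simple lock-free way to get the button states across would be something like this (a sketch using std::atomic; JUCE’s Atomic class would do the same job):

[code]
#include <atomic>

// shared state: written on the message thread, read on the audio thread.
// plain atomics mean no locks and no allocation in the audio callback.
std::atomic<bool> stepEnabled[16];

// message thread (e.g. from buttonClicked()):
void setStepEnabled (int stepIndex, bool isOn)
{
    stepEnabled[stepIndex].store (isOn);
}

// audio thread (inside getNextAudioBlock()):
bool shouldTriggerStep (int currentStep)
{
    return stepEnabled[currentStep % 16].load();
}
[/code]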


[quote=“jules”]Basically, avoid doing pretty much anything at all![/quote] That sounds more like life advice!

I trimmed down the class as much as possible; in the main component there is nothing but an AudioDeviceSelectorComponent and a slider set to call Sampler::setTempo on a value change. Other than that, I have no components, no events, nothing mallocy. It’s pretty much haydxn’s(?) starter code with the class below stuck in, which is derived from the demo code. I can still hear an inconsistent (!) shuffle that mutates when I change either the tempo or the buffer size. I am tempted to blame my computer. I am sober, and sound of mind.

[code]class Sampler: public AudioSource
{
public:
AudioFormatManager audioFormatManager;
MidiMessageCollector midiCollector;
MidiKeyboardState& keyboardState;
Synthesiser synth;
double sampleCount, samplesPerStep, nextStepSampleCount;

	Sampler (MidiKeyboardState& keyboardState_)
		: keyboardState (keyboardState_)
		{
		sampleCount = nextStepSampleCount= 0.0;
		setTempo(150);
		synth.addVoice (new SamplerVoice());
		audioFormatManager.registerBasicFormats();
		setUsingSampledSound(File(String("C:\\_audio\\TR808WAV\\CH\\ch.wav")));
		}

	void setTempo(double newTempo){
		samplesPerStep = 44100/(newTempo/60)/4;
		}

	void setUsingSampledSound(File audioSourceFile)
		{
		synth.clearSounds();
		AudioFormatReader* audioReader = audioFormatManager.createReaderFor (audioSourceFile);
		BitArray allNotes;
		allNotes.setRange (0, 128, true);
		synth.addSound (new SamplerSound (T("demo sound"), *audioReader,allNotes,74,0.1,0.1,10.0));
		delete audioReader;
		}


	void getNextAudioBlock (const AudioSourceChannelInfo& bufferToFill)
		{
		sampleCount+= bufferToFill.numSamples;
		while(sampleCount>nextStepSampleCount-bufferToFill.numSamples){
			double offset = double(nextStepSampleCount-sampleCount);
			//an unrelated problem, but the midiCollector complains if offset==0, hence offset+1
			MidiMessage message = MidiMessage(MidiMessage::noteOn(1,74,float(0.5)),offset+1);
			midiCollector.addMessageToQueue(message);
			nextStepSampleCount+=samplesPerStep;
			}
		bufferToFill.clearActiveBufferRegion();
		MidiBuffer incomingMidi;
		midiCollector.removeNextBlockOfMessages (incomingMidi, bufferToFill.numSamples);
		keyboardState.processNextMidiBuffer (incomingMidi, 0, bufferToFill.numSamples, true);
		synth.renderNextBlock (*bufferToFill.buffer, incomingMidi, 0, bufferToFill.numSamples);
		}

	~Sampler(){}
	void prepareToPlay(int samplesPerBlockExpected, double sampleRate)
		{
		midiCollector.reset(sampleRate);
		synth.setCurrentPlaybackSampleRate(sampleRate);
		}
	void releaseResources(){}
};[/code]

I’m sorry about the unexcitingness of my problem, and am appreciative of any help.

…I haven’t tested the code, but similar to my earlier suggestion - you’re also assuming that the “start sample” is zero (in various calls).

E.g.,

[code]
keyboardState.processNextMidiBuffer (incomingMidi, 0, bufferToFill.numSamples, true);
synth.renderNextBlock (*bufferToFill.buffer, incomingMidi, 0, bufferToFill.numSamples);
[/code]

…should be:

[code]
keyboardState.processNextMidiBuffer (incomingMidi, bufferToFill.startSample, bufferToFill.numSamples, true);
synth.renderNextBlock (*bufferToFill.buffer, incomingMidi, bufferToFill.startSample, bufferToFill.numSamples);
[/code]

Also, I think the timestamps of your created messages should be set to the equivalent of… i.e., microseconds?

But since you’re creating them in an audio callback, you should probably work out the microseconds from your sample count (otherwise they’ll all get pretty much the same timestamp).

[code]
MidiMessage message = MidiMessage (MidiMessage::noteOn (1, 74, float (0.5)), offset);
midiCollector.addMessageToQueue (message);
[/code]

No no no no no!

[code]
MidiMessage message = MidiMessage (MidiMessage::noteOn (1, 74, float (0.5)));
incomingMidi.addEvent (message, offset);
[/code]

Yes yes yes yes yes!

Sorry, I should have RTFM’d a little harder. I found I had more luck feeding the message and its timestamp/offset into a MidiBuffer, whereas before I was trying to feed it into a MidiMessageCollector, which seems more suited to handling incoming stuff, say from a MIDI controller keyboard. I was getting frustrated because the timestamps seemed to be ignored, but when you’re playing live the soonest the software can react is the next buffer render, so you end up with buffer-sized quantizing. Sorry… thinking aloud.
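For anyone who finds this thread later, here’s roughly what the timing chunk looks like after those fixes (tidied up a little, and not guaranteed bug-free):

[code]
void getNextAudioBlock (const AudioSourceChannelInfo& bufferToFill)
	{
	bufferToFill.clearActiveBufferRegion();

	MidiBuffer incomingMidi;

	// drop the sequencer's notes straight into the MidiBuffer at a sample
	// position inside this block, instead of going via the MidiMessageCollector
	while (nextStepSampleCount < sampleCount + bufferToFill.numSamples)
		{
		const int offset = (int) (nextStepSampleCount - sampleCount);
		incomingMidi.addEvent (MidiMessage::noteOn (1, 74, 0.5f),
		                       bufferToFill.startSample + offset);
		nextStepSampleCount += samplesPerStep;
		}

	sampleCount += bufferToFill.numSamples;

	keyboardState.processNextMidiBuffer (incomingMidi, bufferToFill.startSample,
	                                     bufferToFill.numSamples, true);
	synth.renderNextBlock (*bufferToFill.buffer, incomingMidi,
	                       bufferToFill.startSample, bufferToFill.numSamples);
	}
[/code]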

Martin and Jules, thanks both for your direction!

I thought (after a cigarette) that this might have been because I was trying to do this as a stand-alone thingy as opposed to a VSTi?

Jules, I’m a new coder attempting a metronome VST build. I can’t find much help on the forum, so I’m curious whether you can recommend any resources?

We have dozens of new tutorials, are none of them helpful?
https://juce.com/learn/tutorials

I’m struggling to understand the details of the demo code. I may need to take a web course on C++. My background is DSP with MATLAB. I’ve built a few effect plugins, but I want to branch into virtual instruments.