Rendering a tick?

For convenience I want to allow the user to turn on a metronome, what’s a good simple way to render a “tick” sound?

I was thinking, just go into the output sample buffer and manually write a 1, followed by a -1, at the sample index where I want the ticks…

…whaddya think? How would that sound if the output sample rate changed?

Anyone have any experience with this?

Ouch, you might find that some users get a bit upset about full scale impulses!

How about synthesising a click? http://www.kvraudio.com/forum/viewtopic.php?t=182737

Another possibly interesting link: Spectrally Matched Click Synthesis

Hey these guys used Juce! And there’s a screenshot of their app in the paper!

Damn, scanned it so fast I didn’t pick that up. It was a nice looking screen shot though!

The problem is that their implementation requires an FFT, which is hardly friendly for real-time use.

Yep, I’ve had the pleasure of working with Matt Wright on some research before-- he’s a true genius (and also one of the authors of Open Sound Control, among many other things…)! Have you checked out CSL (Create Signal Library) which they are using at CREATE (UC Santa Barbara Center for Research in Electronic Art and Technology)? It’s based on JUCE 1.50…

http://fastlabinc.com/CSL/

I just mocked up the approach mentioned on the KVR link in SynthMaker and it works pretty well. Pitch a sine wave to taste and put it through an envelope with very fast attack, no hold or sustain and a low decay value. For a bit of grit, you can mix in some white noise at ~15-20% linear gain.
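That KVR recipe can be sketched in plain C++ without any framework. This is a minimal illustration, not code from the thread: the frequency, length, attack time, and noise gain below are all assumed values chosen "to taste".

```cpp
#include <cmath>
#include <cstdlib>
#include <vector>

// Build a short metronome click: a sine burst with a very fast attack,
// an exponential decay, and a little white noise mixed in for "grit".
// All parameter values are illustrative assumptions.
std::vector<float> makeClick (double sampleRate,
                              double freqHz    = 1000.0, // pitch "to taste"
                              double lengthMs  = 10.0,
                              double attackMs  = 0.5,
                              float  noiseGain = 0.18f)  // ~15-20% linear
{
    const double twoPi         = 6.283185307179586;
    const int    numSamples    = (int) (sampleRate * lengthMs / 1000.0);
    const int    attackSamples = (int) (sampleRate * attackMs / 1000.0);

    std::vector<float> click ((size_t) numSamples);

    for (int i = 0; i < numSamples; ++i)
    {
        const float sine  = (float) std::sin (twoPi * freqHz * i / sampleRate);
        const float noise = noiseGain * (2.0f * std::rand() / (float) RAND_MAX - 1.0f);

        // Fast linear attack, then exponential decay to avoid a hard edge.
        float env;
        if (i < attackSamples)
            env = (float) i / (float) attackSamples;
        else
            env = (float) std::exp (-6.0 * (i - attackSamples)
                                          / (double) (numSamples - attackSamples));

        click[(size_t) i] = env * (sine + noise);
    }

    return click;
}
```

The rendered buffer can then be copied into the output stream at each beat position, which avoids any per-tick synthesis cost in the audio callback.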

@Vinn - Are you open to the option of sample playback? I know that's what Ableton Live does, and I built a basic Metro class that just counts ticks (96 per beat), and on every beat plays one of two samples (one for beat 1 and another for the other beats)… I'm using a Synthesizer with one SamplerVoice and two SamplerSounds:

if (shouldPlay)
{
    if (beatCount == 0)
        samplerSynth.noteOn (0, 60, 1); // beat 1
    else
        samplerSynth.noteOn (0, 61, 1); // other beats
}

I’m open to anything that works and doesn’t require much effort on my part. This is my code currently:

  void renderTicks (int numSamples, AudioSampleBuffer& out)
  {
    jassert (out.getNumSamples () >= numSamples);

    double const samplesPerBeat = 
      m_audioDevice->getCurrentSampleRate() * 60 / m_clock.tempo;

    int i = 0;
    for (;;++i)
    {
      int index = static_cast <int> ((i - m_clock.phase) * samplesPerBeat + 0.5);

      if (index < numSamples)
      {
        if (index >= 0)
        {
          out.getArrayOfChannels()[0][index] =  1;
          out.getArrayOfChannels()[1][index] = -1;
        }

        ++index;

        if (index >= 0 && index < numSamples )
        {
          out.getArrayOfChannels()[0][index] = -1;
          out.getArrayOfChannels()[1][index] =  1;
        }
      }
      else
      {
        break;
      }
    }
  }

Holy Hell, it seems that even the simplest appearing task having to do with audio processing, becomes a giant pain in the ass, especially when dealing with concurrent systems!!!

I’m using a tone generator with a sharp attack and moderate rolloff to produce the tick, and mixing it into my audio output. It works perfectly…

BUT (and there's always a but): when I change the tempo in real time, I get discontinuities between successive sample buffers, because the tick gets chopped in the middle and re-rendered with a different phase…

So now I have to over-engineer this stupid feature just to make it work! A simple tick!!!


LOL

Maybe alternate between two different voices when you detect a tempo change?

I think simply playing back a sound is going to be the easiest solution, and it won't be affected by things such as tempo changes. In my metronome class, I simply create a Synthesizer as per the JUCE demo. I then have two sounds: one high tick/beep for beat 1, and another (lower-pitched) sound for the other beats (2-4 in 4/4 time). Sound one is assigned to midi note 60, and sound two is assigned to midi note 61. I have my clock/beat counter counting my ticks, and every time we are on beat one, I send a noteOn for note 60 to my Synthesizer; on all other beats I send a noteOn for note 61.

You could even do it with one sound, and have the note pitched over a range of midi notes, and then noteOn whatever pitch you like for beat one, and whatever pitch you like for the rest… But yah, I really do like the idea of synthesizing a tick / impulse, but for something like a metronome, it might just be easier to use JUCE’s built in Synthesizer class just to play back a sound…

cheers

To be honest the Synthesizer API intimidated me, and the need for Midi message passing discouraged me completely.

I ended up writing my own AudioSource that handles the tricky task of rendering the tick into the output stream (and dealing with the cases for when only part of the tick fits in the output buffer, or the tempo changes mid-tick, etc).

So you are saying the Synthesizer is easiest? What do I need to do?

Here's what I'm doing… My Metronome class extends AudioSource. First I create an instance of Synthesizer; let's call it samplerSynth. I create two .wav AudioFormatReaders and load up my two sound files (one a high tick, one a lower tick). I create a MidiMessageCollector to store the midi messages. Then I set the midi note range I want for each sound file (really just one midi note each), add a voice to my Synthesizer for playback, and then add the two sounds as SamplerSounds. This sounds like a lot but it's actually pretty simple (in my Metronome's constructor…):

audioFormatManager.registerFormat (wavAudioFormat = new WavAudioFormat(), true);

// First we must load our metronome .wav files into readers
metroHighReader = audioFormatManager.createReaderFor (File ("/Users/jhochenbaum/Documents/C++/Nuance/data/Sounds/MetronomeHigh.wav")); // TODO make this path relative
metroLowReader  = audioFormatManager.createReaderFor (File ("/Users/jhochenbaum/Documents/C++/Nuance/data/Sounds/MetronomeLow.wav"));  // TODO make this path relative

// Define the midi note ranges each sound will be assigned to...
BigInteger highNotes;
highNotes.setRange (60, 1, true);
BigInteger lowNotes;
lowNotes.setRange (61, 1, true);

// Then we create SamplerSounds from the readers and add them to our synth
samplerSynth.addVoice (new SamplerVoice()); // this voice will be used to play the sounds
samplerSynth.addSound (new SamplerSound ("metroHigh", *metroHighReader, highNotes, 60, 0, 0, 1.0));
samplerSynth.addSound (new SamplerSound ("metroLow",  *metroLowReader,  lowNotes,  61, 0, 0, 1.0));
samplerSynth.setNoteStealingEnabled (false); // must turn note stealing off

Then to make sure samplerate is set correctly…

void Metronome::prepareToPlay (int /*samplesPerBlockExpected*/, double sampleRate){
	midiCollector.reset (sampleRate);
	samplerSynth.setCurrentPlaybackSampleRate (sampleRate);
}
void Metronome::getNextAudioBlock (const AudioSourceChannelInfo& bufferToFill){
	// the synth always adds its output to the audio buffer, so we have to clear it first..
	bufferToFill.clearActiveBufferRegion();
	
	// fill a midi buffer with incoming messages from the midi input.
	MidiBuffer incomingMidi;
	midiCollector.removeNextBlockOfMessages (incomingMidi, bufferToFill.numSamples);
	
	// and now get the synth to process the midi events and generate its output.
	samplerSynth.renderNextBlock (*bufferToFill.buffer, incomingMidi, 0, bufferToFill.numSamples);
}

I’m not sure how you are counting ticks and whatnot, but I’m doing mine in my thread's run() method: I wait for a tick duration, increment my tickCounter, then call a method called playSound:

void Metronome::playSound()
{
    if (tickCounter % 96 == 0)
    {
        samplerSynth.allNotesOff (0, false);

        if (shouldPlay) // this gets set when the user enables the Metronome from the GUI
        {
            if (beatCount == 0)
                samplerSynth.noteOn (0, 60, 1); // play sound one on the first beat
            else
                samplerSynth.noteOn (0, 61, 1); // else play sound two on all other beats
        }

        beatCount++;

        if (beatCount >= beatMax)
            beatCount = 0;
    }
}

So yah, you might have a more accurate metronome counter; I haven't tested the consistency of how I'm counting my actual ticks. What's important here, I guess, is that I'm just creating a synthesizer, adding a voice, adding two sounds which get played back on specific midi notes (in this case 60 and 61), and then, depending on which beat we're on, sending a noteOn message to my Synthesizer instructing it which sound to play back… Your way definitely sounds interesting, but this seemed simplest to me. It also allows an arbitrary sound to be used as the metro sound, which is kinda cool.

Best,
J

This is what I wrote. “createTick” of course can be changed to produce any type of sample. The caller is responsible for calling setClock() before every getNextAudioBlock(), with the appropriate phase.

class Metronome : public AudioSource
{
public:
  Metronome ()
    : m_tick (2, 0)
    , m_lastSampleIndex (0)
  {
  }

  void prepareToPlay (int samplesPerBlockExpected,
                      double sampleRate)
  {
    m_sampleRate = sampleRate;
    m_lastSampleIndex = 0;

    createTick ();
  }

  void releaseResources ()
  {
  }

  void setClock (double tempo, double phase)
  {
    m_tempo = tempo;
    m_phase = phase;
  }

  void getNextAudioBlock (AudioSourceChannelInfo const& bufferToFill)
  {
    int const numSamples = bufferToFill.numSamples;

    AudioSampleBuffer out (bufferToFill.buffer->getArrayOfChannels (),
                            bufferToFill.buffer->getNumChannels (),
                            bufferToFill.startSample,
                            numSamples);

    out.clear ();

    // This is where we will start looking in the output for the next tick.
    //
    int startIndex;

    // Handle leftover
    if (m_lastSampleIndex < 0)
    {
      int amount = m_tick.getNumSamples() + m_lastSampleIndex;

      // See if all the leftovers will fit in the output
      if (amount <= numSamples)
      {
        // Yes, so copy everything
        Dsp::copy (out.getNumChannels(),
                    amount,
                    out.getArrayOfChannels(),
                    (vf::StereoSampleBuffer (m_tick) - m_lastSampleIndex).getArrayOfChannels());

        // No more leftovers
        m_lastSampleIndex = 0;

        // Start the next tick after the end of this one.
        startIndex = amount;
      }
      else
      {
        // Leftovers too big for output, copy what will fit
        Dsp::copy (out.getNumChannels(),
                    numSamples,
                    out.getArrayOfChannels(),
                    (vf::StereoSampleBuffer (m_tick) - m_lastSampleIndex).getArrayOfChannels());

        m_lastSampleIndex -= numSamples;

        startIndex = 0; // unused
      }
    }
    else
    {
      // No leftover so start from beginning of the output.
      startIndex = 0;
    }

    if (m_lastSampleIndex == 0)
    {
      double const samplesPerBeat = m_sampleRate * 60 / m_tempo;

      // Adjust phase so the beat is on or after the beginning of the output
      double beat;
      if (m_phase > 0)
        beat = 1 - m_phase;
      else
        beat = 0 - m_phase;

      // Render new ticks
      for (;;beat += 1)
      {
        // Calc beat pos
        int pos = static_cast <int> (beat * samplesPerBeat + 0.5);

        if (pos < numSamples)
        {
          if (pos >= startIndex)
          {
            // See if we can render the whole thing
            if (pos + m_tick.getNumSamples() <= numSamples)
            {
              // Full copy
              Dsp::copy (out.getNumChannels(),
                          m_tick.getNumSamples(),
                          (vf::StereoSampleBuffer (out) + pos).getArrayOfChannels(),
                          m_tick.getArrayOfChannels());
            }
            else
            {
              // Partial copy
              int const amount = numSamples - pos;

              Dsp::copy (out.getNumChannels(),
                          amount,
                          (vf::StereoSampleBuffer (out) + pos).getArrayOfChannels(),
                          m_tick.getArrayOfChannels());

              m_lastSampleIndex = -amount;
              break;
            }
          }
          else
          {
            // Tick overlaps the previous one, skip it
          }
        }
        else
        {
          break;
        }
      }
    }
  }

private:
  void createTick ()
  {
    int const attackMs = 2;
    int const milliSeconds = 6;

    int const attackSamples = static_cast <int>
      ((m_sampleRate * attackMs + 500) / 1000);

    int const numSamples = static_cast <int>
      ((m_sampleRate * milliSeconds + 500) / 1000);

    m_tick.setSize (2, numSamples, false, false, true);

    m_tick.clear ();

    AudioSampleBuffer temp (2, numSamples);

    mixTone (440,  temp);
#if 0
    mixTone (110,  temp);
    mixTone (220,  temp);
    mixTone (880,  temp);
    mixTone (1760, temp);
#endif

    {
      vf::NoiseAudioSource as (true);
      as.prepareToPlay (numSamples, m_sampleRate);

      AudioSourceChannelInfo info;
      info.buffer = &temp;
      info.numSamples = temp.getNumSamples();
      info.startSample = 0;
      as.getNextAudioBlock (info);

      for (int i = 0; i < m_tick.getNumChannels (); ++i)
        m_tick.addFrom (i, 0, temp.getArrayOfChannels()[i], numSamples);
    }

    for (int i = 0; i < m_tick.getNumChannels (); ++i)
    {
      m_tick.applyGainRamp (i, 0, attackSamples, 0, 1.f);
      m_tick.applyGainRamp (i, attackSamples, numSamples - attackSamples, 1.f, 0);
    }
  }

  void mixTone (double frequency, AudioSampleBuffer& temp)
  {
    int const numSamples = temp.getNumSamples ();

    ToneGeneratorAudioSource as;
    as.setAmplitude (1);
    as.setFrequency (frequency);
    as.prepareToPlay (numSamples, m_sampleRate);

    AudioSourceChannelInfo info;
    info.buffer = &temp;
    info.numSamples = numSamples;
    info.startSample = 0;
    as.getNextAudioBlock (info);

    for (int i = 0; i < m_tick.getNumChannels (); ++i)
      m_tick.addFrom (i, 0, temp.getArrayOfChannels()[i], numSamples);
  }

private:
  AudioSampleBuffer m_tick;
  double m_sampleRate;
  double m_tempo;
  double m_phase;
  int m_lastSampleIndex;
};

Hmm… I don’t actually count the ticks. Beats are located at phase = 0, where phase lies in the half-open interval [-.5, .5). In getNextAudioBlock() I loop over integer beat numbers (0, 1, 2, 3, etc.) and calculate the sample index, within the output buffer, where each tick would start. If the whole tick doesn't fit, I keep a state variable (the “leftover”) and on the next call I continue the output.

So, I’m not actually using a timer or a separate thread; what I am doing is measuring time based on samples processed in the audio I/O callback. Given the sample rate and the desired tempo, it is fairly easy to compute the phase of the first sample in each output block.

First we need to know the samples per beat, which can be fractional in the case where source material is time-stretched:

double const samplesPerBeat = audioDevice->getCurrentSampleRate() * 60 / clockTempo;

Given numOutputSamples, which is the number of samples to process in the audio I/O callback, we can advance the phase appropriately:

if (clockPhase < 0)
  clockPhase += 1;

clockPhase = clockPhase + numOutputSamples / samplesPerBeat;
clockPhase = clockPhase - floor (clockPhase);

if (clockPhase >= 0.5)
  clockPhase -= 1;

To use this same technique with a Synthesizer, I would have to compose a midi buffer and place notes at beats, with the correct timestamps…if that’s even possible?

How about a single sample impulse, sent through a resonant 2nd order filter? This will smooth out the transition ever so slightly, and add a bit of pitch to the click, depending on the freq and Q of the filter. The standard 2-pole reson filter would work fine - no need for zeros here.
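A sketch of that idea, using only the standard library. The two-pole "reson" recurrence is y[n] = x[n] + 2·r·cos(w)·y[n-1] - r²·y[n-2]; the frequency, pole radius, and length below are assumed values for illustration, and the output would need scaling down before mixing (its peak is roughly 1/sin(w)).

```cpp
#include <cmath>
#include <vector>

// Feed a single-sample impulse through a classic two-pole "reson" filter.
// w (from freqHz) sets the click's pitch; r (pole radius, just under 1)
// sets how long it rings. Parameter values are illustrative assumptions.
std::vector<float> resonClick (double sampleRate,
                               double freqHz     = 880.0,
                               double r          = 0.995,
                               int    numSamples = 2000)
{
    const double w  = 2.0 * 3.141592653589793 * freqHz / sampleRate;
    const double a1 = 2.0 * r * std::cos (w); // feedback coefficients
    const double a2 = -r * r;

    std::vector<float> out ((size_t) numSamples, 0.0f);
    double y1 = 0.0, y2 = 0.0;               // filter state

    for (int n = 0; n < numSamples; ++n)
    {
        const double x = (n == 0) ? 1.0 : 0.0; // unit impulse input
        const double y = x + a1 * y1 + a2 * y2;
        y2 = y1;
        y1 = y;
        out[(size_t) n] = (float) y;
    }
    return out;
}
```

Raising r lengthens the ring; lowering it toward 0.9 gives a shorter, duller tick.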

Sean Costello

Well I finally broke down and rewrote my metronome to use the synthesiser instead. It works great! Instead of triggering the ticks from a timer, I just produce a MidiBuffer that has noteOn placed at calculated sample numbers:

void getNextAudioBlock (AudioSourceChannelInfo const& bufferToFill)
{
  int const numSamples = bufferToFill.numSamples;

  // the synth always adds its output
  //bufferToFill.clearActiveBufferRegion();

  MidiBuffer midi;

  if (m_active)
  {
    double const samplesPerBeat = m_sampleRate * 60 / m_tempo;

    // Adjust phase so the beat is on or after the beginning of the output
    double beat;
    if (m_phase > 0)
      beat = 1 - m_phase;
    else
      beat = 0 - m_phase;

    // Set notes in midi buffer
    for (;;beat += 1)
    {
      // Calc beat pos
      int pos = static_cast <int> (beat * samplesPerBeat);

      if (pos < numSamples)
      {
        midi.addEvent (MidiMessage::noteOn (1, 84, 1.f), pos);
      }
      else
      {
        break;
      }
    }
  }

  m_synth.renderNextBlock (*bufferToFill.buffer,
                          midi,
                          0,
                          bufferToFill.numSamples);
}

[quote=“TheVinn”]Well I finally broke down and rewrote my metronome to use the synthesiser instead. It works great! Instead of triggering the ticks from a timer, I just produce a MidiBuffer that has noteOn placed at calculated sample numbers:
[/quote]

Nice one! I thought I'd try this out, as I don't like how I'm currently generating my ticks: basically, in my thread's run() I'm using Time::waitForMillisecondCounter to wait a tick, then I increment a tickCounter and call a method which checks whether a full beat has passed (tickCounter % 96 == 0), and then, depending on which beat we're on, plays a midi note so one sound plays on beat 1 and another on the rest (e.g. 2-4). Anyway, I don't think waitForMillisecondCounter is necessarily accurate enough, and I think my metro drifts around slightly, so I wanted to try this out.

I implemented your code above but I'm getting strange results: I either get a lot of ticks (96 per beat?) or no output at all… it also doesn't seem to change if I change m_tempo… is there something else I'm missing? Time to break it down and understand what you're actually doing :)

At the start of every audio device i/o callback, you need to call Metronome::updateClock() with the proper phase. The phase is a number in the half open interval [-.5, .5). When phase==0 it means that the very first sample of the current audio output block is the beginning of a new beat. When phase > 0 it means that the first sample of the current audio block lies just past the beat.

The thing that you are missing is the piece of code in the audio i/o callback that updates the phase based on how many samples will be processed (and the current tempo and sample rate).

Here is a direct copy of the function that I use. Of course, you will need to modify it to suit your implementation. It must be called in the beginning of each audio i/o callback:

  void advanceClock (int numSamples)
  {
    double const samplesPerBeat = 
      m_audioDevice->getCurrentSampleRate() * 60 /
        m_params.masterClockTempo.getValueMixer ();

    jassert (m_clock.phase >= -.5 && m_clock.phase < .5);

    if (m_clock.phase < 0)
      m_clock.phase = m_clock.phase + 1;

    m_clock.phase = m_clock.phase + numSamples / samplesPerBeat;

    if (m_clock.phase >= .5)
      m_clock.phase -= 1;

    jassert (m_clock.phase >= -.5 && m_clock.phase < .5);
  }

m_clock.phase is the value passed into Metronome::updateClock()

numSamples is the number of samples that will be processed in this audio i/o callback.

Also, I use a single voice and I have note stealing turned on, to handle the case where the tick sample is longer than the beat length.