I’m a new user of JUCE and have just been following the tutorials to familiarize myself with the program flow, classes, and functions.
I just followed the tutorial on creating a sine-wave generator. At the moment the sample rate is handled within the prepareToPlay() function, which takes sampleRate as one of its arguments. However, prepareToPlay() isn’t called anywhere within Main.cpp or MainComponent.cpp, only defined. Where is this function being called? Where is the value of sampleRate being set and passed as an argument?
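For context: prepareToPlay() is a callback, so you never call it yourself. When the audio device is opened and started, the framework calls prepareToPlay() on your component, passing in the device's actual sample rate, before any audio processing happens. Here is a stripped-down model of that flow in plain C++ (the class names here are hypothetical stand-ins, not real JUCE classes):

```cpp
#include <cassert>
#include <functional>

// Hypothetical stand-in for the role JUCE's AudioDeviceManager plays:
// it owns the device settings and drives the callback lifecycle.
struct FakeDeviceManager {
    double deviceSampleRate = 44100.0; // chosen when the device is opened

    // "Starting audio" is where the rate becomes known; the manager then
    // hands it to your processing code by calling prepareToPlay() for you.
    void start(const std::function<void(int, double)>& prepareToPlay) {
        const int samplesPerBlock = 512;
        prepareToPlay(samplesPerBlock, deviceSampleRate);
        // ...after this it would repeatedly invoke the audio callback.
    }
};

// Mirrors the tutorial's component: it only *receives* the sample rate.
struct SineGenerator {
    double currentSampleRate = 0.0;

    void prepareToPlay(int /*samplesPerBlockExpected*/, double sampleRate) {
        currentSampleRate = sampleRate; // store the rate the device reports
    }
};

// Wires the two together, roughly as the framework does internally.
double runDemo() {
    SineGenerator gen;
    FakeDeviceManager manager;
    manager.start([&gen](int block, double rate) { gen.prepareToPlay(block, rate); });
    return gen.currentSampleRate;
}
```

So the value flows from the device, through the framework, into your override; your code in MainComponent.cpp only ever defines the handler.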
If I wanted to alter the sample rate, I could just assign a value to currentSampleRate in the MainComponent() constructor rather than in prepareToPlay(). However, since this is not the global sample rate, it would only affect the values going into the buffer, and could cause errors if other parts of the program are using a different sample rate.
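To make that risk concrete: the oscillator's phase increment is derived from the sample rate, so if currentSampleRate is hard-coded to something different from the rate the device actually runs at, every frequency comes out scaled by actualRate / assumedRate. A small plain-C++ illustration (the numbers are just examples):

```cpp
#include <cassert>
#include <cmath>

const double kTwoPi = 6.283185307179586;

// Phase increment per sample for a target frequency at a given sample rate,
// as in the sine-synth tutorial: delta = 2*pi * frequency / sampleRate.
double phaseIncrement(double frequencyHz, double sampleRate) {
    return kTwoPi * frequencyHz / sampleRate;
}

// If the generator assumes one rate but the device actually runs at another,
// the audible frequency is scaled by actualRate / assumedRate.
double audibleFrequency(double intendedHz, double assumedRate, double actualRate) {
    // The oscillator advances by phaseIncrement(intendedHz, assumedRate) each
    // sample; played back at actualRate, that corresponds to this frequency:
    return phaseIncrement(intendedHz, assumedRate) * actualRate / kTwoPi;
}
```

For example, a 440 Hz tone generated assuming 44.1 kHz but played at 48 kHz would sound at roughly 479 Hz, which is why the rate must come from the device via prepareToPlay().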
I’ve been looking through the modules as well to figure out where the global sample rate is declared, and I’m a bit lost as to how the flow of code actually takes place.
Could someone please shed some light on this?
Is this a Plug-In or Standalone app?
If it’s a plug-in, it gets the sample rate from the host. If it’s a standalone app, you’d normally have an Audio Settings dialog that lets the user change the session sample rate; if you use the JUCE AudioDeviceSelectorComponent, it will trigger prepareToPlay() for you.
That makes sense. I’m developing a Standalone Audio Application. I followed this tutorial https://www.juce.com/doc/tutorial_sine_synth
I’m going to give setting up an AudioDeviceSelectorComponent a go; if I get stuck I’ll be back.
AudioDeviceSelectorComponent is a nice frontend for the AudioDeviceManager.
You can also work directly on the AudioIODevice. The sample rate is chosen when you call open() on the device.
You can query the current value using AudioIODevice::getCurrentSampleRate(), because it can change during the device's lifetime, e.g. if you change something in OS X's Audio MIDI Setup or via the settings of your device / device driver.
When this happens, your algorithms are turned into Mickey Mouse (everything plays back at the wrong pitch). The AudioDeviceManager can be used to listen for an asynchronous signal when the sample rate (or any other setting) changes, by adding a ChangeListener.
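A minimal model of that listener pattern, in plain C++ rather than real JUCE code (in JUCE you would inherit juce::ChangeListener and register it via AudioDeviceManager::addChangeListener(); the class names below are hypothetical stand-ins):

```cpp
#include <cassert>
#include <functional>
#include <utility>
#include <vector>

// Hypothetical stand-in for AudioDeviceManager's change broadcasting.
struct RateBroadcaster {
    double sampleRate = 44100.0;
    std::vector<std::function<void(double)>> listeners;

    // Analogous to AudioDeviceManager::addChangeListener().
    void addChangeListener(std::function<void(double)> listener) {
        listeners.push_back(std::move(listener));
    }

    // E.g. the user switches the rate in the OS audio settings.
    void setSampleRate(double newRate) {
        sampleRate = newRate;
        for (auto& l : listeners)
            l(newRate); // broadcast to all listeners, like sendChangeMessage()
    }
};

struct Oscillator {
    double currentSampleRate = 44100.0;

    // Analogous to ChangeListener::changeListenerCallback(): recompute
    // anything rate-dependent so the pitch stays correct.
    void changeListenerCallback(double newRate) {
        currentSampleRate = newRate;
    }
};
```

The point is that your DSP code reacts to the change notification instead of assuming the rate it saw at startup is fixed.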
Yes!! That’s actually what I was looking for. AudioIODevice::open() is similar to ofSoundStreamSetup() in openFrameworks: they take the same arguments and set up the basic I/O requirements.
In the long term, though, I suppose the way to go about it would be to work with the AudioDeviceManager and have an AudioDeviceSelectorComponent that gives the user control over those settings, like you would generally find in standalone apps.
I’ve been going through the API and module code and getting an idea of what’s going on, but I have a few questions about implementing these classes. I still haven’t really wrapped my head around the program flow. I’ve just returned home from a trip, so now I have access to the Getting Started with JUCE book. I’ll work through the fundamentals there and start a new topic if the book doesn’t answer my questions.