Long-time lurker, first-time poster… I’ve just put together my first JUCE app, which is also my first GUI.
I started from the example project (Win32) and the JUCE demo (audio demo). I’ve thrown together an ‘Audio Monitor’ app that borrows the ‘Settings’ panel and AudioInputWaveformDisplay from the audio demo, plus a ‘fader’ (slider) and a mute button, so the user can monitor/route two channels of audio both visually and audibly. (It’s quite handy, as I have no other easy way to hear the audio feed from my prototype FireWire audio board without opening a DAW.)
It all seems to work fairly well, but I have several questions:
My fader applies a gain value to the samples provided by audioDeviceIOCallback(). Does this qualify as a hack? (I.e., am I supposed to do it this way, without using AudioSampleBuffers or AudioProcessors? I based it on the code in the audio demo, but that seemed like it might be quick-’n’-dirty code.)
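For reference, here’s roughly what my callback does, boiled down to a JUCE-free sketch (the function name, planar buffer layout, and the gain parameter are just how I’ve set things up; the real code lives in my audioDeviceIOCallback() override):

```cpp
// Sketch of the per-block gain I apply inside audioDeviceIOCallback().
// JUCE hands the callback planar float buffers (one pointer per channel);
// this stand-alone version assumes the same layout so the logic can be
// read in isolation.
static void applyFaderGain (const float* const* input,
                            float* const* output,
                            int numChannels,
                            int numSamples,
                            float gain)
{
    for (int ch = 0; ch < numChannels; ++ch)
        for (int i = 0; i < numSamples; ++i)
            output[ch][i] = input[ch][i] * gain;   // scale each sample by the fader value
}
```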
What is the format of the samples that audioDeviceIOCallback() provides? Is it documented anywhere? I gather they’re 32-bit floating-point values between -1 and 1, but I can only infer that from assumptions the demo’s original author made (that’s where his display would overflow its bounds).
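In case it helps anyone confirm or deny: here’s the little range check I’ve been running on the buffers (a stand-alone sketch; the -1..1 full-scale range is my assumption, not anything I’ve found documented):

```cpp
// Scan one channel of (assumed) 32-bit float samples and report the peak
// magnitude, so I can see whether anything strays outside the assumed
// -1..1 full-scale range.
static float peakMagnitude (const float* samples, int numSamples)
{
    float peak = 0.0f;
    for (int i = 0; i < numSamples; ++i)
    {
        const float mag = samples[i] < 0.0f ? -samples[i] : samples[i];
        if (mag > peak)
            peak = mag;   // track the largest absolute sample value seen
    }
    return peak;
}
```

So far nothing I’ve fed it has reported a peak above 1.0, which is consistent with my guess.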
I can route audio to and from my FireWire board and see signals on my (now stereo) display, but while I can hear audio from Windows Media Player, I can’t see it on my display… Any idea why that should be? Is it just WMP being unruly, as usual?
Any pointers on how to make my slider look more like a fader? Are there hooks somewhere for supplying image files in some format and/or for specifying active areas and such?