Getting the System's Default Audio Device


#1

What's the correct method for getting the system's default audio device? I have a small application that uses the MidiKeyboardComponent. Right now, if I run the app alongside my DAW (Logic X), I get clicks and pops because audioDeviceManager.initialiseWithDefaultDevices(1,1); doesn't know which device or sample rate to use.

 

When my app loads, I would like to:

  • Query the system and find out what device is being used as the default device (Apogee Symphony I/O)
  • Copy whatever settings that device is currently using (sample rate, buffer size if any, default input, etc.)
  • Initialize the audioDeviceManager with these settings so that I can run my app alongside Logic X without getting clicks and pops.

I'm surprised there isn't an "AudioDeviceManager::GetSystemDevice()" method for this exact purpose.
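
Ideally I'd like to be able to write something along these lines (just a sketch - GetSystemDevice() is the hypothetical method I'm wishing for, not a real JUCE call):

//hypothetical - no such method exists today; this is the shape of API I'd expect
juce::AudioDeviceManager::AudioDeviceSetup setup = juce::AudioDeviceManager::GetSystemDevice();

//then the manager could be opened with exactly the settings the hardware is already using
audioDeviceManager.initialise (1, 1, nullptr, false, setup.outputDeviceName, &setup);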

This post from 2006 showed a fix, but I'm not sure whether it has been added to the codebase yet:

http://www.juce.com/forum/topic/how-get-default-output-device


#2

It should correctly pick defaults and work.. I don't understand why you're blaming clicks and pops on the fact that a DAW is also running - audio drivers should be able to play cleanly regardless of what other apps are doing (?)


#3

For whatever reason, when I run my app, it tries to change the sample rate on my interface, which Logic is controlling. This is the cause of the clicks and pops.

I'm using audioDeviceManager.initialiseWithDefaultDevices(1,1);

I'm guessing I need to supply more information for it to know how to configure the audio device.

Can you elaborate on how getting the default device works?    

 

I'll see if I can run my app first, and then load up Logic and see if I still get the clicks/pops. I don't get clicks and pops when I don't run my app. I'm almost 100% sure it's a sample rate issue. My app (probably) defaults to 44.1kHz and my Logic sessions are at 48kHz.
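
For what it's worth, here's how I'm checking which rate my app actually ends up with (just a debugging sketch, assuming the manager has already been initialised):

//print the rate the device was actually opened at, to compare against
//what Logic and the interface's control panel report
if (auto* device = audioDeviceManager.getCurrentAudioDevice())
    DBG ("Current device: " << device->getName()
          << ", sample rate: " << device->getCurrentSampleRate());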


#4

Can you elaborate on how getting the default device works?    

It's different for all audio drivers, but you can step into the code and see exactly what happens if you need more detail.

The thing is, if there are pops and crackles because your audio driver can't handle two apps running at different rates, then that's a bug or problem in the driver. It's not the responsibility of an app to somehow change its sample rate to keep the driver happy - if the driver says it can do 44.1 and then fails to work when the app chooses 44.1, that's not the app's fault!


#5

No audio hardware can run at multiple sample rates at once, so this is not a problem specific to my hardware. The problem is that your AudioDeviceManager is trying to change the device settings while another application is already controlling them.

Can you explain how to get the current system device, get its settings, set up the AudioDeviceManager with those settings, and then initialize the AudioDeviceManager?  

It's not the responsibility of an app to somehow change its sample rate to keep the driver happy

Perhaps not, but the app should default to reading what the driver's sample rate is currently set to, and set itself up that way. Your AudioDeviceManager class doesn't do that, as far as I can tell. It just grabs the driver and tells it to use ___ sample rate and ___ buffer size instead of checking the current driver settings and using those.

If you have an external audio interface like an Apogee Duet or MOTU box, turn it on, start your DAW (GarageBand, Logic, Pro Tools, etc.), then run your demo app. You'll see it try to take over your interface and set whatever sample rate your AudioDeviceManager class is programmed to default to, and then you'll hear the clicks and pops, because each program tries to set the driver's sample rate whenever it becomes the application in focus.


#6

Ok, here's a screenshot showing exactly what I'm talking about. This is the debugger in Xcode. You'll see the Logic Project Settings window, where the sample rate for the project is 48,000Hz. I also have Apogee's control panel open, which shows the Symphony I/O's sample rate. Then you'll see the debugger in Xcode showing audioDeviceManager->sampleRate currently set to 44100, even though audioDeviceManager->currentSetup->outputDeviceName is the Symphony.

 

 

So, please tell me how to set the audioDeviceManager->sampleRate and audioDeviceManager->bufferSize to whatever the CURRENT system device is set to.  

 

As far as I can tell, what I need to do is use the AudioToolbox framework to get the current system device's settings, store them in a juce::AudioDeviceManager::AudioDeviceSetup object, and then use that object in AudioDeviceManager::initialise(). Correct?


#7

No audio hardware can run at multiple sample rates at once, so this is not a problem specific to my hardware. The problem is that your AudioDeviceManager is trying to change the device settings while another application is already controlling them.

Obviously the hardware runs at a single rate, but a driver can support multiple apps at different rates by samplerate-converting some of them to match the hardware rate. 

Many (most?) drivers do this very well - e.g. if you use the Apple built-in soundcard, it'll happily run a mixture of apps at different rates.

If a driver doesn't support multiple rates, then it's a mistake for it to tell the app that it can do so. If the first audio app chooses e.g. 48000 and the hardware is set to that rate, then the driver should tell subsequent apps that 48000 is the only rate it can use.

My code already checks the device's list of possible rates before deciding what to use as a default - perhaps it could be tweaked to prefer the value of kAudioDevicePropertyNominalSampleRate if there's a choice. If you want to experiment with that kind of thing and let me know whether it helps with your particular setup, it's something I'd be open to looking at.
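
For reference, reading that property directly is only a few lines of Core Audio - a sketch using AudioObjectGetPropertyData, assuming you already have the AudioDeviceID you care about:

#include <CoreAudio/CoreAudio.h>

//returns the device's current nominal sample rate, or 0 on failure
static Float64 getNominalSampleRate (AudioDeviceID deviceID)
{
    AudioObjectPropertyAddress addr = { kAudioDevicePropertyNominalSampleRate,
                                        kAudioObjectPropertyScopeGlobal,
                                        kAudioObjectPropertyElementMaster };
    Float64 rate = 0;
    UInt32 size = sizeof (rate);

    if (AudioObjectGetPropertyData (deviceID, &addr, 0, nullptr, &size, &rate) != noErr)
        return 0;

    return rate;
}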


#8

Many (most?) drivers do this very well - e.g. if you use the Apple built-in soundcard, it'll happily run a mixture of apps at different rates.

It's not the driver doing the conversion - it's the application itself doing it in real time via AudioConverterServices from the AudioToolbox framework.

 

Yes, your code defaults to the first available sample rate in the list returned by the driver. It would be awesome if it defaulted to the currently selected sample rate.

 

Here is the solution I arrived at, borrowing a lot from Learning Core Audio's section on getting hardware sample rates.  

#include <CoreAudio/CoreAudio.h>
#include <AudioToolbox/AudioToolbox.h>   //for AudioHardwareServiceGetPropertyData

//NOTE: despite the "Input" in the names, these helpers query the *default system output device*

OSStatus GetDefaultInputDeviceSampleRate(Float64 *outSampleRate) {
    OSStatus error;
    AudioDeviceID deviceID = 0;
    AudioObjectPropertyAddress propertyAddress;
    UInt32 propertySize;

    //sets which property to check (default system output device)
    propertyAddress.mSelector = kAudioHardwarePropertyDefaultSystemOutputDevice;
    propertyAddress.mScope = kAudioObjectPropertyScopeGlobal;
    propertyAddress.mElement = 0;
    propertySize = sizeof(AudioDeviceID);

    //gets property (system output device)
    error = AudioHardwareServiceGetPropertyData( kAudioObjectSystemObject,
                                                 &propertyAddress,
                                                 0,
                                                 nullptr,
                                                 &propertySize,
                                                 &deviceID);
    if (error) return error;    //we couldn't get the default system device

    //sets which property to check (nominal sample rate)
    propertyAddress.mSelector = kAudioDevicePropertyNominalSampleRate;
    propertyAddress.mScope = kAudioObjectPropertyScopeGlobal;
    propertyAddress.mElement = 0;
    propertySize = sizeof(Float64);

    //gets property (nominal sample rate)
    error = AudioHardwareServiceGetPropertyData(deviceID,
                                                &propertyAddress,
                                                0,
                                                nullptr,
                                                &propertySize,
                                                outSampleRate);
    return error;
}

OSStatus GetDefaultInputDeviceName(CFStringRef *name) {
    OSStatus error;
    AudioDeviceID deviceID = 0;
    AudioObjectPropertyAddress propertyAddress;
    UInt32 propertySize;

    //sets which property to check
    propertyAddress.mSelector = kAudioHardwarePropertyDefaultSystemOutputDevice;
    propertyAddress.mScope = kAudioObjectPropertyScopeGlobal;
    propertyAddress.mElement = 0;
    propertySize = sizeof(AudioDeviceID);

    //gets property (system output device)
    error = AudioHardwareServiceGetPropertyData( kAudioObjectSystemObject,
                                                 &propertyAddress,
                                                 0,
                                                 nullptr,
                                                 &propertySize,
                                                 &deviceID);
    if (error) return error;    //we couldn't get the default system device

    //sets which property to check
    propertyAddress.mSelector = kAudioObjectPropertyName;
    propertyAddress.mScope = kAudioObjectPropertyScopeGlobal;
    propertyAddress.mElement = 0;
    propertySize = sizeof(CFStringRef);    //the property is a CFStringRef, not a juce::String

    //gets property (name) - the returned CFStringRef must be CFRelease'd by the caller
    error = AudioHardwareServiceGetPropertyData(deviceID,
                                                &propertyAddress,
                                                0,
                                                nullptr,
                                                &propertySize,
                                                name);
    return error;
}

OSStatus GetDefaultInputDeviceBufferSize(UInt32 *bufferSize) {
    OSStatus error;
    AudioDeviceID deviceID = 0;
    AudioObjectPropertyAddress propertyAddress;
    UInt32 propertySize;

    //sets which property to check
    propertyAddress.mSelector = kAudioHardwarePropertyDefaultSystemOutputDevice;
    propertyAddress.mScope = kAudioObjectPropertyScopeGlobal;
    propertyAddress.mElement = 0;
    propertySize = sizeof(AudioDeviceID);

    //gets property (system output device)
    error = AudioHardwareServiceGetPropertyData( kAudioObjectSystemObject,
                                                 &propertyAddress,
                                                 0,
                                                 nullptr,
                                                 &propertySize,
                                                 &deviceID);
    if (error) return error;    //we couldn't get the default system device

    //sets which property to check
    propertyAddress.mSelector = kAudioDevicePropertyBufferFrameSize;
    propertyAddress.mScope = kAudioObjectPropertyScopeGlobal;
    propertyAddress.mElement = 0;
    propertySize = sizeof(UInt32);    //size of the value itself, not of the pointer

    //gets property (buffer size in frames)
    error = AudioHardwareServiceGetPropertyData(deviceID,
                                                &propertyAddress,
                                                0,
                                                nullptr,
                                                &propertySize,
                                                bufferSize);
    return error;
}

//in my application window constructor
juce::AudioDeviceManager::AudioDeviceSetup defaultDeviceSetup;
int inChan = 1;
int outChan = 1;

//get current system device sample rate
Float64 sampleRate;
GetDefaultInputDeviceSampleRate(&sampleRate);

//get current system device name
CFStringRef deviceName;
GetDefaultInputDeviceName( &deviceName);
CFShow( deviceName);
String name = String::fromCFString (deviceName);    //safer than CFStringGetCStringPtr, which may return nullptr
CFRelease (deviceName);    //we own the CFStringRef returned by the property query

//get current system device buffer size
UInt32 bufferSize;
GetDefaultInputDeviceBufferSize( &bufferSize);

//update our device setup with these gathered values
defaultDeviceSetup.sampleRate = sampleRate;
defaultDeviceSetup.outputDeviceName = name;
defaultDeviceSetup.inputDeviceName = name;
defaultDeviceSetup.bufferSize = (int) bufferSize;

//initialise our audioDeviceManager with this default system device setup
audioDeviceManager.initialise(inChan, outChan, nullptr, false, name, &defaultDeviceSetup );
//audioDeviceManager.initialiseWithDefaultDevices(1, 1); //old way of doing it

This problem is solved.  Hopefully someone else will find this helpful. 


#9

It's not the driver doing the conversion - it's the application itself doing it in real time via AudioConverterServices from the AudioToolbox framework.

Ok, but my point is that you don't actually have to implement this explicitly - you can just open the device at whatever rate you need and the OS/driver/toolkit/magic-pixies deal with it.

But I did see a place where I could easily use the device's current rate when opening with the default rate - I'd be interested to know if this helps with your driver.


#10

I'm not writing a driver. I'm just writing a little piano-roll viewer app that I can use alongside Logic for some video tutorials, and your AudioDeviceManager class was trying to set the sample rate of my audio interface while Logic was controlling it. My viewer app wasn't even using audio - it's strictly MIDI - but you haven't written a MIDI-only device manager class; you rolled it into a combined Audio+MIDI class.

 

Applications like QuickTime or iTunes automatically convert whatever files they're playing to match the sample rate of the device being used for playback. They don't tell the device to switch sample rates to match the file they're playing. I think that's where the confusion comes in.

 

What I was trying to convey was that when you open a device with your AudioDeviceManager class, it should open it at whatever sample rate it is currently set to. It shouldn't try to change the sample rate unless the user or programmer specifies it. THAT should be the default behavior. Right now, your class's default behavior is to open the device and set the sample rate to the first (usually the lowest) sample rate returned from the device's driver. Understand?


#11

I reported a similar issue when using the JUCE host and two commercial JUCE-based apps (Addictive Drums 2 standalone and Universal Apollo Twin).

https://forum.juce.com/t/sampleratehaschanged-on-osx

Rail


#12

Hi,

I recently stumbled upon this thread as I started to explore Juce a bit out of personal interest.

I am an engineer at Apogee and worked on the products mentioned, so I can pitch in here.

The first thing to realize is that, yes, hardware can only run at one rate at a time - listing multiple rates does not mean it can run at more than one at the same time.

In fact, any audio device only has one current nominal rate.

In any app I can think of, it's a "select one out of N" dropdown, not a multiple selection.

Secondly, it's important to realize that Logic will enforce its rate (and so does GarageBand). You can see this with built-in audio or anything else:

open Logic with a song at some rate

open Audio MIDI Setup and change the rate of the hardware to something else

you'll see it jump back (it is set back by Logic)

In fact, you could make GarageBand and Logic fight each other: open both using the same audio device at different rates and they will ping-pong, each trying to set the rate whenever it gets notified that the rate has changed.

In short, to avoid this you either have to:

- get the device's rate before using it and run the app at that rate

- use Core Audio's (or your own) sample rate conversion

And sure, there are ways to have Core Audio instantiate rate converters in the app itself (not in drivers). Some apps (like Logic) deliberately do not do this, to maintain bit-perfect, non-sample-rate-converted quality. (For those using Core Audio's sample rate conversion, I suggest you dig into the settings, which give you various degrees of quality vs CPU load, by the way.)
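
To illustrate that last point, here's roughly what the quality setting looks like in code - just a sketch of the Core Audio calls involved, with made-up mono float formats, nothing specific to any particular hardware or to JUCE:

#include <AudioToolbox/AudioToolbox.h>

//sketch: create a 44.1kHz -> 48kHz converter and pick a quality level explicitly
static void demoSampleRateConverterQuality()
{
    AudioStreamBasicDescription inFormat = {};
    inFormat.mSampleRate       = 44100.0;
    inFormat.mFormatID         = kAudioFormatLinearPCM;
    inFormat.mFormatFlags      = kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked;
    inFormat.mChannelsPerFrame = 1;
    inFormat.mBitsPerChannel   = 32;
    inFormat.mBytesPerFrame    = 4;
    inFormat.mFramesPerPacket  = 1;
    inFormat.mBytesPerPacket   = 4;

    AudioStreamBasicDescription outFormat = inFormat;
    outFormat.mSampleRate = 48000.0;

    AudioConverterRef converter = nullptr;

    if (AudioConverterNew (&inFormat, &outFormat, &converter) == noErr)
    {
        //higher quality costs more CPU; lower quality is cheaper
        UInt32 quality = kAudioConverterQuality_Max;
        AudioConverterSetProperty (converter, kAudioConverterSampleRateConverterQuality,
                                   sizeof (quality), &quality);

        //...feed audio through AudioConverterFillComplexBuffer here...

        AudioConverterDispose (converter);
    }
}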


#13

Jules, it would be great if you updated the AudioDeviceManager class to reflect the stuff discussed in this thread, so your class doesn't try to hijack audio devices the way it currently does. When you instantiate the AudioDeviceManager, it should grab the settings currently set by the driver and configure itself to those settings, as opposed to what it does now, which is to force the device to use the first available setting in the list of settings provided by the driver.


#14

I'm confused..? That's already exactly what AudioDeviceManager::chooseBestSampleRate() does.

If you call AudioDeviceManager with a 0 sample rate then it certainly will choose the rate that the device is already using.


#15

The problem is that there is no AudioDeviceManager::getSystemDeviceAndSystemSettings(). If there were, it would be very painless to do this:

AudioDeviceManager::AudioDeviceSetup defaultDeviceSetup = AudioDeviceManager::getSystemDeviceAndSystemSettings();
SharedResourcePointer<AudioDeviceManager> audioDeviceManager;
audioDeviceManager->initialise( 1, 1, nullptr, false, defaultDeviceSetup.inputDeviceName, &defaultDeviceSetup );

instead, I have to do what is explained here:

http://www.juce.com/comment/311053#comment-311053  (reply #8)

 

 

There should be an easy means of getting the system device in situations where your application doesn't need its own AudioDeviceManager object but just needs to talk to the MIDI system. Currently there isn't, and the initialiseWithDefaultDevices() method forces the default system device to change its settings to the lowest values.

 

double AudioDeviceManager::chooseBestSampleRate (double rate) const
{
    jassert (currentAudioDevice != nullptr);

    const Array<double> rates (currentAudioDevice->getAvailableSampleRates());

    if (rate > 0 && rates.contains (rate))
        return rate;

    ///////// Right here, how is 'currentAudioDevice' defined when you instantiate AudioDeviceManager?
    rate = currentAudioDevice->getCurrentSampleRate();

    if (rate > 0 && rates.contains (rate))
        return rate;

    double lowestAbove44 = 0.0;

    for (int i = rates.size(); --i >= 0;)
    {
        const double sr = rates[i];

        if (sr >= 44100.0 && (lowestAbove44 < 1.0 || sr < lowestAbove44))
            lowestAbove44 = sr;
    }

    if (lowestAbove44 > 0.0)
        return lowestAbove44;

    return rates[0];
}

 

 


#16

You don't need to do that - just pass a nullptr instead of giving it a setup object, and it'll use the default one, at its current sample rate.

Or give it a setup object that has a 0 sample rate.

Obviously if you give it a setup containing a non-zero sample rate then it'll try to use the rate you're asking for. I really don't understand the problem here.
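
Something like this, in other words - a minimal sketch, assuming an AudioDeviceManager member called audioDeviceManager and mono in/out as in your earlier snippets:

//option 1: no setup object at all - the manager picks the default device
//and uses the rate it's already running at
audioDeviceManager.initialise (1, 1, nullptr, false);

//option 2: pass a setup, but leave sampleRate (and bufferSize) at 0 so the
//manager falls back to the device's current values instead of forcing new ones
juce::AudioDeviceManager::AudioDeviceSetup setup;
setup.sampleRate = 0;   //0 == use the device's current rate
setup.bufferSize = 0;   //0 == use the device's current/default buffer size
audioDeviceManager.initialise (1, 1, nullptr, false, juce::String(), &setup);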


#17

What kind of audio interface do you have connected to the computer you develop Juce on? 

 

Try this. Run your DAW of choice and set the sample rate in the DAW to something other than the first entry in the list of available sample rates (which is usually 44.1kHz). Next, open up the Juce Demo. Keep an eye on your audio interface's control panel where the sample rate is displayed. You'll see the Juce Demo try to change the sample rate to 44.1kHz, even though your DAW originally set it to something other than 44.1kHz (like 48kHz or 96kHz). If you switch back to your DAW, it'll change back to whatever the DAW was set to.


#18

Put a breakpoint at the start of AudioDeviceManager::chooseBestSampleRate(), run the juce demo, and go to the audio settings demo page.

It'll stop in that function, and if you step forward, you'll see that it asks the device for its current sample rate, and uses that rate as its default. It does NOT force it to the first value in the list of sample rates unless you use the combo box to explicitly do so.

 


#19

Here's a 5-minute video to prove otherwise. It's a screencast, so watch it full screen at 1080p HD to catch everything.

https://youtu.be/oPeH68eDy6M

It seems that your code steals the sample rate from the INPUT device and forces all devices (input and output) to use this sample rate, even if they're set to something else. In this video (you'll see it at the very end), my input device was the Built-in Input on my iMac, and the sample rate for that was set to 96kHz.

I don't think your AudioDeviceManager class takes into account the case where the default system audio setup consists of separate devices for the input and for the output.


#20

I followed the code into AudioIODevice* createDevice()

line 1887: String combinedName (outputDeviceName.isEmpty() ? inputDeviceName : outputDeviceName);

I believe this is where the confusion arises. You're pulling the sample rate for the new audio setup from the input device, but you're naming the device based on the outputDeviceName.

For whatever reason, lines 1881 and 1882 result in the same ID, even though on my system the input device is at index 1 and the output device is at index 2:

 

http://s10.postimg.org/iqcda0qxx/Screen_Shot_2016_01_29_at_4_47_51_PM.jpg

 

I don't believe that my current configuration (IN: Built-in Input, OUT: 3rd party Audio Interface) should result in line 1890 being called.  Since they're two separate pieces of hardware, I believe that line 1903 should be called:

ScopedPointer<AudioIODeviceCombiner> combo( new AudioIODeviceCombiner( combinedName ) );

Following it further into juce_mac_CoreAudio.cpp, specifically the constructor for CoreAudioIODevice:

//juce_mac_CoreAudio.cpp, line 890:
if (outputDeviceId == 0 || outputDeviceId == inputDeviceId)
{
    jassert (inputDeviceId != 0);
    device = new CoreAudioInternal (*this, inputDeviceId);
}

Recall that AudioIODevice* createDevice() determined that the outputDeviceID and inputDeviceID are identical on lines 1881 and 1882, so this is why it pulls the sample rate from the input device (iMac Built-in Input).

 

I then continued executing the code and went into the dropdown to change devices so that both the input and output were set to the Symphony I/O, and it resulted in the same inputDeviceID and outputDeviceID as when the Built-in Input was set as the input:

 

 

So, I believe I have discovered some Unexpected Behavior, Jules.