Getting the System's Default Audio Device

Now that I know that AudioDeviceManager::initialiseWithDefaultDevices(int, int) builds the hardware setup from the input device's settings, regardless of whether the output device is set to something different, I'm sharing the code I use to build a custom AudioDeviceSetup based on the system output device's settings.

Here's the code to set it up:

        void setupDefaultAudioDevice() {
/*
             This works around some unexpected behaviour on OS X, where JUCE builds the
             AudioDeviceManager setup from the default system input's settings. If your
             output device is not your input device, JUCE will try to reclock the output
             device to the input device's settings. This instead initialises the
             AudioDeviceManager from the output device's settings.
*/
            juce::AudioDeviceManager::AudioDeviceSetup defaultDeviceSetup;
#if JUCE_MAC
            int inChan = 1;
            int outChan = 1;

            Float64 sampleRate;
            ChordieApp::MacAudioUtilities::GetDefaultOutputDeviceSampleRate(&sampleRate);
            CFStringRef deviceName = nullptr;
            ChordieApp::MacAudioUtilities::GetDefaultOutputDeviceName( &deviceName );

            CFShow( deviceName );   // debug: print the device name
            // CFStringGetCStringPtr can return nullptr, so fall back to an empty name
            const char* namePtr = CFStringGetCStringPtr( deviceName, kCFStringEncodingMacRoman );
            String name = (namePtr != nullptr) ? String( namePtr ) : String();
            CFRelease( deviceName );    // the property query hands us a +1 reference

            UInt32 bufferSize;
            ChordieApp::MacAudioUtilities::GetDefaultOutputDeviceBufferSize( &bufferSize);

            defaultDeviceSetup.sampleRate = sampleRate;
            defaultDeviceSetup.outputDeviceName = name;
            defaultDeviceSetup.inputDeviceName = name;
            defaultDeviceSetup.bufferSize = bufferSize;

            audioDeviceManager->initialise(inChan, outChan, nullptr, false, name, &defaultDeviceSetup );
#elif JUCE_WINDOWS
            audioDeviceManager->initialiseWithDefaultDevices(1, 1);
#endif
        }

and here is the utility class that does the hard work: 

#ifndef MACAUDIOUTILITIES_H_INCLUDED
#define MACAUDIOUTILITIES_H_INCLUDED

#include <AudioToolbox/AudioToolbox.h>
// Note: the AudioHardwareService* APIs used below were deprecated in OS X 10.11;
// AudioObjectGetPropertyData is the modern equivalent and takes the same arguments.

#include "../JuceLibraryCode/JuceHeader.h"

//==============================================================================

namespace ChordieApp {

    class MacAudioUtilities 
    {
public:
        MacAudioUtilities() {}
        ~MacAudioUtilities() {}

        static OSStatus GetDefaultOutputDeviceSampleRate(Float64 *outSampleRate) {
            OSStatus error;
            AudioDeviceID deviceID = 0;
            AudioObjectPropertyAddress propertyAddress;
            UInt32 propertySize;
            // identify the default system output device
            // (note: the "system" output is the alert-sound device; use
            //  kAudioHardwarePropertyDefaultOutputDevice for the user's main output)
            propertyAddress.mSelector = kAudioHardwarePropertyDefaultSystemOutputDevice;
            propertyAddress.mScope = kAudioObjectPropertyScopeGlobal;
            propertyAddress.mElement = kAudioObjectPropertyElementMaster;
            propertySize = sizeof(AudioDeviceID);
            error = AudioHardwareServiceGetPropertyData( kAudioObjectSystemObject,
                &propertyAddress,
                0,
                nullptr,
                &propertySize,
                &deviceID);
            if( error) return error;    //we couldn't get the default system device
            //
            //query the device's nominal sample rate
            propertyAddress.mSelector = kAudioDevicePropertyNominalSampleRate;
            propertyAddress.mScope = kAudioObjectPropertyScopeGlobal;
            propertyAddress.mElement = kAudioObjectPropertyElementMaster;
            propertySize = sizeof(Float64);
            error = AudioHardwareServiceGetPropertyData(deviceID,
                &propertyAddress,
                0,
                nullptr,
                &propertySize,
                outSampleRate);
            return error;
        }

       static OSStatus GetDefaultOutputDeviceName(CFStringRef *name) {
            OSStatus error;
            AudioDeviceID deviceID = 0;
            AudioObjectPropertyAddress propertyAddress;
            UInt32 propertySize;
            //sets which property to check
            propertyAddress.mSelector = kAudioHardwarePropertyDefaultSystemOutputDevice;
            propertyAddress.mScope = kAudioObjectPropertyScopeGlobal;
            propertyAddress.mElement = kAudioObjectPropertyElementMaster;
            propertySize = sizeof(AudioDeviceID);
            //gets property (system output device)
            error = AudioHardwareServiceGetPropertyData( kAudioObjectSystemObject,
                &propertyAddress,
                0,
                nullptr,
                &propertySize,
                &deviceID);
            if( error) return error;    //we couldn't get the default system device
            //sets which property to check
            propertyAddress.mSelector = kAudioObjectPropertyName;
            propertyAddress.mScope = kAudioObjectPropertyScopeGlobal;
            propertyAddress.mElement = kAudioObjectPropertyElementMaster;
            propertySize = sizeof(CFStringRef);     //the property is a CFStringRef, not a juce::String
            //gets property (name)
            error = AudioHardwareServiceGetPropertyData(deviceID,
                &propertyAddress,
                0,
                nullptr,
                &propertySize,
                name);
            return error;   //on success the caller owns *name and must CFRelease it
        }

       static OSStatus GetDefaultOutputDeviceBufferSize(UInt32 *bufferSize) {
            OSStatus error;
            AudioDeviceID deviceID = 0;
            AudioObjectPropertyAddress propertyAddress;
            UInt32 propertySize;
            //sets which property to check
            propertyAddress.mSelector = kAudioHardwarePropertyDefaultSystemOutputDevice;
            propertyAddress.mScope = kAudioObjectPropertyScopeGlobal;
            propertyAddress.mElement = 0;
            propertySize = sizeof(AudioDeviceID);
            //gets property (system output device)
            error = AudioHardwareServiceGetPropertyData( kAudioObjectSystemObject,
                &propertyAddress,
                0,
                nullptr,
                &propertySize,
                &deviceID);
            if( error) return error;    //we couldn't get the default system device
            //sets which property to check
            propertyAddress.mSelector = kAudioDevicePropertyBufferFrameSize;
            propertyAddress.mScope = kAudioObjectPropertyScopeGlobal;
            propertyAddress.mElement = kAudioObjectPropertyElementMaster;
            propertySize = sizeof(UInt32);  //sizeof(bufferSize) would be the size of the pointer
            //gets property (bufferSize)
            error = AudioHardwareServiceGetPropertyData(deviceID,
                &propertyAddress,
                0,
                nullptr,
                &propertySize,
                bufferSize);
            return error;
        }

private:
        JUCE_DECLARE_NON_COPYABLE_WITH_LEAK_DETECTOR(MacAudioUtilities)
    };

}//end namespace

#endif  // MACAUDIOUTILITIES_H_INCLUDED

 

I got a little lost in the whole exchange here, so I'm asking if either matkatmusic or jules could summarize:

1) does all this boil down to JUCE allowing input and output to be on two different audio devices, which, in turn, could have two different sample-rate settings?

2) did some of this discussion turn into an actual code update in the JUCE codebase?

 

Seems like a good summary; it's not something that affects anyone who doesn't have two separate devices running at different rates.

We've not changed the codebase yet; the suggested workaround wasn't really the right approach. We'd need to think about it more.

CoreAudio on OS X allows you to have separate devices for input and output, as well as have them run at separate sample rates.  

I think Jules and co need to create a CombinedAudioDevice that mimics this. 

 

This is how the situation applies for me. All audio exits my computer through my Apogee Symphony I/O. When I do FaceTime/Google Hangouts/Skype, my input source is either the Built-in Mic or the Built-in Line Input on the Mac.

I know that in Logic you can use separate devices for input and output. I'm not sure if it reclocks either device; it would be worth investigating. For me, I chose the output device as the one my application should grab the sample rate from, as I am more likely to be playing back sounds than capturing sounds with the built-in mic/line input.

 

I believe initialiseWithDefaultDevices() should be changed so that it allows separate sample rates for the input and output devices in the rare situation that they are not the same device. That is what this thread is about.

Jules, did you read this post:  http://www.juce.com/comment/319505#comment-319505 

I found all the parts in your code that cause this unexpected behavior, so it should be pretty easy for you guys to create a fix now.  

Yeah, saw it. Will look asap.

I don't think any of this actually is unexpected behaviour.

If you were to ask for a default audio setup with no input channels, then it'll only use output devices, so I'd claim it should work the way you expect it to.

But if you ask for a setup with both input and output channels, then yes, it'll deliberately try to choose the input device's sample rate, because if you're both recording and playing back simultaneously, it's better to force the output to change its rate to match the input rate, so that there's less chance that the input device will have to resample and degrade the quality of its data.

Not sure where you were going with all that rambling CoreAudio utility code, it seems like a lot of work compared to just

deviceManager.initialiseWithDefaultDevices (0, 2);
double rate = deviceManager.getCurrentAudioDevice()->getCurrentSampleRate();

I'll experiment with that. Perhaps you could add to the documentation what happens when you specify 0 for either parameter, as it's really quite under-documented; hence, "unexpected behavior".

http://www.juce.com/doc/classAudioDeviceManager#aed646db44c76466d8d0e288d56e50691