Is there a way to do audio output only? ("The input and output devices don't share a common sample rate!")

I am trying to do audio out only, so I am asking the AudioDeviceManager for 0 input channels and 2 output channels, and not supplying an input device name. But when I select an output whose sample rate differs from that of the default input, I get the "The input and output devices don't share a common sample rate!" error. Is there any way around this?

        AudioDeviceManager::AudioDeviceSetup audioDeviceSetup;
        audioDeviceSetup.outputDeviceName = deviceName;
        audioDeviceSetup.useDefaultInputChannels = false;

        // Request 0 input channels, 2 output channels; returns an error String.
        audioError = audioDeviceManager.initialise (0, 2, nullptr, true, {}, &audioDeviceSetup);

I’ve been getting a similar problem. It’s because I have an external webcam with a built-in microphone, and whenever this webcam is plugged in, my app cannot initialise the audio. Unfortunately, the JUCE code for audio initialisation is just not very robust and fails under very common circumstances such as ours. I wish they would invest a bit of resources into getting this right, because it is a very difficult area for us developers to address on our own. It’s a massive problem.


@cpr This commit should ensure that the default device isn’t opened if no channels are requested - can you see if it fixes the issue for you?

@DrTarantism Can you be more specific about the problems that you’re having? We can’t invest resources into problems that we’re not aware of.

The problem is that there is a bit of a logical flaw in the assumptions that went into the initialisation methods. The webcam in my laptop has very low quality, so I bought an HD webcam that is USB based. It has a built-in microphone, but obviously does not have any speakers. The operating system automatically defaulted to using the webcam as an input.

And then when I started my app, I was shocked to find that the start-up process had bailed on me because JUCE realised that there were no matching sampling rates between the microphone and speakers. So it never initialised the audio at all.

I wish that JUCE would have its own contingency in times like these, and just go with the one device if the two can’t be used together. But instead, JUCE just gives up and returns a text message that I cannot easily write code to respond to automatically. It becomes an issue where I can’t easily create contingencies for my end users.

I guess that’s more of a design choice - the AudioDeviceManager will return an error if the specified devices can’t be opened for any reason, instead of trying to fall back to some sort of compromise as you suggested. Although it’s not ideal for your use case, I think it’s clearer from a user perspective and reduces complexity and ambiguity, as there are any number of things it could do if it fails to open the explicitly requested device setup.
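You can at least detect the failure programmatically - initialise() returns the error as a String, which is empty on success. A minimal sketch:

    auto audioError = audioDeviceManager.initialise (0, 2, nullptr, true, {}, &audioDeviceSetup);

    if (audioError.isNotEmpty())
    {
        // The requested setup couldn't be opened - decide on your own
        // fallback here rather than relying on JUCE to pick one.
    }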

It’s certainly possible for you to write your own fallback code though. Instead of using the initialise() or initialiseWithDefaultDevices() methods you can iterate over the available audio device objects yourself, query their properties like sample rate, buffer size, etc., and then decide what your app should do to minimise user issues. Take a look at the code sample here for an example of how to do this.
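For example, something along these lines - just a sketch using a hypothetical openOutputOnlyDevice() helper, assuming stereo output and taking the first sample rate the device reports, not a drop-in solution:

    // Iterate the available device types and open an output-only device,
    // ignoring the input side entirely. The caller owns the returned device.
    juce::AudioIODevice* openOutputOnlyDevice (juce::AudioDeviceManager& manager,
                                               const juce::String& preferredName)
    {
        for (auto* type : manager.getAvailableDeviceTypes())
        {
            type->scanForDevices();

            for (const auto& name : type->getDeviceNames (false)) // false == output device names
            {
                if (preferredName.isNotEmpty() && name != preferredName)
                    continue;

                // An empty input device name creates an output-only device.
                std::unique_ptr<juce::AudioIODevice> device (type->createDevice (name, {}));

                if (device == nullptr)
                    continue;

                auto rates = device->getAvailableSampleRates();

                if (rates.isEmpty())
                    continue;

                juce::BigInteger outputChannels;
                outputChannels.setRange (0, 2, true); // stereo out, no inputs

                if (device->open ({}, outputChannels, rates[0],
                                  device->getDefaultBufferSize()).isEmpty())
                    return device.release();
            }
        }

        return nullptr; // nothing usable found
    }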

@ed95 That works like a champ! Thank you so much!


This completely freaked me out after upgrading to 5.4.3 last week. Suddenly my standalone synth plugin was turning on the webcam LED, and I thought my machine had been pwned. Eventually I realised that the LED switching on and off was directly correlated with starting/stopping the app, and rolling back JUCE to 5.4.1 stopped the issue completely, so I could finally take off my tin-foil hat safe in the knowledge that GCHQ wasn’t watching me code :joy:

Good to know that it’s fixable on 5.4.3

Hi,

I have a problem in my iOS app, where I’m setting 0 input channels in the device manager init, yet I’m asked every time to accept that the app is trying to access the microphone.

Ed, I see you have just made a commit for that (Don’t use default input/output device names when setting up an AudioDeviceManager if no channels have been requested), but the problem remains.

This is because useDefaultInputChannels remains initialised to true. Do I need to create a specific AudioDeviceSetup, or can I just expect that if I pass 0 input channels, it should allocate 0 input channels? For now I’m working around it with an explicit setup, sketched below.
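Something like this (a sketch of my current workaround - I’m not sure it’s the intended approach):

    AudioDeviceManager::AudioDeviceSetup setup;
    setup.useDefaultInputChannels = false; // don't fall back to the default input
    setup.inputChannels.clear();           // request no input channels at all

    auto error = audioDeviceManager.initialise (0, 2, nullptr, true, {}, &setup);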

Thanks.

Mariano

On iOS it should only request access to the mic if you’ve set the Microphone Access option in the Projucer exporter to be enabled (this is what adds the NSMicrophoneUsageDescription entry to the app’s Info.plist). If you don’t need input channels then you should leave this disabled.

Thanks for the feedback, but the Projucer shows Default (Disabled) for Microphone Access. Should I set it specifically to Disabled?

I’m not seeing this, here’s what I’ve done to test:

  • Create a new “Audio Application” project in the Projucer and add an iOS exporter
  • Modify line 38 of MainComponent.h to not require any input channels - setAudioChannels (0, 2);
  • Build and run the app on an iOS device (I’ve tested on an iPad Air 2 running iOS 12)

It doesn’t ask for mic permissions.
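For reference, the constructor in the generated MainComponent ends up looking something like this (a sketch of the template code with the stereo-out change applied, not the full generated file):

    MainComponent::MainComponent()
    {
        setSize (800, 600);

        // No input channels, two output channels - the app never
        // needs to touch the microphone.
        setAudioChannels (0, 2);
    }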

Thanks for the feedback. I was able to make my app work on the device, but the problem remains on the Simulator, even with a brand new app generated with the 5.4.3 Projucer.

I set setAudioChannels (0, 2) as requested (even though it’s on line 28 of MainComponent.cpp), but the mic permissions message appears every time a new build is run on the Simulator (iPhone XR).

The message gets triggered at the moment we call AudioUnitInitialize (audioUnit) in juce_ios_Audio.cpp. Here is the code:

    {
        AudioStreamBasicDescription format;
        zerostruct (format);
        format.mSampleRate = sampleRate;
        format.mFormatID = kAudioFormatLinearPCM;
        format.mFormatFlags = kAudioFormatFlagIsFloat | kAudioFormatFlagIsNonInterleaved | kAudioFormatFlagsNativeEndian | kLinearPCMFormatFlagIsPacked;
        format.mBitsPerChannel = 8 * sizeof (float);
        format.mFramesPerPacket = 1;

        // Uses the max of the input and output hardware channel counts.
        format.mChannelsPerFrame = (UInt32) jmax (channelData.inputs->numHardwareChannels, channelData.outputs->numHardwareChannels);
        format.mBytesPerFrame = format.mBytesPerPacket = sizeof (float);

        AudioUnitSetProperty (audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input,  0, &format, sizeof (format));
        AudioUnitSetProperty (audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 1, &format, sizeof (format));
    }

    AudioUnitInitialize (audioUnit);

I wonder if it has anything to do with format.mChannelsPerFrame being set to the max number of channels, even for input.

No, that code isn’t causing the permissions dialog to show. The dialog is shown when you have a plist setting requesting microphone access, which will only be added to the project if you have enabled the mic access setting in the Projucer.

I’m still not seeing the permissions request when running in the simulator. Are you sure it’s not a macOS permissions request?

Are you sure it’s not a macOS permissions request?

Could very well be, actually. It appears as a separate window outside the simulated device.

Apparently it was a Simulator bug. It has been fixed in Xcode 10.2:

You’re now only prompted once to authorize microphone access to all simulator devices. (45715977)

Thanks for the help.

Hit the same issue - it only occurred when I upgraded to 5.4.3, and it took me a bit of time to track down and figure out what had changed that broke the runtime behaviour of the app. The commit linked above in the reply to @cpr fixed the issue straight away, and logically this seems like the appropriate solution. Self-management of audio resources is a sledgehammer solution.

I’m not sure what your point is here. Are you saying that this commit has fixed the issue for you?

Yes - this did fix the issue.

Cheers

Don

So if I’m reading this correctly, “fixed in 5.4.4” is the ultimate resolution of this bug, yes? (The linked commit in question, the one which multiple users have reported success with, was included in the 5.4.4 release tag AFAICT.)

I’m also running into the mismatched sample rates on a single headset with in/out (Plantronics speaker/mic) on Windows. This is like week 2 for me and I’m mucking about with the demo code, so pardon the noob understanding.

What is going on with virtually every other commercial app I use this headset with? Are they accepting and managing the different sample rates, or are they all using DirectSound to abstract it away?