I’m having trouble playing an audio file, because my AudioDeviceManager doesn’t pick up the output device. I’m doing it like this:
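A minimal sketch of that kind of setup, assuming a recent JUCE AudioDeviceManager::initialise signature (the exact argument list varies between JUCE versions, and this is a guess at the call being described, not the original poster’s code):

```cpp
// Sketch (not the original post's code): ask for 0 input channels and
// 2 output channels, falling back to a default device on failure.
juce::AudioDeviceManager deviceManager;

const juce::String error = deviceManager.initialise (0,        // numInputChannelsNeeded
                                                     2,        // numOutputChannelsNeeded
                                                     nullptr,  // no saved XML state
                                                     true);    // selectDefaultDeviceOnFailure

if (error.isNotEmpty())
    DBG (error);  // initialise() returns an error message string on failure
```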
but I always get the mic input (I know this by calling “getCurrentAudioDeviceName()”)… I specify 0 input channels and 2 output channels, so how is that possible? Thanks for your help.
Nope, it gets the mic input. I just want to know if there’s a way to get the default output automatically, without asking the user to choose a device from a device selector or something. Thanks
Hmm. It just uses the first audio device in the list as the default; normally on Macs that’s the built-in duplex audio device. It sounds like on your machine the input and output are split into two separate devices?
The nasty bit here is that you don’t know whether a device has any output channels until you’ve opened it, so there’s no neat bit of logic that could be added to the AudioDeviceManager… What I’ll look into is adding something to get the default device from CoreAudio itself…
I am using the JUCE AudioDemo and don’t want to allow the user to set the buffer size. The user should only be able to change everything else (such as the audio device) when the AudioDeviceSelectorComponent dialog appears.
How can I do this?
If I’m supposed to do it with setAudioDevice():
How do I let the user choose audio devices? (Restricting the buffer size is okay.)
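For what it’s worth, later JUCE versions don’t offer a flag that removes only the buffer-size box from AudioDeviceSelectorComponent; the nearest built-in option is the hideAdvancedOptionsWithButton constructor argument, which tucks the sample-rate and buffer-size controls behind an “advanced settings” button. A sketch, assuming the modern constructor (which postdates this thread):

```cpp
// Sketch, assuming a recent JUCE AudioDeviceSelectorComponent constructor;
// `deviceManager` is an existing juce::AudioDeviceManager.
auto* selector = new juce::AudioDeviceSelectorComponent (
        deviceManager,
        0, 0,    // min/max input channels
        2, 2,    // min/max output channels
        false,   // showMidiInputOptions
        false,   // showMidiOutputSelector
        true,    // showChannelsAsStereoPairs
        true);   // hideAdvancedOptionsWithButton: keeps sample rate and
                 // buffer size behind an "advanced settings" button
```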
I rearranged some stuff in the audio device manager class. You can still change whatever you need, but it’s done with a structure now.
Can I ask why you’d possibly want to stop people changing the buffer size? If they have a device that doesn’t work correctly at the default size, they’d be screwed!
The processing is done via a method of another SDK, which needs a (fixed) outputBufferSize and, depending on this, demands a fixed inputBufferSize. I use my own AudioSource class (a modified AudioFormatReaderSource).
In getNextAudioBlock I read the needed number of samples via AudioFormatReader and copy them from the AudioSampleBuffer into my own buffer, ppInputData (an array of float pointers).
These are then processed; the SDK returns an array of float pointers, ppOutputData, with the fixed outputBufferSize.
Then I copy the resulting samples from ppOutputData into info.buffer.
At this point the device callback reads using the device’s buffer-size setting, and if the user changes that, my processing method can’t handle it properly.
This is a good reason to restrict it to e.g. 512 samples, isn’t it?
Thanks for help!
Hope you’ve not wasted too much time because of that assumption.
When any audio settings change, your audio device will be stopped and restarted, and so will any audio sources that it’s playing. So in your prepareToPlay methods you’ll be able to reinitialise any fixed structures with the new buffer size before it starts running again. Since any audio source is required to be able to stop and start like that, the fact that it’s a user changing the sample rate is irrelevant.
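That restart behaviour means the only obligation on a source is to rebuild its fixed-size structures when prepareToPlay arrives with a new block size. A plain C++ sketch of the pattern (MySource and its members are hypothetical, not JUCE classes):

```cpp
#include <cstddef>
#include <vector>

// Hypothetical source: whatever the user picks in the settings dialog, the
// device is stopped, prepareToPlay() is called with the new block size, and
// playback resumes - so fixed structures are simply rebuilt here.
class MySource
{
public:
    void prepareToPlay (int samplesPerBlock, double sampleRate)
    {
        currentSampleRate_ = sampleRate;
        // Re-size the scratch buffer for the new block size before the
        // device starts running again.
        scratch_.assign (static_cast<std::size_t> (samplesPerBlock), 0.0f);
    }

    std::size_t scratchSize() const { return scratch_.size(); }

private:
    double currentSampleRate_ = 0.0;
    std::vector<float> scratch_;
};
```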
(Unless of course you try to do something hacky like hard-coding everything for a particular buffer size. In which case, good luck with that!)
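If the SDK genuinely can only process fixed-size blocks, an alternative to restricting the device’s buffer size is to decouple the two sizes with a small FIFO: queue whatever the device delivers, run the processor whenever a full block is available, and zero-pad the output until the first block is ready. A plain C++ sketch, with all names hypothetical:

```cpp
#include <algorithm>
#include <cstddef>
#include <deque>
#include <functional>
#include <vector>

// Hypothetical adapter: the device callback delivers chunks of any size, but
// the wrapped processor only accepts blocks of exactly `blockSize` samples.
// Incoming samples are queued; each complete block is processed, and the
// output is zero-padded until processed samples become available (i.e. up to
// `blockSize` samples of latency when the chunk sizes don't line up).
class FixedBlockAdapter
{
public:
    using BlockProcessor = std::function<void (const float* in, float* out, std::size_t n)>;

    FixedBlockAdapter (std::size_t blockSize, BlockProcessor processor)
        : blockSize_ (blockSize), processor_ (std::move (processor)) {}

    // Called from the (variable-sized) device callback.
    void process (const float* input, float* output, std::size_t numSamples)
    {
        // Queue the incoming samples.
        inQueue_.insert (inQueue_.end(), input, input + numSamples);

        // Run the fixed-size processor for every complete block we now hold.
        std::vector<float> block (blockSize_), processed (blockSize_);
        while (inQueue_.size() >= blockSize_)
        {
            std::copy (inQueue_.begin(), inQueue_.begin() + blockSize_, block.begin());
            inQueue_.erase (inQueue_.begin(), inQueue_.begin() + blockSize_);

            processor_ (block.data(), processed.data(), blockSize_);
            outQueue_.insert (outQueue_.end(), processed.begin(), processed.end());
        }

        // Emit processed samples, zero-padding while no output is ready yet.
        for (std::size_t i = 0; i < numSamples; ++i)
        {
            if (! outQueue_.empty())
            {
                output[i] = outQueue_.front();
                outQueue_.pop_front();
            }
            else
            {
                output[i] = 0.0f;
            }
        }
    }

private:
    std::size_t blockSize_;
    BlockProcessor processor_;
    std::deque<float> inQueue_, outQueue_;
};
```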