Problem with Plantronics headset with WASAPI


#1

I tested my Plantronics C610-M USB headset using the JUCE Demo Audio tab, with “Windows Audio” selected.

If I choose the headset for both input and output (the most common case), it pops the error:

“The input and output devices don’t share a common sample rate!” :cry:

Seems to work OK with “older” DirectSound selected…


#2

Yes… like all of Microsoft’s many feeble attempts at audio APIs, WASAPI is a bit crap. It doesn’t offer any syncing between input and output devices, nor does it let you open a device at a rate that it doesn’t natively support. So if you need a pair of input + output devices that don’t support a common rate, you’d currently have to open two separate AudioIODevice objects, and let them run independently.
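e.g. something along these lines - just a sketch, assuming the current juce device classes; the device names, rates and callbacks are made up:

juce::AudioDeviceManager manager;
juce::AudioIODeviceType* wasapi = nullptr;

for (auto* type : manager.getAvailableDeviceTypes())
    if (type->getTypeName() == "Windows Audio")   // the WASAPI device type
        wasapi = type;

wasapi->scanForDevices();

// One input-only device and one output-only device, each opened at a
// rate it natively supports, each with its own callback + thread:
std::unique_ptr<juce::AudioIODevice> in  (wasapi->createDevice (juce::String(), "Plantronics C610-M"));
std::unique_ptr<juce::AudioIODevice> out (wasapi->createDevice ("Speakers", juce::String()));

juce::BigInteger none, mono, stereo;
mono.setRange (0, 1, true);
stereo.setRange (0, 2, true);

in->open  (mono, none,   16000.0, 512);   // open() returns an error String on failure
out->open (none, stereo, 44100.0, 512);

in->start  (&myInputCallback);    // myInputCallback / myOutputCallback are two
out->start (&myOutputCallback);   // independent AudioIODeviceCallbacks you'd supply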

This is a PITA for me too, as Tracktion uses WASAPI, so I’ll probably have to figure something out. What I need to write is a generic AudioIODeviceCombiner which can wrap a bunch of other AudioIODevices and sync them as one big device, but that’s not a trivial thing to do.


#3

[quote=“jules”]Yes… like all of Microsoft’s many feeble attempts at audio APIs, WASAPI is a bit crap. It doesn’t offer any syncing between input and output devices, nor does it let you open a device at a rate that it doesn’t natively support. So if you need a pair of input + output devices that don’t support a common rate, you’d currently have to open two separate AudioIODevice objects, and let them run independently.

This is a PITA for me too, as Tracktion uses WASAPI, so I’ll probably have to figure something out. What I need to write is a generic AudioIODeviceCombiner which can wrap a bunch of other AudioIODevices and sync them as one big device, but that’s not a trivial thing to do.[/quote]

Could you just allow the different rates and then resample to match? In our case, audio is always 16 kHz on the wire, while capture tends to run much higher, so we resample whenever needed. On top of that, many of the algorithms we use resample internally to match their own needs, so the audio really gets resampled several times on the way through anyway - especially on the microphone side.
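For instance, something along these lines - just a rough sketch using JUCE's LagrangeInterpolator, if that's available in your build; the rates and the function name are purely illustrative:

// Streams a mono capture block down to the 16 kHz wire format. The
// interpolator keeps its filter state between calls, so successive
// blocks can be fed straight through it.
juce::LagrangeInterpolator resampler;

void resampleForWire (const float* captured, int numCaptured, float* wire)
{
    const double ratio = 48000.0 / 16000.0;           // capture rate / wire rate
    const int numOut = (int) (numCaptured / ratio);   // output samples to produce

    resampler.process (ratio, captured, wire, numOut);
}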


#4

Yes, resampling to match is exactly what I would write an “AudioIODeviceCombiner” class to do. It’d also need to handle small sync drifts, as these devices would be running on separate threads. Since it’d be no harder to write a general AudioIODeviceCombiner than a WASAPI-specific one, I’d rather do the general case, as IIRC it might also be needed for Android audio.
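To give an idea of the shape of it, the plumbing between the two device threads would look something like this - just a sketch, with the resampling and drift-correction (the genuinely hard parts) reduced to comments:

struct DeviceBridge
{
    juce::AbstractFifo fifo { 16384 };
    std::vector<float> storage = std::vector<float> (16384);

    // called from the input device's audio thread
    void push (const float* data, int numSamples)
    {
        int start1, size1, start2, size2;
        fifo.prepareToWrite (numSamples, start1, size1, start2, size2);
        std::copy (data, data + size1, storage.begin() + start1);
        std::copy (data + size1, data + size1 + size2, storage.begin() + start2);
        fifo.finishedWrite (size1 + size2);
    }

    // called from the output device's audio thread
    void pull (float* data, int numSamples)
    {
        // Drift shows up as the fifo level creeping up or down over time;
        // a real combiner would trim its resampling ratio to re-centre the
        // level, rather than dropping or repeating samples.
        int start1, size1, start2, size2;
        fifo.prepareToRead (numSamples, start1, size1, start2, size2);
        std::copy (storage.begin() + start1, storage.begin() + start1 + size1, data);
        std::copy (storage.begin() + start2, storage.begin() + start2 + size2, data + size1);
        fifo.finishedRead (size1 + size2);

        std::fill (data + size1 + size2, data + numSamples, 0.0f);   // underrun -> silence
    }
};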


#5

Hmm, there’s something here that I must just not be understanding. One of the first things that caught my eye was the general structure of this callback:

virtual void audioDeviceIOCallback (const float** inputChannelData,
                                    int numInputChannels,
                                    float** outputChannelData,
                                    int numOutputChannels,
                                    int numSamples) = 0;

Where it’s sort of implied that the mic and speakers are being run together for a common effect. Judging by other libraries, this actually seems quite unusual, as the two devices would normally run independently. At first I thought this looked interesting, since the one important case (for us) where these flows interact directly is when echo cancellation is being performed (the speakers are driven by the remote mic across the network, and their output can bleed back into the live mic on the local side).

Of course, JUCE doesn’t provide echo cancellation, so what is really going on here? What case, other than “open mic EC”, requires running input and output together like this? And why would the sample rates ever need to match? I can see why this might be “desirable”, but I’m still missing why it would ever be a “requirement” (i.e. breaks if they cannot match)?


#6

Yes, most libraries run input and output in separate threads, but doing it with one callback is exactly what you need for things like audio sequencers - bear in mind that I originally wrote these classes as part of Tracktion.

ASIO handles its i/o this way too, as it’s also designed for pro-audio.
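For instance, input monitoring in this model is just a copy inside the one callback - no cross-thread buffering or syncing needed. A trivial sketch, assuming the usual juce headers:

struct MonitorCallback  : public juce::AudioIODeviceCallback
{
    void audioDeviceIOCallback (const float** inputChannelData, int numInputChannels,
                                float** outputChannelData, int numOutputChannels,
                                int numSamples)
    {
        // The input and output blocks arrive together and share a clock,
        // so routing mic -> speakers is sample-accurate with no buffering:
        for (int ch = 0; ch < numOutputChannels; ++ch)
        {
            if (numInputChannels > 0)
                memcpy (outputChannelData[ch],
                        inputChannelData[ch % numInputChannels],
                        sizeof (float) * (size_t) numSamples);
            else
                juce::zeromem (outputChannelData[ch], sizeof (float) * (size_t) numSamples);
        }
    }

    void audioDeviceAboutToStart (juce::AudioIODevice*) {}
    void audioDeviceStopped() {}
};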


#7

I think I see - without totally understanding what that means - this is the juncture where the needs of pro-audio conflict with other use models. Feels a bit clunky, but I guess we could just run two AudioIODevice objects, one with just a mic and no speaker, and another the other way around - should still perform well…

Thanks,
Gary


#8

[quote=“Gary”]I think I see - without totally understanding what that means - this is the juncture where the needs of pro-audio conflict with other use models. Feels a bit clunky, but I guess we could just run two AudioIODevice objects, one with just a mic and no speaker, and another the other way around - should still perform well…

Thanks,
Gary[/quote]

Yeah, I just mean that in studio recording apps it’s critical to have your input and output in sync, because people are generally recording in sync with playback. So the ASIO/JUCE model makes far more sense: if you do need synced audio, it provides it natively rather than forcing you to sync separate threads artificially in your app. And if you want your i/o separate then, like you say, you can just choose which input or output channels to enable, and there’s no overhead from that.


#9

The problem of input and output devices having to have matching sample rates affects my app too. It’s for unsophisticated users, who will justifiably ask “Why can’t I use my Trust webcam microphone? I don’t have any other mic”. Do I understand this right: instead of using AudioDeviceManager to initialise devices, I should create separate AudioIODevice objects for input and output, with the help of the AudioIODeviceType and AudioDeviceManager classes to enumerate what’s available?


#10

Yes, that'd work. Or you can use multiple AudioDeviceManager objects and tell each one to only open certain devices.
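i.e. something like this - a sketch, where the device names are made up and recordCallback/playbackCallback are whatever AudioIODeviceCallbacks you provide:

juce::AudioDeviceManager inputManager, outputManager;

juce::AudioDeviceManager::AudioDeviceSetup inputSetup;
inputSetup.inputDeviceName = "Plantronics C610-M";    // illustrative name
inputManager.initialise (1, 0, nullptr, true, juce::String(), &inputSetup);

juce::AudioDeviceManager::AudioDeviceSetup outputSetup;
outputSetup.outputDeviceName = "Speakers";            // illustrative name
outputManager.initialise (0, 2, nullptr, true, juce::String(), &outputSetup);

// Each manager runs its device at whatever rate that device supports,
// and drives its own callback independently:
inputManager.addAudioCallback (&recordCallback);
outputManager.addAudioCallback (&playbackCallback);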