iOS audio latency


#1

Hi all,

Just built my standalone plugin and ran it in the iPad Simulator. In the Simulator, I cannot reduce the buffer size below 1024 samples. I have not tested this on the hardware yet (because I haven’t thrown the loot at Apple yet). My question is: how low can we go on iOS, in terms of samples and round-trip latency in ms?

Ta muchly in advance


#2

The block size and sample rate in the simulator seem heavily dependent on the host Mac’s audio settings. I seem to remember being able to do 512 samples on the device, although obviously with higher CPU usage. You can also get oddities like the simulator forcing strange buffer sizes (e.g., 471 samples where there is a 48kHz vs 44.1kHz sample-rate mismatch).


#3

Thanks for the response.

I was hoping that I could go substantially lower than that :? . I have been looking around and have seen that some folks achieved very low latency a couple of years back using RemoteIO: http://www.patokeefe.com/archives/230 . Can anyone give me an indication of the lowest possible buffer size on iOS hardware using the Juce audio wrappers to pump a signal from the mic straight through to the output? If a buffer size in the region of 96 samples at 48 kHz is not achievable, then I need to have a serious rethink before investing any more time in my pet project.
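For anyone following along, the arithmetic behind those numbers is just buffer size over sample rate, doubled for the round trip (input buffer plus output buffer; real hardware adds some extra on top). A quick sketch:

```cpp
// Round-trip latency estimate: one buffer of input capture plus one buffer
// of output playback. Driver/hardware overhead is not included.
double roundTripMs (int bufferSamples, double sampleRate)
{
    return 2.0 * bufferSamples * 1000.0 / sampleRate;
}
```

By this estimate, 96 samples at 48 kHz is a 4 ms round trip, while the 1024-sample buffer from post #1 at 44.1 kHz is roughly 46 ms.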

Thanks again.


#4

Juce also uses RemoteIO.

Seems a bit far-fetched though - I couldn’t get anything as low as that out of my iPhone 3GS. Maybe 128 samples, as long as you don’t actually do any significant processing.


#5

Jules - is there any standard “iOS 3 vs iOS4” definition in Juce? I looked, but nothing stood out, and this could help with the audio latency issue discussed in this thread.

The reason I ask is that in iOS 4 you can use the vDSP framework, and I’ve found that on Intel hardware it can be 30-200% faster than even hand-coded assembly. I’ve got benchmarks for general matrix multiply comparing vecLib with http://eigen.tuxfamily.org/, and vDSP with hand-coded NEON assembler from FFmpeg/LibAV. Yes, believe it or not, vDSP is almost twice as fast as the hand-coded assembler… for the FFTs!

Where this may come up is in the audio S1.15 integer coding to and from floating point. Since this function is called so often, it would be really nice to use vDSP_vflt16 (which converts an array of signed 16-bit integers to single-precision floating-point values) and vDSP_vfix16/vDSP_vfixr16, rather than the hand-coded multiplies by 32768 that are currently in the iOS source.

From my (albeit limited) experience, gcc generates horrible ARM code, so using the vDSP libraries could save precious microseconds, especially in a tight loop like the audio callbacks.
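To make concrete what’s being discussed: this is not the actual JUCE code, just a sketch of the hand-coded S1.15 conversion loops in question, with comments marking where the vDSP calls could slot in on iOS 4+.

```cpp
#include <cstdint>

// S1.15 -> float: the scalar loop that vDSP_vflt16 (plus a scale by 1/32768)
// could replace on iOS 4+.
void int16ToFloat (const int16_t* src, float* dst, int n)
{
    const float scale = 1.0f / 32768.0f;
    for (int i = 0; i < n; ++i)
        dst[i] = src[i] * scale;
}

// float -> S1.15 with clipping: the loop vDSP_vfix16 (truncating) or
// vDSP_vfixr16 (rounding) could replace, after scaling by 32768.
void floatToInt16 (const float* src, int16_t* dst, int n)
{
    for (int i = 0; i < n; ++i)
    {
        float v = src[i] * 32768.0f;
        if (v >  32767.0f) v =  32767.0f;
        if (v < -32768.0f) v = -32768.0f;
        dst[i] = (int16_t) v;
    }
}
```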

Yes - I’m volunteering to develop/test patches, but I need to know about iOS3 vs iOS4 in Juce… :smiley:


#6

I haven’t got a JUCE_xx macro for that, but you can always just use the standard OSX definitions, which will be available.

A project that I’ve always thought would be cool (but never had time to do, of course!) would be a set of cross-platform DSP functions that’d directly call things like vDSP where available, but with fallback versions when it isn’t.
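A minimal sketch of what that dispatch could look like - the `VectorOps` namespace and the `USE_ACCELERATE` switch are made up for illustration, not anything in JUCE:

```cpp
#if defined (__APPLE__) && defined (USE_ACCELERATE)
 #include <Accelerate/Accelerate.h>  // only pulled in when opted in
#endif

// Hypothetical cross-platform wrapper: call the vendor library where it's
// available, with a plain C++ fallback everywhere else.
namespace VectorOps
{
    inline void add (const float* a, const float* b, float* out, int n)
    {
       #if defined (__APPLE__) && defined (USE_ACCELERATE)
        vDSP_vadd (a, 1, b, 1, out, 1, (vDSP_Length) n);  // Accelerate path
       #else
        for (int i = 0; i < n; ++i)                       // portable fallback
            out[i] = a[i] + b[i];
       #endif
    }
}
```

The fallback also makes the library testable on platforms without the vendor code, which helps when verifying the fast path against it.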


#7

Thanks for the responses folks. I guess I’ll just jump in with both feet, register with iOS dev, and see what I can squeeze out of the hardware. :idea:


#8

[quote=“jules”]I haven’t got a JUCE_xx macro for that, but you can always just use the standard OSX definitions, which will be available.

A project that I’ve always thought would be cool (but never had time to do, of course!) would be a set of cross-platform DSP functions that’d directly call things like vDSP where available, but with fallback versions when it isn’t.[/quote]

I do something similar for my DSP code. I have my own vector library, which calls vDSP on OSX (with some C code for a few functions that are broken in the PPC vDSP), SSE/SSE2 intrinsics for VisualStudio, and straight C for writing and testing the code. Right now, I tend to work with signal vectors that are SIMD in time, as opposed to parallel operators - i.e. a block of N samples for a single filter, instead of N filters in parallel.

Sean Costello


#9

I have one question related to the latency configuration on iOS.

I’m trying to change it dynamically on iOS.

I request the current bufferSize with AudioDeviceManager::getAudioDeviceSetup and try to change it with AudioDeviceManager::setAudioDeviceSetup, but it doesn’t seem to have any effect. In the debugger, the setAudioDeviceSetup method triggers the “changeMessage” method, which seems consistent with the bufferSize change, but I can’t get any further.

For the moment, I set my bufferSize by changing the value returned by getDefaultBufferSize in the iOS-specific Juce code.
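For reference, the read-modify-write pattern described above looks like this. The structs here are mock stand-ins so the snippet is self-contained; the real calls are JUCE’s AudioDeviceManager::getAudioDeviceSetup and setAudioDeviceSetup, which returns an empty error string on success.

```cpp
#include <string>

struct AudioDeviceSetup            // mock: the real JUCE struct has more fields
{
    int bufferSize = 1024;
    double sampleRate = 44100.0;
};

struct AudioDeviceManager          // mock stand-in for juce::AudioDeviceManager
{
    AudioDeviceSetup current;

    void getAudioDeviceSetup (AudioDeviceSetup& result) const { result = current; }

    std::string setAudioDeviceSetup (const AudioDeviceSetup& s, bool /*treatAsChosen*/)
    {
        current = s;               // the real iOS backend ignores sizes it doesn't offer
        return {};                 // empty string == success, as in JUCE
    }
};

// The pattern from the post: fetch the current setup, tweak only bufferSize,
// push it back with treatAsChosenDevice = true.
std::string requestBufferSize (AudioDeviceManager& manager, int newSize)
{
    AudioDeviceSetup setup;
    manager.getAudioDeviceSetup (setup);
    setup.bufferSize = newSize;
    return manager.setAudioDeviceSetup (setup, true);
}
```

The pattern itself is correct; as post #11 works out, the call silently has no effect when the requested size isn’t in the device’s advertised list.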

Any clue ?


#10

Doesn’t the juce demo correctly let you set the latency?


#11

I don’t use the Juce GUI for my app; however, I searched a bit further and it seems the only available buffer size on iOS is 1024.
I changed the list of possible values in the method “int getBufferSizeSamples (int index)” and also the number of values in “getNumBufferSizesAvailable()” in the file juce_ios_Audio.cpp.
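In other words, an edit along these lines - the exact sizes below are hypothetical, not what juce_ios_Audio.cpp ships with:

```cpp
// Sketch of the kind of change described: expose a list of power-of-two
// buffer sizes instead of the single hard-coded 1024. Values are made up.
struct IOSAudioDeviceSketch
{
    int getNumBufferSizesAvailable() const     { return 5; }

    // index 0..4 -> 128, 256, 512, 1024, 2048 samples
    int getBufferSizeSamples (int index) const { return 128 << index; }
};
```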

I can now set different latency values with the method “setAudioDeviceSetup”. However, the app crashes when I increase the buffer value. I’ll dig into it further later.