I have been struggling to change the buffer size on any Android device. My steps are:
1. I initialize the device.
2. I get the current AudioDeviceSetup.
3. I set the buffer size to one of the available device buffer sizes.
4. I set the setup on the AudioDeviceManager.
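For reference, this is roughly what I'm doing (a minimal sketch in JUCE; the channel counts and the choice of buffer size are placeholders, and error handling is trimmed):

```cpp
juce::AudioDeviceManager deviceManager;
deviceManager.initialiseWithDefaultDevices (0, 2);             // 1. initialise (output only here)

juce::AudioDeviceManager::AudioDeviceSetup setup;
deviceManager.getAudioDeviceSetup (setup);                     // 2. get the current setup

if (auto* device = deviceManager.getCurrentAudioDevice())
{
    auto sizes = device->getAvailableBufferSizes();            // 3. pick one of the advertised sizes
    if (! sizes.isEmpty())
        setup.bufferSize = sizes.getFirst();
}

auto error = deviceManager.setAudioDeviceSetup (setup, true);  // 4. apply the new setup
// 'error' is an empty String on success
```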
On a Pixel, where the default buffer size is 94, this has no effect. Not a huge deal… the default of 94 is fine for my purposes anyway.
On a Nexus, where the default buffer size is a mind-boggling 1920, the app crashes on the call to setAudioDeviceSetup (in (*Base::owner.engine)->CreateAudioPlayer(…), line 410 of juce_android_OpenSL.cpp).
I have a Nexus 7 running 6.0.1 which is giving me a default buffer size of 1920 frames.
(I am actually considering not supporting it, because of this: Support USB Audio devices on Android)
I don’t know of any device where you can change the buffer size of the underlying audio device (we generally refer to this as a “burst” - a discrete chunk of audio frames which are read/written by the audio device in a single operation).
You can, however, change the software buffer size - the rule of thumb is to use 2 bursts. On API 26+ this can be done dynamically on an open audio stream; on <26 it can only be set before the audio stream is opened.
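As an illustration, here's a minimal sketch using Google's Oboe library directly (not JUCE's wrapper; the low-latency settings shown are just the usual recommendations):

```cpp
#include <oboe/Oboe.h>

// Sketch: open a low-latency output stream and size its software buffer
// to two bursts, per the rule of thumb above.
void openStreamWithDoubleBurstBuffer()
{
    std::shared_ptr<oboe::AudioStream> stream;

    oboe::AudioStreamBuilder builder;
    builder.setDirection (oboe::Direction::Output)
           ->setPerformanceMode (oboe::PerformanceMode::LowLatency)
           ->setSharingMode (oboe::SharingMode::Exclusive);

    if (builder.openStream (stream) == oboe::Result::OK)
    {
        auto burst = stream->getFramesPerBurst();     // the device's fixed burst size
        stream->setBufferSizeInFrames (burst * 2);    // software buffer = 2 bursts
        // On API 26+ the buffer size can also be adjusted while the stream is running.
    }
}
```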
@jemur Nexus 7 is 6 years old! That’s a lifetime in phone terms! Yes, I wouldn’t support it if latency is critical to the user experience.
BTW, in case it's useful, ROLI published the buffer sizes of many Android phones in their MAQ index: https://juce.com/maq
I'm not sure of the exact calculation used. For example, the Pixel has a burst size of 192 frames, which equates to 4ms @ 48kHz. The MAQ states 16ms latency, which would indicate that their calculation (for the Pixel at least) is 4 × burst size.
If that formula is consistent throughout the list, then it could be a useful list of burst sizes for popular Android devices.
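Spelling out that guess (and it's only a guess about how the MAQ figure is derived):

```cpp
// Hypothesised MAQ formula: reported latency ≈ 4 * burst size.
// For the Pixel: 4 * 192 frames / 48000 Hz = 0.016 s = 16 ms, which matches the MAQ.
constexpr int    burstSizeFrames = 192;
constexpr double sampleRateHz    = 48000.0;
constexpr double latencyMs       = 4.0 * burstSizeFrames * 1000.0 / sampleRateHz; // 16.0
```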
@jules Can you share any more info about the latency calculation formula?
I am using the JUCE latency estimates below, and at least on the Pixel 2 they seem OK.
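Roughly this kind of query (a sketch, assuming a standard AudioDeviceManager named deviceManager):

```cpp
// Query the latency figures the current device reports to JUCE.
// These are the device's own estimates, not measured round-trip values.
if (auto* device = deviceManager.getCurrentAudioDevice())
{
    auto sampleRate  = device->getCurrentSampleRate();
    auto outputLatMs = 1000.0 * device->getOutputLatencyInSamples() / sampleRate;
    auto inputLatMs  = 1000.0 * device->getInputLatencyInSamples()  / sampleRate;

    DBG ("Estimated output latency: " << outputLatMs << " ms, input: " << inputLatMs << " ms");
}
```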
Definitely get that the Nexus is old… I literally have two devices to test on… the Pixel 2 and the Nexus 7… it was the best of times, it was the worst of times…
But it feels like there is a delay of more than half a second somewhere. It is not comfortable to play notes at all, unlike the instant response you get on an iPhone/iPad.
Does it make sense to keep bothering with this? Perhaps the Android developers can say something? Or is Android still dead for audio? I also see that when I minimise the app, the sound wheezes wildly, and secondly, it is almost impossible to make it work at a sample rate of 48,000 Hz.
Faced another issue: I noticed that on Android, prepareToPlay gives me 96 samples at 44100 Hz, which is fine; that's what I set.
But in getNextAudioBlock, bufferToFill.buffer->getNumSamples() gives me 138, or an even larger size like 882. How is this possible? My app crashes because my own buffers are smaller than the incoming data. Or have I missed something and this is mixed with the input channels? BTW, startSample is always 0 at the moment.
I think on some mobile hardware the expected maximum block size is sometimes reported incorrectly by the system. It’s probably a good idea to program defensively, so that if the actual block size is larger than the expected block size, it is split up into smaller chunks which can be processed safely.
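A minimal sketch of that defensive splitting (here maxBlockSize is whatever prepareToPlay reported, and processChunk is a placeholder for the actual per-block processing):

```cpp
void getNextAudioBlock (const juce::AudioSourceChannelInfo& bufferToFill) override
{
    // If the callback delivers more samples than prepareToPlay promised,
    // process the block in safe-sized pieces instead of overrunning our buffers.
    int offset = 0;

    while (offset < bufferToFill.numSamples)
    {
        const int numThisTime = juce::jmin (maxBlockSize,
                                            bufferToFill.numSamples - offset);

        juce::AudioSourceChannelInfo chunk (bufferToFill.buffer,
                                            bufferToFill.startSample + offset,
                                            numThisTime);
        processChunk (chunk);   // placeholder for your real processing
        offset += numThisTime;
    }
}
```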
But I seem to be running into issues with Android more and more. I use a Pixel 4, and the app seems to run very slowly; maybe it's the Snapdragon that's slow, I'm not sure. The wheezing starts almost immediately, while the old iPhone 10 sits at only 45% CPU at maximum load with 4096 voices. On the Pixel I can play only 128-256 voices. I use the same optimizations: -Ofast, -ffast-math, LTO, etc.
- At 48000 Hz with a buffer of 96: no sound at all and Android hangs (only a hard reboot helps).
- At 48000 Hz with a buffer of 1920: I can play only 256 voices.
- At 44100 Hz with a buffer of 1920: I can play 300-512 voices.
I tried disabling Oboe and it seems to sound a little better, but there are still a lot of artifacts.
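If it helps anyone else, I believe the relevant switch is this juce_audio_devices module flag (check the exact name in your JUCE version):

```cpp
// In the Projucer, under the juce_audio_devices module options,
// or as a preprocessor definition (assuming the standard flag name):
#define JUCE_USE_ANDROID_OBOE 0   // fall back to the OpenSL ES backend
```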
To everyone: how are things going with CPU load on Android for you? Is a drop in performance of about 16x compared to Apple's A11/A12X to be expected?
P.S. By 4096 voices I mean 16 voices × 4 oscillators × 16 unison × 4× oversampling = 4096 (and I mostly use FloatVectorOperations everywhere).