Audio glitches with simple sine wave on Android Lenovo P2 but not on sim


#1

I’m trying to get audio working smoothly on Android with JUCE, but I’m having a very bad time. My test, which is simply a single sine wave played on the left channel with no GUI elements, runs smoothly on the simulator but glitches on my Lenovo P2. The sound produced is exactly like a timed print statement holding up the audio callback, yet there are no prints in my application. If I press the home button, switch to the overview screen, and repeat this process, there are no glitches at all. Profiling shows 0% CPU on both my Lenovo and the sim.

Any thoughts? Any documentation or tutorials would also be welcome.

I am on Android 7.0 (SDK 24).


#2

Running a Timer at a very short interval to call repaint() mostly removes the glitches. I am at a loss as to why.
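
For reference, here is a minimal sketch of that workaround, assuming a standard juce::Component-based view; the class name and the 10 ms interval are only illustrative:

#include <JuceHeader.h>

// Repainting on a short timer keeps the message thread busy, which seems to
// stop the OS from dropping the app's priority (see the replies below).
struct RepaintKicker : public juce::Component,
                       private juce::Timer
{
    RepaintKicker()            { startTimer (10); }   // interval in milliseconds
    ~RepaintKicker() override  { stopTimer(); }

private:
    void timerCallback() override { repaint(); }
};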


#3

I am now reading up on possible thread priority issues on Android, but I still have no idea how to solve this. I tried the following, but it didn’t seem to help, and I found that it was already partially implemented:

...
getWindow().addFlags (WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON);
// If no user input is received after about 3 seconds, the OS will lower the
// task's priority, so this timer forces it to be kept active.
...

This 3-second mark is approximately when I start perceiving the artifacts.


#4

This is a known problem on Android. Many Android devices apply aggressive CPU frequency scaling when there is little load. When debugging audio glitches on Android, always also look at the CPU frequency.
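
For a quick spot check, the per-core clock can usually be read from the standard Linux cpufreq sysfs nodes; this is only a sketch, assuming those nodes are readable on your device (they are on many, but access can be restricted on some builds), and the core count of 8 is just an illustrative upper bound:

#include <fstream>
#include <iostream>
#include <string>

int main()
{
    // Print the current frequency (in kHz) reported by each core, if readable.
    for (int cpu = 0; cpu < 8; ++cpu)
    {
        std::ifstream f ("/sys/devices/system/cpu/cpu" + std::to_string (cpu)
                          + "/cpufreq/scaling_cur_freq");
        std::string kHz;
        if (f >> kHz)
            std::cout << "cpu" << cpu << ": " << kHz << " kHz\n";
    }
}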

For ROLI’s NOISE app, we ensure that the audio callback always uses at least 80% of the available callback time by spinning on NOPs. In fact, this is even the “semi-official” recommendation for audio apps on Android.

For more tips & tricks, you should really watch all of @Don_Turner’s excellent Google I/O talk.


#5

This article might help: https://medium.com/@donturner/debugging-audio-glitches-on-android-ed10782f9c64

If not, feel free to post back here and I’ll see if I can help further.


#6

Thanks @fabian. That makes a lot of sense; could you elaborate on how you implemented this stabilizing load?
I’ve tried something ugly like spamming this:

// Busy-wait by executing a fixed number of NOP instructions,
// hoping to keep the core from clocking down between callbacks.
const int32_t numSpins = someOtherNumber;
int32_t i = 0;
do { asm volatile ("nop\n\t"); } while (++i < numSpins);

I have a hunch that this is not what you mean. If it is, is there a smart way to acquire someOtherNumber?

Most of the application can do its buffering on another thread, so I no longer have a problem. Thanks so much @Don_Turner for linking your article. I have been using systrace heavily; I should have invested more time checking out the Android toolchain.


#7

@Don_Turner does OBOE include some code to simplify this? Wouldn’t it be easier if it were already called before your audio callback when needed?

@jammes I haven’t tried Android for a while because it still feels immature compared to iOS (NOISE on my Nexus 5X performs very poorly).
But the way suggested back then was to simulate touches on the screen to avoid device throttling.


#8

Take a look at the following example: https://github.com/googlesamples/android-audio-high-performance/tree/master/SimpleSynth

It includes a “load stabilizer” class: https://github.com/googlesamples/android-audio-high-performance/blob/master/SimpleSynth/app/src/main/cpp/load_stabilizer.cc

It works as follows:

When you construct the load stabilizer you must specify:

1. the callback period (i.e. the optimal delta between successive callbacks)
2. the object which will do your actual audio rendering

You then call the load stabilizer’s render method inside your audio callback. The stabilizer will render your audio data, then keep spinning the CPU until a certain percentage of the callback period has elapsed (specified by PERCENTAGE_OF_CALLBACK_TO_USE), compensating if the callback started late. Personally I’ve found around 80% works best, although YMMV.
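
In outline it looks something like this; a minimal sketch of the idea rather than the SimpleSynth class itself, where the Renderer interface, the clock choice and the hard-coded 0.8 are illustrative and the late-start compensation is left out for brevity:

#include <chrono>
#include <cstdint>

struct Renderer
{
    virtual ~Renderer() = default;
    virtual void render (float* audioData, int32_t numFrames) = 0;
};

class LoadStabilizer
{
public:
    LoadStabilizer (Renderer& rendererToUse, int64_t callbackPeriodNanos)
        : renderer (rendererToUse), periodNanos (callbackPeriodNanos) {}

    // Call this from the audio callback instead of calling the renderer directly.
    void render (float* audioData, int32_t numFrames)
    {
        const auto callbackStart = std::chrono::steady_clock::now();

        renderer.render (audioData, numFrames);          // do the real work first

        // Spin on NOPs until ~80% of the callback period has elapsed, so the
        // governor never sees the core go idle between callbacks.
        const auto target = callbackStart
            + std::chrono::nanoseconds ((int64_t) (periodNanos * 0.8));

        while (std::chrono::steady_clock::now() < target)
            asm volatile ("nop");
    }

private:
    Renderer& renderer;
    int64_t   periodNanos;
};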

Note that this is not really the “recommended” approach, as it works against the CPU governor. However, the CPU governor is not designed to support real-time use cases, so in some cases this is the only way you can achieve good latency with underrun protection.

Also worth mentioning that AAudio (on API 26+) has a better timing model and therefore has significantly less jitter on the audio callback than OpenSL ES.
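
For completeness, a minimal sketch of opening a low-latency AAudio stream with a data callback (error handling omitted; the stereo float format and the callback body are only illustrative):

#include <aaudio/AAudio.h>
#include <cstring>

static aaudio_data_callback_result_t audioCallback (AAudioStream*, void*,
                                                    void* audioData, int32_t numFrames)
{
    // Render your audio into audioData here; silence for the sake of the example.
    std::memset (audioData, 0, sizeof (float) * 2 * (size_t) numFrames);
    return AAUDIO_CALLBACK_RESULT_CONTINUE;
}

AAudioStream* openLowLatencyStream()
{
    AAudioStreamBuilder* builder = nullptr;
    AAudio_createStreamBuilder (&builder);

    AAudioStreamBuilder_setFormat          (builder, AAUDIO_FORMAT_PCM_FLOAT);
    AAudioStreamBuilder_setChannelCount    (builder, 2);
    AAudioStreamBuilder_setPerformanceMode (builder, AAUDIO_PERFORMANCE_MODE_LOW_LATENCY);
    AAudioStreamBuilder_setDataCallback    (builder, audioCallback, nullptr);

    AAudioStream* stream = nullptr;
    AAudioStreamBuilder_openStream (builder, &stream);
    AAudioStreamBuilder_delete (builder);

    AAudioStream_requestStart (stream);
    return stream;
}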

does OBOE include some code to simplify this?

Not yet, but if you feel that this would be useful please file an issue: https://github.com/google/oboe/issues