Hi everyone,
I’m building an Android audio app that uses JUCE as the DSP engine (custom EQ / stereo-width processing), and I’m trying to figure out the correct architecture for real-time streaming audio processing on Android.

My goal is to process streaming audio (HTTP streams / local playback) through JUCE DSP in real time, for example:
streaming radio / MP3 streams
local media playback
eventually an app-based music player

The DSP itself is already implemented in JUCE (IIR-based EQ plus additional effects).
From what I understand, the pipeline should look something like this:
streaming source (ExoPlayer or a custom decoder)
PCM audio buffers (float)
JNI bridge into C++
JUCE DSP processor (EQ / effects)
AudioTrack output

What I’m unsure about is how to properly implement the streaming → JUCE connection.
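To make the JNI step concrete, this is roughly the bridge I’m planning on the Kotlin side. Everything here (NativeDsp, the library name, the signatures) is a placeholder I made up, not an existing API:

```kotlin
// Hypothetical JNI bridge into the JUCE engine; all names are my own
// placeholders. The external functions would be implemented in C++ and
// forward to the JUCE processor chain.
object NativeDsp {
    init {
        System.loadLibrary("jucedsp") // assumed native library name
    }

    // Set up the JUCE processors with the stream's format before playback.
    external fun prepare(sampleRate: Int, channelCount: Int, maxBlockSize: Int)

    // Run one block of interleaved float PCM through the EQ / effects, in place.
    external fun processBlock(buffer: FloatArray, numFrames: Int)

    // Tear down native resources when playback stops.
    external fun release()
}
```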
Specifically:
What is the best way to extract PCM audio from Android streaming playback in real time? An ExoPlayer AudioProcessor (see the first sketch after this list of questions)? AudioRecord loopback? A custom decoder pipeline?
What is the correct way to feed buffers into JUCE without causing:
audio dropouts
thread-safety issues
sample-rate mismatches
Should JUCE:
handle only DSP (recommended?)
or also handle decoding/streaming directly on Android?
Is AudioTrack the correct final output stage when using JUCE on Android? (See the second sketch below.)
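For the first question, the ExoPlayer (Media3) AudioProcessor route is the one I’ve sketched so far. This is only a sketch under assumptions: it assumes the sink is configured to hand processors float PCM (otherwise queueInput would need a 16-bit → float conversion), and it reuses the hypothetical NativeDsp bridge from above; JuceDspAudioProcessor is my own name. What I like about it is that queueInput runs synchronously on ExoPlayer’s playback thread, which seems to sidestep the FIFO / thread-safety question entirely:

```kotlin
import androidx.media3.common.C
import androidx.media3.common.audio.AudioProcessor
import androidx.media3.common.audio.BaseAudioProcessor
import java.nio.ByteBuffer

// Sketch: tap decoded PCM inside ExoPlayer's audio chain and run it
// through the JUCE DSP synchronously, so no extra FIFO or thread is needed.
class JuceDspAudioProcessor : BaseAudioProcessor() {

    override fun onConfigure(
        inputAudioFormat: AudioProcessor.AudioFormat
    ): AudioProcessor.AudioFormat {
        // Assumes the chain delivers float PCM; 16-bit input would need
        // a conversion step here instead of an exception.
        if (inputAudioFormat.encoding != C.ENCODING_PCM_FLOAT) {
            throw AudioProcessor.UnhandledAudioFormatException(inputAudioFormat)
        }
        // Give JUCE the stream format before any audio flows
        // (4096 is an arbitrary max block size).
        NativeDsp.prepare(inputAudioFormat.sampleRate, inputAudioFormat.channelCount, 4096)
        return inputAudioFormat // same format in and out
    }

    override fun queueInput(inputBuffer: ByteBuffer) {
        val bytes = inputBuffer.remaining()
        if (bytes == 0) return

        // Copy the interleaved float samples out of ExoPlayer's buffer.
        val samples = FloatArray(bytes / 4)
        inputBuffer.asFloatBuffer().get(samples)
        inputBuffer.position(inputBuffer.limit()) // mark input as consumed

        // Run the block through the JUCE EQ / effects via JNI, in place.
        NativeDsp.processBlock(samples, samples.size / inputAudioFormat.channelCount)

        // Hand the processed audio back to the sink.
        val out = replaceOutputBuffer(bytes)
        for (s in samples) out.putFloat(s)
        out.flip()
    }
}
```

As far as I can tell, this would be registered by building a DefaultAudioSink with setAudioProcessors(...) inside a custom RenderersFactory (with float output enabled), but that wiring is exactly the part I’d like confirmed.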
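And for the AudioTrack question: if the recommendation is to bypass ExoPlayer’s sink and run my own decode → JUCE → output loop instead, this is the kind of output stage I’d expect to write. The buffer sizing and performance mode are guesses on my part:

```kotlin
import android.media.AudioAttributes
import android.media.AudioFormat
import android.media.AudioTrack

// Sketch of a manual float-PCM output stage (only relevant if I don't
// let ExoPlayer's own sink handle playback).
fun buildFloatAudioTrack(sampleRate: Int): AudioTrack {
    val minBytes = AudioTrack.getMinBufferSize(
        sampleRate,
        AudioFormat.CHANNEL_OUT_STEREO,
        AudioFormat.ENCODING_PCM_FLOAT
    )
    return AudioTrack.Builder()
        .setAudioAttributes(
            AudioAttributes.Builder()
                .setUsage(AudioAttributes.USAGE_MEDIA)
                .setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
                .build()
        )
        .setAudioFormat(
            AudioFormat.Builder()
                .setEncoding(AudioFormat.ENCODING_PCM_FLOAT)
                .setSampleRate(sampleRate)
                .setChannelMask(AudioFormat.CHANNEL_OUT_STEREO)
                .build()
        )
        .setBufferSizeInBytes(minBytes * 2) // headroom against underruns (a guess)
        .setTransferMode(AudioTrack.MODE_STREAM)
        .setPerformanceMode(AudioTrack.PERFORMANCE_MODE_LOW_LATENCY) // API 26+
        .build()
}

// Rough usage on a dedicated playback thread:
//   track.play()
//   while (moreAudio) {
//       NativeDsp.processBlock(block, frames)  // JUCE DSP via JNI
//       track.write(block, 0, block.size, AudioTrack.WRITE_BLOCKING)
//   }
```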
For context, my current setup:
JUCE used only for DSP (no JUCE UI or audio-device handling yet)
Android app in Kotlin (Compose)
JNI bridge planned between Kotlin ↔ C++
Target: real-time, low-latency audio processing
What I’m trying to achieve
A stable architecture for:
real-time streaming audio processing
low latency DSP (EQ + effects)
Android compatibility across devices
JUCE as core audio engine
I’m trying to avoid building something unstable or overcomplicated early on, so any guidance on the correct Android + JUCE audio pipeline architecture would be much appreciated.
Thanks!
