This is an example of how quick it is to prototype apps with popsicle (in plain, idiomatic JUCE style) while easily tapping into the extended set of Python libraries out there. This demo shows a background thread capturing frames from the video camera and applying face tracking (using OpenCV), then queuing the detected rectangles back to JUCE, which paints them in the paint method. The whole example is around 150 lines of code, and iterating on it is very quick.
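The worker-thread-plus-queue pattern described above can be sketched in plain Python. This is my own minimal sketch, not the demo's actual code: the OpenCV calls (cv2.VideoCapture, CascadeClassifier.detectMultiScale) are only referenced in comments, and detect_faces() is a hypothetical stand-in for the real detector.

```python
# A worker thread produces detection rectangles; the UI side drains them
# (e.g. from a timer callback) and repaints. The real demo would use
# cv2.VideoCapture(...).read() and cascade.detectMultiScale(...) instead
# of the placeholders below.
import queue
import threading
import time

rect_queue = queue.Queue()

def detect_faces(frame):
    # Placeholder for: cascade.detectMultiScale(gray_frame)
    return [(10, 20, 64, 64)]  # one fake (x, y, w, h) rectangle

def capture_loop(stop_event):
    while not stop_event.is_set():
        frame = None  # would be: ok, frame = capture.read()
        for rect in detect_faces(frame):
            rect_queue.put(rect)
        time.sleep(0.01)  # rough camera frame pacing

def drain_rects():
    """Called from the UI side to fetch all pending rectangles."""
    rects = []
    while True:
        try:
            rects.append(rect_queue.get_nowait())
        except queue.Empty:
            return rects

stop = threading.Event()
worker = threading.Thread(target=capture_loop, args=(stop,), daemon=True)
worker.start()
time.sleep(0.05)  # let the worker queue a few detections
stop.set()
worker.join()
print("queued rectangles:", len(drain_rects()))
```

The queue decouples the camera thread from the UI thread, so paint() never blocks on the detector.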
Very cool.
I imagine this would not be recommended for a production app/plugin?
Looks great for quick and fun prototyping.
It can easily be embedded in your existing plugin/app, and you can choose what to use it for. For example, you could keep your plugin's audio and model code (value trees) in C++ while your UI layer is assembled entirely in Python, ensuring you can iterate on your interface quickly just by reloading your Python scripts.
Another possibility is to allow extensions with UI to be written in Python, much like what Maya supports (in that case using PySide, because Maya is written in Qt; see for example this collection of scripts).
I mean, if you can do JUCE UIs with web views, why wouldn't you be able to do it just as effectively with Python?
Definitely, and you can even reach great performance by using a few tricks (like integrating with Cython or nimpy). Check this out:
nim_audio_example.py
import nimporter
import nim_audio  # Here we import the compiled Nim module
import numpy as np

from juce_init import START_JUCE_COMPONENT
import popsicle as juce


class AudioCallback(juce.AudioIODeviceCallback):
    gain = 1.0
    time = 0.0
    device = None

    def audioDeviceAboutToStart(self, device: juce.AudioIODevice):
        print("starting", device, "at", device.getCurrentSampleRate())
        self.device = device

    def audioDeviceIOCallbackWithContext(self, inputs, numInputChannels, outputs, numOutputChannels, numSamples, context):
        time = self.time
        for output in outputs:
            nout = np.array(output, copy=False)
            time = nim_audio.process_output(nout.data, numSamples, self.gain, self.time)
        self.time = time

    def audioDeviceError(self, errorMessage: str):
        print("error", errorMessage)

    def audioDeviceStopped(self):
        print("stopping")


class MainContentComponent(juce.Component):
    manager = juce.AudioDeviceManager()
    audio_callback = AudioCallback()

    def __init__(self):
        juce.Component.__init__(self)

        self.manager.addAudioCallback(self.audio_callback)

        result = self.manager.initialiseWithDefaultDevices(0, 2)
        if result:
            print(result)

        self.button = juce.TextButton("Silence!")
        self.addAndMakeVisible(self.button)
        self.button.onStateChange = lambda: self.onButtonStateChange()

        self.setSize(600, 400)
        self.setOpaque(True)

    def visibilityChanged(self):
        if not self.isVisible() and self.manager:
            self.manager.removeAudioCallback(self.audio_callback)
            self.manager.closeAudioDevice()

    def onButtonStateChange(self):
        if self.button.getState() == juce.Button.ButtonState.buttonDown:
            self.audio_callback.gain = 0.25
        else:
            self.audio_callback.gain = 1.0

    def paint(self, g: juce.Graphics):
        g.fillAll(juce.Colours.black)

    def resized(self):
        bounds = self.getLocalBounds()
        self.button.setBounds(bounds.reduced(100))


if __name__ == "__main__":
    START_JUCE_COMPONENT(MainContentComponent, name="Audio Device Example")
And this is the DSP part, written in Nim and compiled into a binary Python extension on import from the Python side, allowing it to reach great performance (and to be hot swapped):
nim_audio.nim
import nimpy
import nimpy/raw_buffers
import std/[math, random]

proc `+`[T](a: ptr T, b: int): ptr T =
  cast[ptr T](cast[uint](a) + cast[uint](b * a[].sizeof))

proc process_output(a: PyObject, numSamples: int, gain: float, t: float): float {.exportpy.} =
  var buffer: RawPyBuffer
  a.getBuffer(buffer, PyBUF_WRITABLE or PyBUF_ND)

  var p = cast[ptr float32](buffer.buf)
  var time = t

  for i in 0 ..< numSamples:
    p[] = (time.degToRad().sin() + (rand(2.0) - 1.0) * 0.125) * 0.5 * gain
    p = p + 1
    time += 2.0

  buffer.release()
  return time
In this example I went further and compiled the inner loops as well (in Nim here, mainly because of the great nimpy (GitHub - yglukhov/nimpy: Nim - Python bridge) and nimporter (GitHub - Pebaz/nimporter: Compile Nim Extensions for Python On Import!) facilities, but it could be pretty much any other compiled language with Python integration).
As you can see, I made the JUCE input/output channel buffers (the ones feeding audioDeviceIOCallbackWithContext) compatible with the buffer protocol of numpy, so they can be wrapped in no-copy numpy arrays and manipulated efficiently (nout = np.array(output, copy=False)). I've done the same for the juce::AudioBuffer<> class, so interoperability is awesome.
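The buffer protocol is what makes that zero-copy trick work. Here is a stdlib-only sketch of the same idea, using array.array as a stand-in for a JUCE channel buffer (numpy may not be installed everywhere, but memoryview demonstrates the identical mechanism that np.array(output, copy=False) relies on):

```python
# Zero-copy sharing via the Python buffer protocol: a memoryview wraps the
# same memory as the underlying buffer, so writes through the view are
# visible in the original object without any copying.
from array import array

channel = array("f", [0.0] * 8)   # stand-in for one float32 output channel
view = memoryview(channel)        # no copy: shares the same storage

for i in range(len(view)):
    view[i] = 0.5                 # "DSP" writes through the view...

print(channel[0])                 # ...are visible in the original buffer
```

Because no samples are copied, the Python (or Nim) DSP code writes straight into the memory JUCE hands to the audio device.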
As an example of how to do number crunching and audio generation efficiently in numpy:
def audioDeviceIOCallbackWithContext(self, inputs, numInputChannels, outputs, numOutputChannels, numSamples, context):
    start = self.time
    end = start + 2.0 * numSamples

    self.buffer[:] = (
        ((np.random.random(numSamples) * 2.0 - 1.0) * 0.025) + np.sin(np.deg2rad(np.linspace(start, end, numSamples)))
    ) * self.gain

    self.time = end % 360.0

    for output in outputs:
        nout = np.array(output, copy=False)
        nout[:] = self.buffer
This is fast: not quite as fast as what you can reach with C++ alone, but definitely usable, and with incomparable flexibility. And it could be optimised even further (for example by compiling the inner loops, as I showed before).
I have several other plans for better coexistence and performance.
This is just the beginning.
Super excited to see this up and running again! Looking forward to playing with this.
Is there an example of how you would embed this in a JUCE C++ app?
Thanks !
Personally, this is a much more appealing way of working on UI and behaviour than the web/JS route, especially if it opens up leveraging other libraries as well. Together with @sudara's melatonin_inspector module, this could be a very efficient way to develop JUCE UI.
I'm still working on the documentation for both Python and embedding usage.
A very slimmed-down example might look like this: you add juce_python as a module in your app and link in Python as well:
set (Python_USE_STATIC_LIBS TRUE)
find_package (Python REQUIRED Development.Embed)
target_link_libraries (${TARGET_NAME} PRIVATE
juce::juce_audio_basics
...
juce::juce_recommended_config_flags
juce::juce_recommended_warning_flags
Python::Python
popsicle::juce_python
popsicle::juce_python_recommended_warning_flags)
Then in your app:
// Header
class PopsicleDemo : public juce::Component
{
public:
PopsicleDemo();
~PopsicleDemo() override;
void paint (juce::Graphics& g) override;
void resized() override;
void customMethod(const juce::String& xyz);
private:
popsicle::ScriptEngine engine;
};
// Implementation
PYBIND11_EMBEDDED_MODULE(my_great_py_module, m)
{
namespace py = pybind11;
py::module_::import (popsicle::PythonModuleName);
py::class_<PopsicleDemo, juce::Component> (m, "PopsicleDemo")
.def ("customMethod", &PopsicleDemo::customMethod)
// more custom methods/properties here
;
}
PopsicleDemo::PopsicleDemo()
{
pybind11::dict locals;
locals["my_great_py_module"] = pybind11::module_::import ("my_great_py_module");
// if you also want to access full juce by implicitly importing in the script, uncomment
// locals["juce"] = pybind11::module_::import (popsicle::PythonModuleName);
locals["demo"] = pybind11::cast (this);
engine.runScript (R"(
# An example of scriptable self
print("Scripting an existing JUCE app!")
demo.customMethod("Popsicle!")
demo.setOpaque(True)
demo.setSize(600, 300)
demo.addToDesktop()
)", locals);
}
Then you expose any custom classes you have, making pybind11 aware of their relationship with the JUCE classes. A lot of the basics are already present, and more is being added constantly.
Hello! I had 10 free minutes, so I hacked together an emoji text component using some Python utilities.
It can be used like this (obviously from popsicle!):
class ExampleComponent(juce.Component):
    def __init__(self):
        juce.Component.__init__(self)

        self.emoji_one = EmojiComponent()
        self.emoji_one.setFont(juce.Font(16.0))
        self.emoji_one.setColour(juce.Colours.white)
        self.emoji_one.setText(dedent("""
            I 🏴️ 100% 🐶 agree 💯 that 👏👏👏👏 this automated 🧠 generator does 👩‍🦲 NOT 💯💯💯 provide 👍 the same 💯
            quality 👌 as hand 👋 crafted emoji 🤌 pasta. 🍝 But 🤔 I 🤔 think 🤔 there's 👉 something ‼️ cool 🧊
            about 👉 being 👏 able 💪💪 to take 👐 a 10,000 word 📖 wikipedia 💻 article 📄 and instantly add 👍
            emojis 🎉
            🎊
            🐢🦕
            🦖
            🦎🐍
        """).strip())
        self.addAndMakeVisible(self.emoji_one)

        self.slider = juce.Slider()
        self.slider.setRange(1.0, 100, 0.1)
        self.slider.setValue(16.0)
        self.slider.onValueChange = lambda: self.emoji_one.setFont(juce.Font(self.slider.getValue()))
        self.addAndMakeVisible(self.slider)

        self.setOpaque(True)
        self.setSize(600, 400)

    def paint(self, g: juce.Graphics):
        g.fillAll(self.findColour(juce.DocumentWindow.backgroundColourId, True))

    def resized(self):
        bounds = self.getLocalBounds()
        self.emoji_one.setBounds(bounds)
        self.slider.setBounds(bounds.removeFromBottom(20))


if __name__ == "__main__":
    START_JUCE_COMPONENT(ExampleComponent, name="Emoji Example")
Enjoy!
Very cool, what's being used to render the text/emojis?
That was a quick test I hacked together in between meetings. I'm splitting the text and the emojis, then using a CDN service to grab the emoji images (loaded as juce::Image) and caching them locally to speed up relaunches. Along the lines of what the pilmoji library does.
I will play around with color fonts later (they will be rendered using Pillow), now that it's easy to interoperate between a juce Image and a Pillow Image: popsicle/examples/pil_image.py at dev/small_updates · kunitoki/popsicle · GitHub
What would be super helpful for me would be a tutorial or example of how to use this within an existing JUCE project.
For example, I have a plugin where the GUI needs some layout adjustments or a new view or component. It would be great to get the hot-reloading advantage with popsicle to prototype the new or updated GUI code, and then, once it's done, port the Python code to C++ (if required).
This is something planned and I have already started to lay out the docs for it, but I'm also trying to evolve the demos (there will be more standalone and plugin examples). You can see an example of the embedding guide I'm starting here, or you can follow the embedding demo app here.
The usage of the module in existing JUCE applications/plugins is rather new and will require some more iterations.
I'm going through the quickstart guide and running into this issue where the popsicle module is not found.
pip3 install popsicle
Requirement already satisfied: popsicle in /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages (0.9.5)
python first_app.py 1 2 3 "test"
File "first_app.py", line 1, in <module>
import popsicle as juce
ImportError: No module named popsicle
When I'm attempting to run any of the examples I'm getting errors like:
python hotreload_main.py
File "hotreload_main.py", line 13
fileToWatch = juce.File(os.path.abspath(__file__)).getSiblingFile(f"{moduleName}.py")
^
SyntaxError: invalid syntax
What am I doing wrong?
You have multiple Pythons installed on your system: you installed popsicle into Python 3 (using pip3), but running python selects your Python 2 installation. It's best to keep everything pointing at the same Python install, so ideally python and python3 resolve to the same version, but that may not be the case if your installation is misconfigured.
This is easy to verify; try issuing:
python --version
and
python3 --version
Use python3 instead of python to launch the examples if you used pip3 to install popsicle.
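A quick stdlib-only way to confirm which interpreter you are actually running and whether a module is visible to it (I check "json" here so the sketch runs anywhere; on your machine you would check "popsicle" instead):

```python
# Print which interpreter is running and check whether a module is
# importable from it -- handy when pip3 installed into a different
# Python than the one that `python` launches.
import importlib.util
import sys

def module_visible(name):
    """True if `name` can be imported by *this* interpreter."""
    return importlib.util.find_spec(name) is not None

print(sys.executable)         # the exact interpreter binary being used
print(sys.version_info[:2])   # e.g. (3, 12)
print(module_visible("json")) # stdlib module, importable everywhere
# On your machine, check the real thing:
# print(module_visible("popsicle"))
```

If sys.executable does not match the Python that pip3 installed into, the import will fail even though pip reports the package as installed.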
Thanks! That worked; indeed, I have both Python 2 and 3 installed.
Popsicle looks impressive!
@anthony-nicholls I've modified the previous example to render NotoColorEmoji.ttf instead of the hack of downloading the emojis from the CDN (plus I added word/emoji wrapping).
Here we go: WebGPU integrated in popsicle! This makes it possible to have a JUCE app around a viewport implemented as a heavyweight WebGPU target (thanks to wgpu-native).
Check out the code here
I have been playing a bit with WebGPU and I would say it has come a long way! And considering it's being used by Chromium as the main backend (via Dawn), it wouldn't be a totally bad idea to make JUCE use it as a backend too, able to target Metal, Vulkan, WebGPU (when compiling with Emscripten for the browser) and OpenGL/ES. Graphics libraries like ThorVG can target WebGPU nowadays, so we could get high-performance primitive rendering on the GPU without the software rasterisation step that JUCE currently performs when rendering with OpenGL. The future is now.