So I am about to embark on a project with the following goal:
Enable the user to script audio synthesis/filtering algorithms in Python/Numpy and hear the results in real-time via a VST hosted in a DAW.
Think of it as ChucK or Max/MSP wrapped in a VST (built with JUCE, of course), but using Python/NumPy instead. I doubt I'm the first one to think of doing something like this, so a question to the giants who came before:
Is it better to embed the Python interpreter in the plugin? Or should I extend Python with a module that communicates (via IPC? ReWire?) with the plugin, which would then just act as a sort of sound server? Based on my research so far, extending Python seems much easier, in that you don't have to do any extra work to keep the Python REPL/event loop running, and users can keep using their own Python environment. However, I have no experience with low-latency audio over an IPC channel.
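To make the extension route concrete, here is a minimal sketch (all names and the wire format are hypothetical, and a real plugin would use a lock-free ring buffer or shared memory rather than TCP) of a Python-side module rendering one NumPy audio block and pushing it over a local socket to a toy "sound server", the role the VST would play:

```python
import socket
import threading
import numpy as np

BLOCK = 256    # samples per block (hypothetical buffer size)
RATE = 44100   # sample rate in Hz

def render_sine(freq, phase, n=BLOCK, sr=RATE):
    """User-scriptable DSP: render one block of a sine wave with NumPy."""
    t = (phase + np.arange(n)) / sr
    return np.sin(2 * np.pi * freq * t).astype(np.float32)

def read_block(conn):
    """Toy 'sound server' side: read one raw float32 block off the socket."""
    need = BLOCK * 4  # 4 bytes per float32 sample
    buf = b""
    while len(buf) < need:
        chunk = conn.recv(need - len(buf))
        if not chunk:
            break
        buf += chunk
    return np.frombuffer(buf, dtype=np.float32)

# Demo: loop one block through a local TCP socket, standing in for the
# plugin <-> Python IPC channel.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

received = {}
def server():
    conn, _ = srv.accept()
    received["block"] = read_block(conn)
    conn.close()

t = threading.Thread(target=server)
t.start()

cli = socket.socket()
cli.connect(("127.0.0.1", port))
block = render_sine(440.0, phase=0)
cli.sendall(block.tobytes())  # Python side pushes rendered audio to the server
cli.close()
t.join()
srv.close()
```

This is only the data path; the hard part for real-time use is that the plugin's audio callback can never block on the socket, so in practice the server side would pre-buffer a few blocks and fall back to silence when Python underruns.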