I want to unit test my JUCE implementations of algorithms against my Python prototypes.
So far, I have used “static” test setups in my JUCE unit tests, where I pre-generate results in Python for the specific input argument values used in the test. This works, but it limits the test coverage.
I envision that I could randomize (“within reason”) the parameters and inputs to my algorithms, run both the Python prototype and my JUCE implementation in parallel, and compare the results - ideally within the JUCE unit test framework.
One way to do this could be to call a command-line Python script that writes its results to a file that the unit test framework can read. However, I am hoping for a more elegant solution…
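For illustration, such a script might look roughly like this. This is just a sketch: the one-pole low-pass stands in for a real algorithm, and the argument layout, test signal, and JSON output format are arbitrary placeholder choices, not anything final. The JUCE unit test would invoke the script, parse the JSON, feed the same input through the C++ implementation, and compare.

```python
import json
import math
import sys

def one_pole_lowpass(x, a):
    """Stand-in DSP algorithm: y[n] = a*x[n] + (1 - a)*y[n-1]."""
    y, state = [], 0.0
    for sample in x:
        state = a * sample + (1.0 - a) * state
        y.append(state)
    return y

def main(argv):
    # Placeholder argument layout: <coefficient> <num_samples> <output_path>
    a, n, out_path = float(argv[0]), int(argv[1]), argv[2]
    # Deterministic test input (a 440 Hz sine at 44.1 kHz) so both sides see the same data.
    x = [math.sin(2.0 * math.pi * 440.0 * i / 44100.0) for i in range(n)]
    with open(out_path, "w") as f:
        json.dump({"input": x, "output": one_pole_lowpass(x, a)}, f)

if __name__ == "__main__":
    main(sys.argv[1:])
```

For randomized parameters, the unit test could draw values from a seeded generator and pass them on the command line, so failing runs stay reproducible.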
Does anyone have experience with this, or suggestions?
An alternative would be to go the other way: testing the JUCE code from Python against the Python prototype.
Interesting. Thank you for bringing it to my attention. This could be useful in another context.
However, I fail to understand how this will allow me to directly compare the outputs of my C++ and Python implementations of a single algorithm. Am I missing a point?
Whatever turns up, I will probably look into pedalboard anyway, as it may open some interesting possibilities. Thanks again for bringing it to my attention.
Have you considered something like pybind? That way, using Python bindings, you could create a Python module that wraps your JUCE code and call it directly from Python. Alternatively, I think you could go the other way and write a C++ wrapper around your Python code, also using pybind.
I don’t think it makes much sense to call the Python code from C++; that’s probably more trouble than it’s worth. It’s also not necessary to implement an entire plugin just for this test. If I were doing this, I would write a simple C++ executable (just for this test) that does the processing and writes the output to a file. The main test would then be a Python script: it calls this executable, loads the output file, and compares the results to the Python prototype.
Tests like this are easily managed with CMake/CTest. To get the path of the test executable into the Python script, either pass it as an argument or configure the Python script via file(GENERATE) using the generator expression $<TARGET_FILE:the_test_executable>.
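Roughly sketched, the Python side could look like this. The output file format (one float per line), the executable's argument layout, and the tolerances are placeholders to illustrate the idea; compare with a tolerance rather than exact equality, since C++ and Python float results rarely match bit-for-bit.

```python
import math
import subprocess

def read_samples(path):
    """Load whitespace-separated floats, one per line, as written by the C++ test executable."""
    with open(path) as f:
        return [float(line) for line in f if line.strip()]

def outputs_match(reference, candidate, rel_tol=1e-6, abs_tol=1e-9):
    """Element-wise comparison within tolerance; lengths must also agree."""
    return len(reference) == len(candidate) and all(
        math.isclose(r, c, rel_tol=rel_tol, abs_tol=abs_tol)
        for r, c in zip(reference, candidate)
    )

def run_cpp_and_compare(exe_path, params, reference, out_file="cpp_output.txt"):
    """Run the C++ executable (path supplied by CMake/CTest), then compare its output file
    against the Python prototype's reference output."""
    subprocess.run([exe_path, *map(str, params), out_file], check=True)
    return outputs_match(reference, read_samples(out_file))
```

With file(GENERATE), the $<TARGET_FILE:...> path can be baked directly into the configured copy of the script, so the CTest invocation stays a plain `python the_test.py`.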
Yes, using pybind is definitely an option. I have worked with it before, and I recall that it was quite cumbersome, so in all honesty I was hoping for an alternative solution.
This sounds like a fairly simple and pragmatic way of doing it. The approach you describe may very well prove to be the simplest - at least in the short term. Thank you.