Machine Learning - where to place training and testing functions


#1

I’ve built some functions for training and testing a Machine Learning model, and am thinking about the best place to run these methods. I was thinking of placing them in the paint() method, as I have concerns about running them on the audio thread. The training is essentially pushing numbers into a vector when “recording”, then measuring these numbers against another vector when “testing.”

At the moment, I’m running the audio through Max MSP and am only using JUCE for classification, but will be moving everything to JUCE later, so want to make sure I’m running these functions in the right place. Does anyone have any thoughts on this? Thanks for any help.


#2

Putting the calls into the paint() method only makes sense if you also intend to draw the data immediately. But even then, you probably don’t want to do it there, as the paint() method may or may not be called when you expect it to be. (So you might end up doing more or less work than intended.)

It’s really hard to answer anything precise because you haven’t explained how your application/plugin works to begin with.


#3

Thanks for your reply- I was vague in an attempt to keep it as simple as possible.

I have two metrics that I’m sending to JUCE from Max MSP via OSC; to keep it general, let’s just say it’s the x and y position of the mouse.

I’m using a machine learning library (RapidLib) where I can record training examples of a gesture, let’s say moving in a circle is gesture one, and moving it from left to right is gesture two. The “recording” process is taking the x and y positions and pushing them into a vector.

Once the recording process is finished, I can then continuously evaluate which gesture is the closest (kind of like Kinect) by comparing the current and past (let’s say 30) values with the gestures that I’ve already recorded in my training vector.
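To make the recording/testing idea concrete, here is a minimal sketch in plain C++. This is not RapidLib’s actual API; all names (`Gesture`, `tailDistance`, `classify`) are made up for illustration. It records (x, y) samples into per-gesture vectors, then classifies the live input by comparing its last N samples against the tail of each recorded gesture using a simple squared Euclidean distance:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <limits>
#include <utility>
#include <vector>

// One recorded gesture: a time series of (x, y) samples.
using Gesture = std::vector<std::pair<float, float>>;

// Sum of squared distances between the tails (last `window` samples)
// of two gestures; shorter series are compared over their overlap.
static float tailDistance (const Gesture& a, const Gesture& b, std::size_t window)
{
    const std::size_t n = std::min ({ window, a.size(), b.size() });
    float total = 0.0f;

    for (std::size_t i = 0; i < n; ++i)
    {
        const auto& pa = a[a.size() - n + i];
        const auto& pb = b[b.size() - n + i];
        const float dx = pa.first  - pb.first;
        const float dy = pa.second - pb.second;
        total += dx * dx + dy * dy;
    }
    return total;
}

// Returns the index of the recorded gesture whose recent shape is
// closest to the live input, or -1 if nothing has been recorded yet.
static int classify (const Gesture& live,
                     const std::vector<Gesture>& recorded,
                     std::size_t window = 30)
{
    int best = -1;
    float bestDist = std::numeric_limits<float>::max();

    for (std::size_t g = 0; g < recorded.size(); ++g)
    {
        const float d = tailDistance (live, recorded[g], window);
        if (d < bestDist) { bestDist = d; best = (int) g; }
    }
    return best;
}
```

A real gesture classifier (like the one RapidLib provides) will be more robust than a raw distance over the last 30 samples, but the data flow — push samples while “recording,” compare windows while “testing” — is the same.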

So everything needs to be as accurate and responsive as possible, from the time I hit “record,” to the time I stop the gesture, to the testing phase of the application.


#4

OK, I can’t really say anything more about it, as RapidLib isn’t something I’m familiar with. (But maybe the documentation explains whether it should be timer/event-loop driven, whether it should run on a separate thread, or whether it provides callbacks when things happen, etc…)


#5

Thanks for your help. I haven’t seen anything in the documentation, but for now I’ll run it in oscMessageReceived() and see how that goes. That makes sense to me, since the functions need to continuously monitor the incoming metrics, whether for recording or classification.


#6

That should be fine as far as the audio processing goes: oscMessageReceived() is called either on the GUI/message thread or on the OSC network thread, depending on how you’ve configured the OSC receiver. So in principle it should not interfere with what is happening on your audio thread (if you have one running).

Of course, if the calculations take a long time, that might not be so great, especially on the GUI thread, because then the GUI will get laggy or freeze. Also, the JUCE documentation for the realtime OSC network callback seems to imply that the callback method should not do much work…


#7

OK, great. For now I have it calling on the OSC thread, so I should be good to go. Thanks.


#8

I haven’t used OSC yet, but I think you shouldn’t block the OSC thread either, since it has to keep emptying the OSC buffer; otherwise you will get flooded at some point.
In OSCReceiver you can choose whether to implement your callback as a realtime callback or as a callback on the message thread, which has the aforementioned implications for GUI freezes.
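For reference, that choice is made via the template parameter of the listener base class. Below is an untested sketch (the port number and the “/xy” address are example values, not anything prescribed): with `MessageLoopCallback`, oscMessageReceived() runs on the message (GUI) thread; swapping in `RealtimeCallback` makes it run directly on the OSC network thread.

```cpp
#include <juce_osc/juce_osc.h>

// Sketch only: change MessageLoopCallback to RealtimeCallback to have
// oscMessageReceived() invoked on the OSC network thread instead of
// the message (GUI) thread.
class GestureReceiver : private juce::OSCReceiver::Listener<juce::OSCReceiver::MessageLoopCallback>
{
public:
    GestureReceiver()
    {
        receiver.connect (9001);      // port is an example value
        receiver.addListener (this);
    }

private:
    void oscMessageReceived (const juce::OSCMessage& message) override
    {
        // "/xy" is an example address pattern.
        if (message.getAddressPattern().toString() == "/xy"
             && message.size() == 2
             && message[0].isFloat32() && message[1].isFloat32())
        {
            // Keep this lightweight: just capture the values here and
            // hand the heavy training/classification work to another thread.
            const float x = message[0].getFloat32();
            const float y = message[1].getFloat32();
            juce::ignoreUnused (x, y);
        }
    }

    juce::OSCReceiver receiver;
};
```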

I would suggest checking out ThreadPoolJob. Benefits:

  • no overhead from creating a new thread for each job
  • no blocking of the GUI or audio thread
  • independent of machine specs
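The underlying pattern can be sketched with just the standard library (JUCE’s ThreadPool/ThreadPoolJob wraps the same idea and additionally reuses its threads). The names here are made up for illustration: the OSC callback only enqueues samples via `push()`, and a worker thread drains the queue and does the expensive work, so neither the GUI nor the audio thread ever blocks on it.

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <utility>
#include <vector>

// A worker thread that drains a queue of (x, y) samples, standing in
// for the training/classification work. A JUCE ThreadPool would reuse
// its threads across jobs; this sketch owns a single thread.
class GestureWorker
{
public:
    GestureWorker() : worker ([this] { run(); }) {}

    ~GestureWorker()
    {
        {
            std::lock_guard<std::mutex> lock (mutex);
            shouldExit = true;
        }
        condition.notify_one();
        worker.join();
    }

    // Called from the OSC callback: cheap, never blocks for long.
    void push (float x, float y)
    {
        {
            std::lock_guard<std::mutex> lock (mutex);
            samples.push ({ x, y });
        }
        condition.notify_one();
    }

    // For inspection: everything the worker has handled so far.
    std::vector<std::pair<float, float>> processedSoFar() const
    {
        std::lock_guard<std::mutex> lock (mutex);
        return processed;
    }

private:
    void run()
    {
        std::unique_lock<std::mutex> lock (mutex);
        for (;;)
        {
            condition.wait (lock, [this] { return shouldExit || ! samples.empty(); });

            while (! samples.empty())
            {
                auto sample = samples.front();
                samples.pop();

                lock.unlock();   // do the slow work without holding the lock
                // ... training / classification would happen here ...
                lock.lock();
                processed.push_back (sample);
            }

            if (shouldExit)
                return;
        }
    }

    mutable std::mutex mutex;
    std::condition_variable condition;
    std::queue<std::pair<float, float>> samples;
    std::vector<std::pair<float, float>> processed;
    bool shouldExit = false;
    std::thread worker;   // must be declared last, so it starts after the rest
};
```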