I have a question regarding the imageReceived method on CameraDevice::Listener. Is that the proper place to process video frames prior to displaying them in a view? In particular, I would like to detect objects in each frame using a Haar cascade (via OpenCV) and, if an object is detected, draw a box around it.
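For context, here is a rough sketch of the kind of listener I have in mind. This assumes JUCE's CameraDevice::Listener callback and OpenCV's CascadeClassifier; the cascade file path and the ARGB/BGRA pixel-layout assumption are illustrative, not something I've verified:

```cpp
// Sketch only: assumes JUCE and OpenCV headers are available.
// Running detection on every frame inside imageReceived() may stall the
// capture callback, so the detection results should probably be handed
// off rather than drawn here directly.
class ObjectDetectingListener  : public CameraDevice::Listener
{
public:
    ObjectDetectingListener()
    {
        // Illustrative cascade file; any OpenCV Haar cascade XML would do.
        detector.load ("haarcascade_frontalface_default.xml");
    }

    void imageReceived (const Image& image) override
    {
        // Wrap the JUCE frame in an OpenCV Mat without copying
        // (assumes a 4-channel pixel layout compatible with BGRA).
        Image::BitmapData data (image, Image::BitmapData::readOnly);
        cv::Mat frame (data.height, data.width, CV_8UC4,
                       data.data, (size_t) data.lineStride);

        cv::Mat grey;
        cv::cvtColor (frame, grey, cv::COLOR_BGRA2GRAY);

        std::vector<cv::Rect> detections;
        detector.detectMultiScale (grey, detections);

        // Pass `detections` to the message thread (e.g. via
        // MessageManager::callAsync) so the component can paint the
        // boxes; components shouldn't be touched from this callback.
    }

private:
    cv::CascadeClassifier detector;
};
```

Is this the intended design, or should frame grabbing and processing live somewhere else entirely?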
I've noticed that when I use imageReceived, memory usage shoots through the ceiling while debugging, and on shutting down the app I get an EXC_BAD_ACCESS exception in QTKit::QTBackgroundQueueRun.
This can be reproduced in the JuceDemo app with the included CameraDemo. In CameraDemo.cpp, comment out ALL the code inside the imageReceived method, start the app, then click the "take snapshot" button. The button-click handler adds the CameraDevice listener, and then it's off to the memory races (on my machine).
I am using JUCE 3.1.1 in Xcode 6.1.1.
If anyone has experience with video frame processing in JUCE, I would be interested in hearing about your app design.