Stereoscopy - Tinkering Preferably including Linux

I’m tinkering with a few USB bore cams and a stereoscopy setup (a dual-camera webcam)… I’m looking at the Video demo, and I suppose I need to be able to grab a pair of frames from 2 sources, synchronised (or at least as close to each other in time as is possible / reasonable, given USB capabilities & throughput…)

My question is about the Video module… I can’t see how grabbing frames from 2 cameras can be done by one module (how can I identify the source camera device in an imageReceived callback?). The only thing I can think of is creating 2 camera devices and registering 2 separate callbacks somehow (my CPP is still awful).

I’d ideally like to implement this on a Raspberry Pi too, and I’m generally curious why Linux has no camera implementation (I can imagine it’s no walk in the park, but is it just not possible to open a /dev/video device and process it, as Android presumably does?)

I’ve tinkered with some simple MJPEG code, but haven’t had a chance to test it. (It ought to do something handy on Linux.)

Yes, you should create two separate CameraDevices and register two listeners; each listener can then forward frames to a common place in your code, for example via a callback that includes a camera index.

On Linux you need to use V4L (Video4Linux). You open a video device, initialise V4L, and then you have access to a rolling buffer of frames; you copy a frame out of the buffer for your own use.


Thanks, @MBO - I’ll give that a go. To show how useless my CPP is, I never realised I could simply have 2 imageReceived handlers! (I’m better at low-level stuff, I promise! :slight_smile: )
If I get something working on a PC/Android device first I’ll tackle Linux.