First of all: I’m not deep into networking topics, but I’m willing to learn.
I have two applications: one that runs on an embedded ARM/FPGA chip running Linux without any GUI, and a GUI application currently running on macOS for development purposes, which should be ported to Android & iOS in the future.
The embedded device does some heavy DSP computation and sends out results for visualization to the GUI app. They find each other via the juce NetworkServiceDiscovery and then talk via OSC, which has worked great so far. However, I’ve come to a point where I need to visualize really fine-grained data with a serious payload: nearly 500 kB per chunk, updating at a rate of maybe 5 Hz. As discussed in the forums multiple times, OSC blobs (and UDP in general) seem to be unreliable with blob sizes over 500 bytes. Splitting the data into 500-byte pieces and reassembling them on the receiver side (since it is not known whether all packets will arrive in the right order) is a massive overhead. I’ve done this for smaller pieces of data, but it gets nearly unusable for chunks of this size.
So my question is: what is the recommended way of handling this? I’m willing to use something other than OSC to implement it. How, for example, do video streaming services handle streaming of high-resolution video, besides compressing the data? Or how do protocols like Dante handle streaming high channel counts of uncompressed audio under real-time constraints? Or, if I should go on splitting the data into thousands of UDP packets, are there efficient algorithms that handle reassembling the data on the receiver side?
Any ideas are highly appreciated!