Best practice for transmitting larger chunks of data over Ethernet between two OSC applications

First of all: I’m not deep into network topics but I’m willing to learn :wink:

I have two applications: one that runs headless on an embedded ARM/FPGA chip running Linux, and a GUI application currently running on macOS for development purposes, which should be ported to Android & iOS in the future.

The embedded device does some heavy DSP computation and sends the results to the GUI app for visualization. They find each other via the juce NetworkServiceDiscovery and then talk via OSC, which has worked great until now. However, I’ve come to a point where I need to visualize really fine-grained data with a serious payload: nearly 500 kB per chunk, updating at a rate of maybe 5 Hz. As discussed in the forums multiple times, OSC blobs (or UDP in general) seem to become unreliable with blob sizes over 500 bytes. Splitting the data into 500-byte pieces and rearranging them on the receiver side (as it is not guaranteed that all packets arrive in the right order) is a massive overhead; I’ve done this for smaller pieces of data, but it gets nearly unusable for chunks of this size.

So my question is: what is the recommended way of handling this? I’m willing to use something other than OSC to implement it. How, for example, do video streaming services handle streaming of high-resolution video, besides compressing the data? Or how do protocols like Dante stream high channel counts of uncompressed audio under realtime constraints? Or, if I should go for splitting the data into thousands of UDP packets, are there efficient algorithms that handle the reassembly of the data on the receiver side?

Any ideas are highly appreciated!

I do this kind of thing in a very similar configuration: I stream live video from a small ARM-based computer with Yocto Linux. Many clients can connect simultaneously and receive this video. I also use NetworkServiceDiscovery for auto-discovery, which works well, but for everything else I use typical TCP/IP communication via sockets. There are many different possibilities for how to implement such a thing in detail, but TCP/IP is the way to go. You can use, for example, InterprocessConnection and InterprocessConnectionServer, which should be enough for such a task: you are limited only by the connection parameters and by how fast both sides can process your data.
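In case it helps, here is a minimal sketch of such a connection class using JUCE’s InterprocessConnection; the class name DataConnection, the IP address and the port number are made-up examples:

```cpp
// Minimal sketch, assuming JUCE. "DataConnection" and the connection
// parameters below are arbitrary example names.
class DataConnection : public juce::InterprocessConnection
{
public:
    void connectionMade() override    { DBG ("connected"); }
    void connectionLost() override    { DBG ("connection lost"); }

    void messageReceived (const juce::MemoryBlock& message) override
    {
        // Each call delivers one complete message, however large -
        // the class handles framing over the TCP stream for you.
    }
};

// Client side (e.g. the GUI app):
//   DataConnection connection;
//   connection.connectToSocket ("192.168.0.42", 1234, 2000); // host, port, timeout ms
//   connection.sendMessage (someMemoryBlock);
```

Note that messageReceived() hands you whole messages, so the 500-byte splitting problem from the original question disappears entirely.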


Thank you, somehow I always thought UDP would be the way to go for high data rates, but all the TCP classes look quite promising.

Especially the InterprocessConnection class; I always thought it only handled interprocess communication via named pipes, so I wouldn’t have looked at it :wink:

Now if I have multiple different types of realtime data that I want to send between both endpoints, is it a good idea to run multiple InterprocessConnection instances on the same port, each with a unique magicMessageHeaderNumber so that only the matching endpoints receive the right data, or should I use a different port for each type of data?

Edit: Is it even possible to use the same port with multiple instances? :thinking: Sorry, networking is a topic I’m really not that deep into.

Your ARM device should be a server, so you need an InterprocessConnectionServer object there, and every client should create its own InterprocessConnection. A client asks for data, and then you can start streaming, send just one data pack, etc. For example, you can prepare your message as a ValueTree with all the needed information about your data types, store your block of data as a MemoryBlock in this tree, write the ValueTree to a MemoryBlock and send it as a message via your InterprocessConnection. Or you can prepare a custom message format. A unique magicMessageHeaderNumber can also be used. If you need to stream different types of data independently, you can use different servers. In my software I combine all these possibilities: for example, I have a video streaming server, a file server and a database server. They all work independently, but they react appropriately to custom requests.
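A rough sketch of the ValueTree approach described above; the tree type "msg" and the property names "type" and "payload" are arbitrary examples:

```cpp
// Sketch, assuming JUCE. Pack a typed payload into a ValueTree, then
// serialize the tree into a MemoryBlock that can be passed to
// InterprocessConnection::sendMessage().
juce::MemoryBlock packMessage (const juce::String& dataType,
                               const juce::MemoryBlock& payload)
{
    juce::ValueTree tree ("msg");
    tree.setProperty ("type", dataType, nullptr);
    tree.setProperty ("payload", payload, nullptr); // a var can hold a MemoryBlock

    juce::MemoryOutputStream out;
    tree.writeToStream (out);
    return out.getMemoryBlock();
}

// Receiver side, e.g. inside messageReceived():
void unpackMessage (const juce::MemoryBlock& msg)
{
    auto tree = juce::ValueTree::readFromData (msg.getData(), msg.getSize());
    auto type = tree.getProperty ("type").toString();

    if (auto* payload = tree.getProperty ("payload").getBinaryData())
    {
        // payload points at the raw data block; dispatch on `type` here
    }
}
```

The nice part of this design is that adding a new data type later only means agreeing on a new "type" string, not changing the wire format.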

Thank you for the detailed explanation, I think I’ll go for a custom header as you proposed.

However, I’m not sure I really got the process right. This is what I think should happen:

  • First I need to start an InterprocessConnectionServer and call e.g. beginWaitingForSocket (1234).
  • Then on the other end I create an InterprocessConnection and call something like connectToSocket (ipOfARM, 1234, 1000), i.e. I try to connect to the port that I passed to beginWaitingForSocket on the server end.
  • If all works well, the server recognizes this and calls createConnectionObject. There I have to create an InterprocessConnection instance on the server side which will be owned by me, not by the server instance; the pointer passed back to the server is only used so that the server can bind the connection object to the other endpoint. No need to call connectToSocket on this end. Once the server has initialized the connection object, connectionMade will be called on the server-side connection object?

Did I get everything right?

Exactly. You can store the new connections created by the server in an OwnedArray that is a member of your server class. And of course you have to create your own connection class with the connectionMade, messageReceived and connectionLost methods implemented.
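Something along these lines (sketch only; DataServer and DataConnection are hypothetical names):

```cpp
// Sketch, assuming JUCE. The server creates one connection object per
// incoming client and keeps ownership of it in an OwnedArray member.
class DataConnection : public juce::InterprocessConnection
{
public:
    void connectionMade() override {}
    void connectionLost() override {}
    void messageReceived (const juce::MemoryBlock& message) override {}
};

class DataServer : public juce::InterprocessConnectionServer
{
public:
    juce::InterprocessConnection* createConnectionObject() override
    {
        // Called automatically for every accepted client; the OwnedArray
        // keeps ownership, the returned pointer just lets the server
        // bind the object to the new socket.
        return connections.add (new DataConnection());
    }

private:
    juce::OwnedArray<DataConnection> connections;
};

// On the ARM device:
//   DataServer server;
//   server.beginWaitingForSocket (1234);
```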

Btw, UDP can be used to transmit a lot of data, but you may experience lost packets etc. If you don’t care about that, UDP transmission is generally faster because there are no acknowledgement packets involved.

Yes, this is what I had in mind; however, there is this relatively small size limit on UDP packets (as described here). So how do this limitation and large amounts of data work together in practice?

The UDP protocol does not provide fragmentation or reordering, so you have to reassemble everything yourself. If you need full control over your data, just use TCP/IP.
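For the record, manual reassembly usually means tagging every datagram with a chunk index and a total chunk count, then reordering on arrival. A minimal sketch of that bookkeeping in plain C++ (the actual socket transport, packet loss and interleaved frames are deliberately ignored here, which is exactly why TCP is easier):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <map>
#include <optional>
#include <vector>

// Hypothetical 8-byte header prepended to each UDP payload.
struct ChunkHeader { uint32_t index; uint32_t total; };

// Split a frame into datagrams of at most maxPayload payload bytes each.
std::vector<std::vector<uint8_t>> split (const std::vector<uint8_t>& frame,
                                         size_t maxPayload)
{
    auto total = (uint32_t) ((frame.size() + maxPayload - 1) / maxPayload);
    std::vector<std::vector<uint8_t>> out;

    for (uint32_t i = 0; i < total; ++i)
    {
        ChunkHeader h { i, total };
        std::vector<uint8_t> d (sizeof h);
        std::memcpy (d.data(), &h, sizeof h);

        size_t offset = (size_t) i * maxPayload;
        size_t len    = std::min (maxPayload, frame.size() - offset);
        d.insert (d.end(),
                  frame.begin() + (std::ptrdiff_t) offset,
                  frame.begin() + (std::ptrdiff_t) (offset + len));
        out.push_back (std::move (d));
    }
    return out;
}

class Reassembler
{
public:
    // Feed one received datagram (header + payload). Returns the complete
    // frame once every chunk has arrived, regardless of arrival order.
    std::optional<std::vector<uint8_t>> feed (const std::vector<uint8_t>& datagram)
    {
        if (datagram.size() < sizeof (ChunkHeader))
            return std::nullopt;                      // malformed packet

        ChunkHeader h;
        std::memcpy (&h, datagram.data(), sizeof h);

        chunks[h.index].assign (datagram.begin() + sizeof h, datagram.end());

        if (chunks.size() < h.total)
            return std::nullopt;                      // still waiting for chunks

        std::vector<uint8_t> frame;
        for (auto& [index, bytes] : chunks)           // std::map keeps indices sorted
            frame.insert (frame.end(), bytes.begin(), bytes.end());

        chunks.clear();
        return frame;
    }

private:
    std::map<uint32_t, std::vector<uint8_t>> chunks;
};
```

With a 500 kB frame and ~500-byte payloads this still means roughly a thousand datagrams per frame, and the sketch above has no answer for a lost packet, so the frame would simply never complete. That is the work TCP does for you.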