StreamingSocket::read() is broken or the documentation is wrong

Documentation says:

If this flag is false, the method will return as much data as is currently available without blocking.

However, if there is no data, this function will block until there is at least 1 byte.

Example program here: https://github.com/FigBug/juce_bugs/blob/master/StreamingSocket/Source/MainComponent.cpp

The line printf ("Why do I never get here?\n"); is never hit.
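For anyone who doesn't want to open the repo, here is a minimal sketch of the kind of call that hangs (this assumes an already-connected StreamingSocket; the buffer size and printouts are illustrative, not copied from the linked file):

```cpp
#include <juce_core/juce_core.h>
#include <cstdio>

// Minimal sketch: poll a connected StreamingSocket without blocking.
void pollSocket (juce::StreamingSocket& socket)
{
    char buffer[512];

    // blockUntilSpecifiedAmountHasArrived is false, so according to the docs
    // this should return immediately with whatever is available (possibly 0)...
    int numRead = socket.read (buffer, (int) sizeof (buffer), false);

    // ...but when no data is pending, the call above never returns.
    printf ("Why do I never get here?\n");
    printf ("read() returned %d bytes\n", numRead);
}
```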

It looks like the blocking state of StreamingSocket wasn’t being set to match that argument, unlike DatagramSocket. I can’t see a reason why that would be the case, so I’ve made the behaviours match, which should fix your issue:

Awesome, thanks.

I also posted this thread a while back, which didn’t get any response, but I think the API of DatagramSocket is broken: DatagramSocket clarification

If you ever use blockUntilSpecifiedAmountHasArrived, you will lose data unless the UDP packets received exactly fit the buffer provided.

I think it should be renamed to blockUntilPacketArrives, and the behaviour should be changed to either not block when false, or block until one packet arrives when true.

The current behaviour of combining a bunch of packets into one buffer makes it impossible to find the start and length of each packet. And truncating the last packet silently throws away data; I don’t see how that can ever be correct or useful.
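To spell out the failure mode (the packet sizes below are hypothetical, and this assumes the combining/truncating behaviour described above):

```cpp
#include <juce_core/juce_core.h>

// Sketch of the data-loss scenario with blockUntilSpecifiedAmountHasArrived.
void readWithBlocking (juce::DatagramSocket& socket)
{
    char buffer[1000];

    // Suppose three 400-byte datagrams arrive. A blocking read into a
    // 1000-byte buffer fills it with packets 1 and 2 back to back, plus the
    // first 200 bytes of packet 3: the packet boundaries are gone, and the
    // tail of packet 3 is silently discarded.
    int numRead = socket.read (buffer, (int) sizeof (buffer), true);

    DBG ("read " << numRead << " bytes with no packet boundaries");
}
```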

Hmm, it’s not ideal, but I can’t see an easy way of fixing the behaviour without breaking a lot of existing code that relies on it… I guess the best way to make sure you don’t lose any data is to do as you say: call read() with 65507 as the max size and set it to non-blocking.
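For reference, a minimal sketch of that workaround (assuming a socket that has already been bound with bindToPort(), and that one datagram is delivered per non-blocking read()):

```cpp
#include <juce_core/juce_core.h>

// Sketch: read whole datagrams one at a time without ever blocking in read().
void receiveDatagrams (juce::DatagramSocket& socket)
{
    // 65507 bytes is the maximum UDP payload, so a single packet can never
    // be truncated. Heap-allocated to keep it off the stack.
    juce::HeapBlock<char> buffer (65507);

    for (;;)
    {
        // Wait (with a timeout) until the socket is readable, then do a
        // non-blocking read.
        if (socket.waitUntilReady (true, 500) == 1)
        {
            int numRead = socket.read (buffer, 65507, false);

            if (numRead > 0)
                DBG ("received a " << numRead << "-byte packet"); // one packet per read (assumption)
        }
    }
}
```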