Streaming sockets and Nagle's algorithm

It looks like streaming sockets always leave Nagle's algorithm enabled (i.e., they never set TCP_NODELAY). If you are doing control of any kind, that's a big problem: it adds a random extra latency to every small transmission.

Open to any way to change that - maybe an extra flag that gets checked? (Currently, the socket checks whether it is a 'datagram' and otherwise leaves Nagle enabled.)


Isn't that what this line is doing? It sets TCP_NODELAY, which disables Nagle's algorithm.
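For anyone else tripped up by the inverted naming: a minimal sketch of the standard technique using Python's `socket` module (illustrative only - this is not the library code being discussed, and `make_low_latency_socket` is a made-up name).

```python
import socket

def make_low_latency_socket() -> socket.socket:
    """Create a TCP socket with Nagle's algorithm disabled.

    The naming is inverted: setting TCP_NODELAY to 1 turns Nagle OFF,
    so small writes go out immediately instead of being coalesced.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    return s

s = make_low_latency_socket()
# getsockopt returns a nonzero value once Nagle is disabled
print(s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY))
s.close()
```

This is also easy to confirm in a debugger or with `getsockopt`, as done below.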

That's what it looks like now… :worried:
Somehow when I read that code the other day it seemed to be doing the opposite… and I was seeing an apparent delay, which seemed to go away when I set TCP_NODELAY myself.
I can't see any way that code isn't doing exactly what I want now, and the debugger agrees. I can't explain my earlier false tests.
TL;DR - thanks, ignore.