Problems with URL::createInputStream

Hey Jules, hey all, we're reading data from our server via <URL::createInputStream>:

const int connectionTimeOutS = 30;
const ScopedPointer<InputStream> in(m_url.createInputStream(usePostRequest, openStreamProgressCallback, nullptr, String::empty, connectionTimeOutS * 1000));

The server needs 90 seconds to process the request, but sends some text saying "WAIT\n" every 5 seconds. For most users this works flawlessly: the <InputStream> is created almost immediately. We then periodically call <in->readNextLine()>, get a "WAIT\n" line every 5 seconds, and receive the data after 90 seconds. No problem with the timeout at all.

But: for some users, the call that creates the <InputStream> blocks until the timeout is hit after 30 seconds.

If we then change <connectionTimeOutS> to 1000 seconds, the creation of the <InputStream> takes 90 seconds, at which point the whole response arrives from the server at once. The bad thing is that we cannot safely close our program during those 90 seconds.

It happens on both Windows and Mac. So I am wondering if anyone has stumbled across this and has an idea about the issue?

Have you debugged into it to find out more about what's going on internally when this happens? Would be helpful to have some more clues about that.

Thank you for your answer, Jules.

We could not reproduce the issue locally, it only happened for a user. This user was nice enough to install a version of our software with logging activated, basically like this:

int connectionTimeOutS = 30;
writeLogWithCurrentTime("Before InputStream creation");
const ScopedPointer<InputStream> in(m_url.createInputStream(usePostRequest, openStreamProgressCallback, nullptr, String::empty, connectionTimeOutS * 1000));
writeLogWithCurrentTime("After InputStream creation");

The log told us that it took about 30 seconds until <InputStream> got created.

In a modified version, with <connectionTimeOutS> set to 1000 seconds, creation of <InputStream> took until all data was received, 90 seconds in our case.

My gut feeling says that his Internet router might buffer the server's answer until the timeout is hit or the request is finished.

Hmm.. The odd thing is that you said this happens the same way on OSX and Windows, but the JUCE code that handles that timeout value is entirely different on those OSes: in each case it just passes the value through to the OS functions that handle the HTTP stream, and I don't really know how they deal with it internally.


I think a single request that runs for more than a minute, even though it transmits no useful data most of the time, is quite unusual.

Actually, I would think about splitting this into multiple requests to solve the problem.

The server could answer the first request immediately with an estimated processing time, after which the client starts polling (with separate requests) until it gets the results. An even better solution would be some sort of bidirectional connection with an asynchronous callback, but my first suggestion would involve fewer changes to your current implementation.

Thanks lodun - actually, your first solution is what I am working on now - thank you very much for your suggestion!