Just a heads-up. This one had me foxed for a while.
I noticed my client code was pumping lots of extra records into my server-side DB, even though I was sure I was submitting only once.
Something like this:
The CGI script sends back plain text, but if the lines are terminated with only “\n”, readNextLine() reads the entire content and re-submits the request on each subsequent call! If I make the server send back “\r\n” as the line terminator, everything is hunky-dory.
I’m not sure which is the “correct” way to send back line-endings, but most of the scripts I’ve seen only use “\n”.
This will just be because an internet input stream can’t reposition itself, so when readNextLine() does a seek(), it has to re-start the stream from the beginning.
The best way to fix this is to wrap your HTTP stream in a BufferedInputStream, which reads large chunks at once and keeps them in memory, so it avoids re-reading from the source.
True, I’ll take a look at the readNextLine() method.
In fact what I’d probably do myself here is not bother with the stream, but use URL::readEntireTextStream and StringArray::addLines. If it’s a small amount of data, that’d be much neater and more efficient.