BufferedInputStream issue and fix

Hi Jules,

The current implementation of the BufferedInputStream is quite strange.
If you read more than the buffer size, it calls ensureBuffered() for each byte and copies the data byte by byte.

Please find a new implementation which is much faster in this case.
I spotted this while using my CoreImage image loader, which reads big blocks.
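To illustrate the idea, here is a minimal self-contained sketch (invented names, not the real class): when a request is larger than the buffer, read() copies one big block straight from the source instead of going byte per byte through the buffered path.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <cstring>
#include <vector>

// Hypothetical sketch over an in-memory "source". The point is the
// fast path in read(): a request larger than the buffer bypasses the
// per-byte buffered path and is copied from the source in one block.
struct BufferedReader
{
    BufferedReader (std::vector<char> src, std::size_t bufSize)
        : source (std::move (src)), buffer (bufSize) {}

    std::size_t read (char* dest, std::size_t numBytes)
    {
        std::size_t done = serveFromBuffer (dest, numBytes);

        if (done < numBytes)
        {
            std::size_t remaining = numBytes - done;

            if (remaining >= buffer.size())
            {
                // Fast path: the buffer is too small to help, so
                // read one big block directly from the source.
                done += readFromSource (dest + done, remaining);
            }
            else
            {
                refillBuffer();
                done += serveFromBuffer (dest + done, remaining);
            }
        }

        return done;  // actual count; may be short at end of stream
    }

    std::size_t serveFromBuffer (char* dest, std::size_t numBytes)
    {
        std::size_t avail = bufferFill - bufferPos;
        std::size_t take  = std::min (numBytes, avail);
        std::memcpy (dest, buffer.data() + bufferPos, take);
        bufferPos += take;
        return take;
    }

    void refillBuffer()
    {
        bufferPos  = 0;
        bufferFill = readFromSource (buffer.data(), buffer.size());
    }

    std::size_t readFromSource (char* dest, std::size_t numBytes)
    {
        std::size_t take = std::min (numBytes, source.size() - sourcePos);
        std::memcpy (dest, source.data() + sourcePos, take);
        sourcePos += take;
        return take;
    }

    std::vector<char> source;
    std::size_t sourcePos = 0;

    std::vector<char> buffer;
    std::size_t bufferPos = 0, bufferFill = 0;
};
```

With a 16-byte buffer, a 500-byte read turns into one big memcpy from the source rather than 500 trips through the buffered path.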


Comments are welcome.


Thanks - that’s interesting. But there are some deliberate design features that you’re ignoring there, like the overlap size: it keeps some recently accessed data so the stream can back-track a little without having to re-read from the source, which was vital for reading from things like CDs. Maybe it just needs your changes to the read() method, but not the ensureBuffered() method?
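For reference, here is a rough sketch of how such an overlap could work (all names are invented; this is not the actual implementation): when the buffered window moves forward, the last few bytes are kept at the front of the new window, so a small back-seek is served from memory instead of hitting a slow source again.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <cstring>
#include <vector>

// Hypothetical sketch of the overlap idea. sourceReads counts bytes
// actually fetched from the source, to show when a back-seek avoids
// a re-read. Positions are assumed to stay within the source.
struct OverlapBuffer
{
    OverlapBuffer (std::vector<char> src, std::size_t bufSize, std::size_t overlap)
        : source (std::move (src)), buffer (bufSize), overlapSize (overlap) {}

    // Make `pos` fall inside the buffered window, refilling if needed.
    void ensureBuffered (std::size_t pos)
    {
        if (pos >= windowStart && pos < windowStart + windowLen)
            return;  // already in memory

        std::size_t keep = 0;
        std::size_t newStart = pos;

        // When reading just past the current window, keep an overlap
        // of recently read bytes so small back-seeks stay in memory.
        if (pos == windowStart + windowLen && windowLen > 0)
        {
            keep = std::min (overlapSize, windowLen);
            newStart = pos - keep;
            std::memmove (buffer.data(),
                          buffer.data() + (windowLen - keep), keep);
        }

        std::size_t want = std::min (buffer.size() - keep,
                                     source.size() - pos);
        std::memcpy (buffer.data() + keep, source.data() + pos, want);
        sourceReads += want;

        windowStart = newStart;
        windowLen = keep + want;
    }

    char peek (std::size_t pos)
    {
        ensureBuffered (pos);
        return buffer[pos - windowStart];
    }

    std::vector<char> source;
    std::vector<char> buffer;
    std::size_t overlapSize;
    std::size_t windowStart = 0, windowLen = 0;
    std::size_t sourceReads = 0;
};
```

The memmove is exactly the cost being discussed: it buys cheap short back-seeks at the price of an extra copy on every forward refill.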

The only thing I can say is that for purely sequential reading you get a lot of memmove calls that shouldn’t be there.

I don’t get the use of

while (bytesRead < bufferSize)
buffer [bytesRead++] = 0;

There is no reason to blank the end of the buffer when you’ve finished reading the file.
It’s up to the caller to take a short read into account.
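A sketch of that behaviour (hypothetical helper, not the real code): instead of zero-filling the destination past the end of the file, just return the number of bytes actually read and let the caller deal with a short read.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <cstring>
#include <vector>

// Hypothetical helper: copy up to numBytes from `pos` in `source`.
// No zero-padding at end of file; the return value tells the caller
// how much was really read.
std::size_t readAtEof (const std::vector<char>& source, std::size_t pos,
                       char* dest, std::size_t numBytes)
{
    std::size_t avail = pos < source.size() ? source.size() - pos : 0;
    std::size_t take  = std::min (numBytes, avail);

    if (take > 0)
        std::memcpy (dest, source.data() + pos, take);

    return take;  // may be less than numBytes at end of stream
}
```

This matches what a plain, unbuffered stream does, so the buffered and unbuffered cases behave the same at end of file.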

In my code I do allow for some back-tracking, as I memmove the data that can be kept.

For sure my implementation is faster for sequential-only reading
(no memmove).
For big back-tracking I would say the speed is the same.
Small back-tracking would need some testing; I’m not sure which would be faster.

But maybe indeed the read() modification alone is enough,
together with removing the end blanking (no real speed gain, but it avoids a behaviour difference between a normal InputStream and a buffered one).


It’s back-tracking on devices with a slow seek speed that would have problems. True, blanking the end of the buffer can be removed; I’m not sure why I left that in there.

Indeed, I hadn’t thought of devices with slow seek speeds.
The read() fix is really enough, then.