Inefficiency of AudioSampleBuffer for large 16-bit files?

Hello! Two questions.

I’m pre-caching large quantities of data from CDs to get really fast response times (and to avoid destroying the drive on looping), but I’m realizing that my buffers are twice as large as I imagined… and that’s because AudioSampleBuffer stores everything as single-precision floats, 4 bytes each, while my CD words are only 2 bytes.

Any simple way around this?
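
To be concrete, here’s roughly the shape I’d like to end up with (a hypothetical sketch, not code from my project): keep the cache as raw 16-bit words, and only convert to float for the small block that’s about to be played.

[code]
// Hypothetical sketch: store the cache as raw 16-bit CD words (2 bytes per
// sample) and convert to float only for the block that's about to be played.
#include <juce_audio_basics/juce_audio_basics.h>   // or your project's JUCE header

struct Int16Cache
{
    juce::HeapBlock<juce::int16> samples;   // interleaved L/R words
    juce::int64 numFrames = 0;              // one frame = one L/R pair

    // Convert one region of the cache into a float AudioSampleBuffer.
    // (Bounds checking omitted for brevity.)
    void fillFloatBuffer (juce::AudioSampleBuffer& dest, juce::int64 startFrame) const
    {
        const int numChans = 2;   // assuming stereo CD audio

        for (int ch = 0; ch < numChans; ++ch)
        {
            float* out = dest.getWritePointer (ch);

            for (int i = 0; i < dest.getNumSamples(); ++i)
                out[i] = samples[(startFrame + i) * numChans + ch] * (1.0f / 32768.0f);
        }
    }
};
[/code]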

Second, with modern virtual memory systems, what are the issues if I load, say, a gig or more of data into one sample in memory, assuming that I access it sequentially (so paging works nicely)? A single CD holds about 600MB, but that’d be 1.2 gig with the doubling effect…

It works fine on my machine, but I have a lot of memory. How would it work on an old machine with very little memory? (I’ll get one to test on eventually, but I’m interested to know…)

[quote]
Second, with modern virtual memory systems, what are the issues if I load, say, a gig or more of data into one sample in memory, assuming that I access it sequentially (so paging works nicely)? A single CD holds about 600MB, but that’d be 1.2 gig with the doubling effect…

It works fine on my machine, but I have a lot of memory. How would it work on an old machine with very little memory? (I’ll get one to test on eventually, but I’m interested to know…)[/quote]
The whole app would become slow and unresponsive, and the audio would stutter, because of the paging.
To avoid this, you have to work with Streams (or some other kind of intelligent caching mechanism) which only load the actively used audio data into RAM (via a background thread).

[quote]The whole app would become slow and unresponsive, and the audio would stutter, because of the paging.[/quote]

Hmm, are you sure that’s really true? That’d be true if we were randomly accessing the data, but if we’re streaming it linearly, shouldn’t any reasonable page-caching algorithm do a decent job, as long as I make sure my stream is buffered so that the hit from paging doesn’t hit the DSP thread?

I successfully worked with “huge” datasets on “small” machines this way quite a long time ago… but that was when memory was much, much more expensive and everything was different…

And what are these Streams, where can I get 'em?

[quote]Hmm, are you sure that is really true? That’d be true if we were random accessing the data - but if we are streaming it linearly, shouldn’t any reasonable page caching algorithm do a decent job, as long as I make sure that my stream is buffered so the hit from the paging doesn’t appear in the DSP thread?
[/quote]
I would not trust the OS paging algorithm; it’s not designed to be used that way. Every modern audio/video app uses an internal caching algorithm which preloads only the audio data that’s about to be played.

[quote]
And what are these Streams, where can I get 'em?[/quote]
Nothing special, something like AudioFormatReaderSource --> BufferingAudioSource (read from that).
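
Roughly like this (just a sketch; the constructor arguments may differ between JUCE versions, and the file path and buffer size are placeholders):

[code]
// Rough sketch: stream from disk on a background thread instead of loading
// the whole file into RAM. Check the constructor signatures against your
// JUCE version; the numbers and path below are placeholders.
juce::AudioFormatManager formatManager;
formatManager.registerBasicFormats();

juce::TimeSliceThread readAheadThread ("audio read-ahead");
readAheadThread.startThread();

if (auto* reader = formatManager.createReaderFor (juce::File ("/path/to/track01.wav")))
{
    auto* readerSource = new juce::AudioFormatReaderSource (reader, true);  // owns the reader

    // Buffer roughly 3 seconds ahead on the background thread, so the audio
    // thread only ever touches data that's already in RAM.
    juce::BufferingAudioSource buffered (readerSource, readAheadThread,
                                         true,                              // delete readerSource for us
                                         (int) (reader->sampleRate * 3.0),  // samples to buffer ahead
                                         2);                                // stereo

    // ...call prepareToPlay() / getNextAudioBlock() on 'buffered' as usual
}
[/code]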

Oh, good, I think we’re on the same page :wink: then! (Thanks for the chance to chat on this, I really appreciate it…)

I don’t have my big cache being read directly on the same thread that actually fills the audio stream; in other words, it’s NOT being read as part of the call to getNextAudioBlock() from the AudioDeviceManager. I have my own buffered audio source (I don’t love the JUCE one, sorry Jules; mine’s smaller and doesn’t contain its own thread) that seems to work.

OK. So I think we’re all in agreement that if I either read the disk or hit my large buffer on the audio thread, then I’ll have issues.

What I’m doing is somewhat different. I have the huge cache, which I expect to be paged out most of the time on small machines. It’s still to my advantage, because I don’t have to keep reading the CD while the user is looping on it, which is noisy on small machines and laptops, and even physically wears the drive out (and CD seeks are wicked slow…)

There are three threads here. The fetch thread copies the data into the huge cache from the original source: hard disk, CD, or even a URL. The buffering thread processes samples from that cache and puts them into my small circular real-time buffer. The audio thread, which is run by JUCE and not by me, copies data from that small real-time buffer into the output block it has been asked to fill.
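
For what it’s worth, the hand-off between the buffering thread and the audio thread is just a small lock-free circular buffer, something along these lines (a simplified sketch using juce::AbstractFifo; the names are made up and error handling is left out):

[code]
// Simplified sketch of the small real-time hand-off buffer.
// The buffering thread writes into it (and may block on page faults while
// reading the huge cache); the audio thread only reads what's already there.
struct RealtimeRingBuffer
{
    RealtimeRingBuffer (int capacity) : fifo (capacity), storage (2, capacity) {}

    // Called on the buffering thread.
    void push (const juce::AudioSampleBuffer& src)
    {
        int start1, size1, start2, size2;
        fifo.prepareToWrite (src.getNumSamples(), start1, size1, start2, size2);

        for (int ch = 0; ch < 2; ++ch)
        {
            if (size1 > 0) storage.copyFrom (ch, start1, src, ch, 0,     size1);
            if (size2 > 0) storage.copyFrom (ch, start2, src, ch, size1, size2);
        }

        fifo.finishedWrite (size1 + size2);
    }

    // Called on the audio thread: no locks, no disk, no huge cache.
    void pop (juce::AudioSampleBuffer& dest)
    {
        int start1, size1, start2, size2;
        fifo.prepareToRead (dest.getNumSamples(), start1, size1, start2, size2);

        for (int ch = 0; ch < 2; ++ch)
        {
            if (size1 > 0) dest.copyFrom (ch, 0,     storage, ch, start1, size1);
            if (size2 > 0) dest.copyFrom (ch, size1, storage, ch, start2, size2);
        }

        fifo.finishedRead (size1 + size2);
        // If fewer samples were available than requested, the remainder should
        // be cleared to silence; omitted here for brevity.
    }

    juce::AbstractFifo fifo;
    juce::AudioSampleBuffer storage;
};
[/code]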

So I’m expecting that huge cache to regularly be paging in big chunks of the audio from swap, and blocking for a few ms when it does. That’s fine.
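
Concretely, the buffering thread’s loop is the only place that ever touches the huge cache, so it’s the only place a page fault can stall anything. Something like this (another simplified sketch, reusing the made-up Int16Cache and RealtimeRingBuffer from above):

[code]
// Simplified sketch of the buffering thread's read-ahead loop. Touching the
// huge cache here may fault pages in from swap and block for a few ms; the
// audio thread never sees that stall.
class ReadAheadThread : public juce::Thread
{
public:
    ReadAheadThread (Int16Cache& cache, RealtimeRingBuffer& ring)
        : juce::Thread ("read-ahead"), hugeCache (cache), fifo (ring) {}

    void run() override
    {
        juce::AudioSampleBuffer chunk (2, 4096);   // small working block
        juce::int64 pos = 0;

        while (! threadShouldExit())
        {
            if (fifo.fifo.getFreeSpace() >= chunk.getNumSamples()
                 && pos + chunk.getNumSamples() <= hugeCache.numFrames)
            {
                hugeCache.fillFloatBuffer (chunk, pos);   // may page-fault: fine here
                fifo.push (chunk);
                pos += chunk.getNumSamples();
            }
            else
            {
                wait (5);   // nothing to do yet; sleep briefly
            }
        }
    }

private:
    Int16Cache& hugeCache;
    RealtimeRingBuffer& fifo;
};
[/code]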

But I can do that, and my audio thread should continue to run smoothly, even on a small machine… or is there a reason it wouldn’t?

Again, I’m not seeing the slightest issue at all, but it’s going to be weeks before I can really test it on a variety of machines, and I want to avoid bad surprises…