In the docs for AudioFormat::createMemoryMappedReader() I read that only a few formats support it, namely WAV and AIFF. I want to use it with Ogg files.
I looked into the constructor of MemoryMappedAudioFormatReader and found that I can supply a generic AudioFormat. So will this work with any AudioFormat, and does it make sense with a compressed format?
Second question: how do I use the MemoryMappedAudioFormatReader::touchSample() method? It would make no sense to iterate over the whole block, I guess. So how is this usually done? Touch the first and the last sample of a block I'm about to read?
Thanks for the insights…
EDIT: just realised that MemoryMappedAudioFormatReader is pure virtual. For some reason the subclasses are not shown in the docs, otherwise I would have noticed that there are actual implementations for WAV and AIFF. So the question remains: is this feasible for OggVorbisAudioFormat as well?
I think the problem with compressed formats is fast seeking. This is probably why the pure virtual method is the one which gives you random access to the audio data: getSample. You would need to implement this for Ogg, which may not be possible to do efficiently.
Not entirely sure. In Equator, we have a dedicated low-priority thread which does nothing other than touch all the samples in a loop, so that the OS won't swap them out while the user isn't playing the loaded sound.
Disclaimer: I'm no expert on compression, so take this answer with a huge pinch of salt.
Even for non-compressed formats, it would be appreciated if an expert on how MemoryMappedAudioFormatReader works would explain in detail how it should be used to get the maximum advantage from it. For example, I've changed the audio file reading code in my granular sound processing code to use it, and it appears to work, but I'm not sure whether I'm initially mapping too much (the whole file) and whether I'm touching the samples too much or too often.
So the seeking problem could be solved for CBR files, but for VBR it would indeed need a search. I don't know if Ogg has PTS/DTS timestamps like video formats do.
Eventually the solution would be a BufferingAudioSource (which is what I use currently), but with a dynamic buffer size that the OS could then put into swap space. I haven't done any manual memory management, but it's worth a try.
…does it really touch ALL samples? I see that there is a frameSize parameter; shouldn't it be enough to touch one sample in each frame?
OK, just checked the code again. We use a hop size of 512.
Also should note that it’s only worth touching samples at all if you’re doing real-time audio and want to make sure your audio thread will have hot data ready. If you’re just processing files off-line, there’s no need to bother with that kind of thing.
Hmm, but doesn't it cause an assertion if the samples haven't been touched before you try to access them? Edit: apparently not. I recalled incorrectly that that would happen.
Thanks @jules for pointing that out.
In this case my problems are indeed realtime issues: I get buffer underruns in BufferingAudioSource, and I wondered if MemoryMappedAudioFormatReader would solve them.
But given fabian's answer, I'll probably stick with BufferingAudioSource.
If you need really high performance and are running 64-bit, memory-mapping is much faster. When we changed Tracktion to use it, the performance improved significantly. But you have to use it correctly and have a thread warming up the samples you'll need.
The other advantage of memory-mapping is that if the machine has plenty of memory, the OS will use it. Certainly in Tracktion, if there's enough memory, once an edit starts playing the OS often just caches everything and never re-reads from disk.