As a spin-off from some work I’m doing on tracktion, I’ve just added a new class MemoryMappedAudioFormatReader.
This allows some audio formats (WAV and AIFF only) to create special AudioFormatReaders that pull their data directly from disk via virtual memory-mapping. That means you can read from them with incredible speed and efficiency, as long as you define the region you want to access beforehand.
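For anyone wanting to try it, usage goes roughly like this (an untested sketch; `file`, `startSample` and `numSamplesToRead` are placeholders for whatever your code supplies):

```cpp
// Sketch only: create a memory-mapped reader for a WAV file.
WavAudioFormat wavFormat;
ScopedPointer<MemoryMappedAudioFormatReader> reader (wavFormat.createMemoryMappedReader (file));

if (reader != nullptr)
{
    // Map only the region you intend to play; the OS pages it in lazily.
    reader->mapSectionOfFile (Range<int64> (startSample, startSample + numSamplesToRead));

    // Then read from the mapped section like any other AudioFormatReader.
    AudioSampleBuffer buffer ((int) reader->numChannels, numSamplesToRead);
    reader->read (&buffer, 0, numSamplesToRead, startSample, true, true);
}
```

Reads outside the mapped section won't work, so make sure the range you map covers everything you're going to ask for.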
Hope this is useful to some of you, and since it’s very new and experimental, I’d love to get any feedback/bug reports about it!
First post, so thank you for this amazing library :)
I am fiddling around with this class, and it's pretty hard for a noob like me without any demonstration code or more detailed documentation.
Basically, I am trying to achieve DFD streaming for a sampler. I got it working by passing a MemoryMappedAudioFormatReader as the AudioFormatReader `&source` argument of the SamplerSound constructor. But if I map the entire file via
source.mapEntireFile();
it loads everything into memory. Most likely I don't understand the point, but how can I do real disk streaming? Is it possible to map only the section that is read by renderNextBlock()?
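To clarify what I'm imagining (an untested sketch; `position`, `samplesPerBlock` and `memoryMappedReader` are my own names):

```cpp
// Untested idea: instead of mapEntireFile(), map only the window
// that the next render callback is about to read.
const Range<int64> window (position, position + samplesPerBlock);
memoryMappedReader->mapSectionOfFile (window);

// ...then inside renderNextBlock(), read from the mapped section:
memoryMappedReader->read (&buffer, 0, samplesPerBlock, position, true, true);
```

Is something along these lines how the class is meant to be used, or is remapping per block the wrong approach?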
I also have the same question as Chrisboy2000; unfortunately my googling hasn't helped me find the answer. I am trying to use MemoryMappedAudioFormatReader on a bunch of samples in a keyboard/sampler app. But mapEntireFile() puts the whole file into memory, and when I max out the machine's memory, the OS kills the app.
Can you please provide a brief description of what concepts I'm missing? Or a direction to a source that will explain this to me?
My WAV audio sampler project successfully maps an entire audio file (my sample set) after creating a MemoryMappedAudioFormatReader. I am trying to use this new reader with the Synthesiser class from the JUCE demo. What is the correct way to implement this reader?
I've worked on this a lot but I'm stumped; pseudocode would be greatly appreciated!