Why no MemoryMappedAudioFormatWriter?

Hi all,

I’m working on an implementation of real-time audio file streaming I/O to allow playback and recording of long audio files, and lately I’ve been having a peek at the Tracktion Engine’s code for some guidance…

I’m wondering — Why does TE read from audio files using MemoryMappedAudioFormatReader, but write to them using a regular AudioFormatWriter? Why not open a MemoryMappedFile with AccessMode::readWrite and use that memory map for both reading and writing? On the surface it seems like a simpler scheme than writing on a background thread…

Of course, with memory maps the OS can also decide to write changes to disk at any time. From a glance at the documentation (UNIX/Windows) it appears that these writes occur more often when exclusive = false, to keep all readers in sync, but with exclusive = true writing only occurs when the file is unmapped or when the OS discards the page from memory, which is a potential bottleneck as RAM fills up. But with moderate RAM usage and exclusive = true, are page discards frequent enough to cause significantly worse performance compared to normal file writing?

TL;DR: Is there a reason why writing to memory mapped files is unsuitable for real-time audio recording?

Maybe because the OS doing memory paging can interrupt audio processing during the sync to disk?

I realize now that my question was poorly worded… I’m not asking whether memory maps are real-time-safe or not — they are certainly not, for exactly the reason you just described. My implementation is buffered, so the memory map is not accessed directly by the audio callback, and I can make the buffer as large as I need to account for any paging- or disk-access-related delays.

My question is about the amortized performance of memory-mapped writing vs. normal writing to disk. My understanding is that each individual read and write should be faster with a memory map than with normal file I/O, but unlike with normal file I/O we don’t have strict control over when and how often writing happens, because the OS can flush a page to disk at any time. My impression from reading this documentation (UNIX/Windows) is that flushing to disk occurs more frequently with exclusive = false, to keep all readers in sync, but with exclusive = true it only occurs on page discards and when the region is unmapped (I haven’t implemented it yet, so I could be wrong).

I’m wondering if in practice, with exclusive = true, those OS-triggered writes are infrequent enough to make memory maps perform comparably to normal file writing for audio recording, across platforms.

Of course the most surefire way to an answer would be to write a MemoryMappedAudioFormatWriter myself and profile it against an AudioFormatWriter; I’m just wondering if anyone has been down this road before and can save me from a potential dead end.
And from the conspicuous lack of a MemoryMappedAudioFormatWriter in JUCE, and Tracktion’s choice to read and write audio using two different file I/O methods when MemoryMappedFile provides the facility for both, I suspect their developers may have run into this problem before…

@maxpollack did you manage to get any answers on your questions?

I’m in need of a MemoryMappedAudioFormatWriter and I wondered if there is some specific reason for why there is no such class in the JUCE codebase.

The advantage of MemoryMappedAudioFormatReader is access at random positions.
A writer is expected to write a continuous stream, so memory mapping doesn’t seem too plausible to me.
The other advantage of using a BufferingAudioFormatWriter is that you can freely choose the audio format. The memory-mapped reader is only available for WAV, last time I checked.


This depends on the use case. Writing at random positions might be desired.
Actually, in my use case I do want to write to an audio file starting from some offset. Once I start writing, it will be a continuous stream. But all the JUCE audio file writers start from sample 0, as far as I know.

You suggested BufferingAudioFormatWriter, but I cannot find any class named BufferingAudioFormatWriter in the JUCE doc.

AFAIK most audio formats are agnostic about any external time context. Sometimes a placement within a context is provided via metadata, but that is often platform-dependent or even host-dependent.

I think you would have to pad it with silence to be sure it actually fits.

Looking at the API of AudioFormatWriter, there isn’t even a way to modify the position…

Apologies, that was from my faulty memory. I meant AudioFormatWriter::ThreadedWriter of course.

For my application, I just want to be able to write at a certain sample position (assuming it fits within the existing file size; you can also assume the file contains zeros). In my case it doesn’t have to be a mapped file. My problem is that there is no JUCE class to write audio to a file at a specific position. It’s possible to read from a specific position, but not to write to one.

Yes, it is not even a matter of a MemoryMappedWriter, but of audio file writing in general.
It would make an interesting FR.

My take on it would be that it takes so much knowledge about the existing audio that it makes more sense to read the old file and write into a new temp file, mixing with the original or finding the correct position or whatever operation your use case needs.

To move the write position, could you call setPosition on the OutputStream* you pass to the AudioFormatWriter constructor? Seems like that method is not guaranteed to succeed but at least it will return false if it fails.

No concrete answers, but I suspect the reason has to do with the lack of control over when the paging system decides to flush the memory map to disk. Even with exclusive = true the OS can decide to discard a page at any time, triggering a file write… so a strict limit on the frequency of file writes can only be imposed with traditional file I/O methods.

I don’t think this will work, since it disregards the file header, and for compressed formats it might be impossible to find the correct file position where a specific sample is written.

I could indeed call setPosition and do a bit of calculation to find the correct byte position for a sample position. The MemoryMappedAudioFormatReader class has internal helper functions (sampleToFilePos, filePosToSample, sampleToPointer) that take care of this. It would be useful for me if these helpers could become public so that I could use them to write to a specific sample position in the file.