The difference between loading 16-bit and 24-bit audio files

Hi,

I have an audio plugin.
I use AudioFormatReader to load samples into an AudioBuffer.

Previously I always worked with 24-bit audio files, but I decided to try 16-bit and was surprised: 16-bit files load faster, but the amount of RAM used is the same as with 24-bit.

Can you explain why? Is it possible to use less RAM with 16-bit files?

The audio data is converted to 32-bit float when reading in both cases, so that explains why you don’t see a difference in RAM.
And usually you read in small blocks anyway, so the difference will be very small.

For a mastered/normalised signal 16-bit is often enough; for signals which require headroom (e.g. during recording, or music with a lot of dynamic range like classical music) you should prefer 24-bit (my personal opinion).

@Daniel, thank you for the reply. I noticed the mention of 32-bit in the code comments, but I wasn’t sure I understood everything correctly.

Can you clarify one thing? My plugin is a drum synthesiser (many groups of mics, dynamic layers, articulations, etc.), so I need access to the samples immediately: many of them can sound at the same time, and a second later it can be a completely different set of sounds. All files are stored as base64 in a single file on the hard drive.
So I decided that the best way is to load the whole kit into memory. Am I wrong? Is there a better solution?

Loading everything into memory is a good idea when you can afford it (considering that many instruments need 88 notes times several velocities, it is not always an option).

I am concerned about the base64, though. It is a text format with quite some overhead (the encoded data is about 4/3 the size of the binary, plus the processing cost of decoding, although that is a trivial algorithm). What is the reason against storing the samples in a binary format?

I’m not sure there are any other options, because in the case of drums one instrument has many layers that can all be played within a few seconds. Can you suggest how to improve it?

The idea is that all the information about sounds and instruments must be in one file (the plugin library). So there is text information plus the audio files saved as base64. Maybe it’s not the best option, but I don’t know how to improve it while keeping everything in one file (the sounds must be hidden from the end user). Again, maybe you can suggest how to improve it?

Thank you!

In a similar scenario I wrote a bespoke file with a header of a known size where the text information goes (in our case JSON, because of the web folks). Then I just appended the audio data (in our case a 64-channel Ogg file).
To read it I use a SubregionStream, which wraps the actual file with the offset.

This already prevents audio software from loading it as a regular audio file, but it is of course easy to figure out.
Anything secure will add a processing cost to decode, so you will need to figure out the tradeoff.

Yes, I’ve done exactly the same (at the beginning I have a header in XML, and then the audio files in base64).

In general, I’m happy with how it works now. But if I find a way to use less RAM, that would be great. That’s why I tried 16-bit, but didn’t get the result I was hoping for.

You can write your own tables to store audio in 16-bit, and then your own players to play back those 16-bit buffers. We did that for an app I worked on, for that exact reason of saving RAM. I’m not sure if there’s an easy way to hack the existing library code to do that, but it wasn’t too hard to roll our own solution.
Pretty sure we just used int16_t and typedef’d that as float16.
