I was going through the WavAudioFormatReader class. In its read method, after reading the data from the wav file, you left-shift each channel's samples by (32 - bitsPerSample).
I wanted to know whether there is a specific reason that code is there.
All the integer data that comes out of that method needs to be standardised as full-range 32-bit samples… the left-shifting is what does that.
So what about the data in OGG or FLAC files? Is that data also standardised as full-range 32-bit samples?
Well, yes, the whole point of having a common base class for audio readers is so that they all work the same way!