I have quite a lot invested in Juce being at least as good as any other tool for reading audio files, so my theory is that if iTunes and Ableton can recognize an audio file, then Juce should be able to as well.
And in fact, yes, your sample file plays fine in various other tools I have, but not in my own program, so I’m interested.
I took a quick look at your file in a hex editor and the AIFF spec.
The first line in both is basically the same: first it identifies the file as an IFF, then it gives the length of the file data in bytes, then the identifier marking it as a WAVE chunk within that file.
Urg, I am interrupted. I’ll submit this and try to get back to it later this evening. Interesting puzzle!
Clearly I didn’t get time to finish this last night! (We suddenly ran out of disk space on our main server and lots of bad things happened as a result… I’m still not done cleaning up… )
Interesting, jr! Given those headers that I posted, can you say whether the offending file is ADPCM?
Your earlier post says that Juce asserts on an ADPCM… but my Juce application, running in debug mode, played this file for me without failing (though the sound was trashed…)
It behooves us as clients of Juce to get this cleaned up. It has to be achievable as iTunes does it…
We should either write our own module, or find a fix in the current source and present it to Jules.
WAV is a subset of Microsoft RIFF. The main ‘chunk’ is RIFF<size.4b>WAVE.
There are then two required subchunks, "fmt " and "data". "fmt " is the ID for the format sub-chunk, which should break down:
<size.4b>
<format.2b>
<channels.2b>
<samplerate.4b>
<byterate.4b>
<blockalign.2b>
<bitspersample.2b>
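To make that concrete, here's a minimal sketch of the whole "fmt " chunk as a C++ struct - the field names are mine, but the sizes and ordering follow the canonical RIFF/WAVE layout:

#include <cstdint>

struct WavFmtChunk                 // hypothetical names; layout per the RIFF/WAVE spec
{
    char     id[4];                // "fmt "
    uint32_t size;                 // 16 for plain PCM (excludes id and size themselves)
    uint16_t format;               // 1 = raw PCM
    uint16_t numChannels;          // 1 = mono, 2 = stereo
    uint32_t sampleRate;           // e.g. 44100
    uint32_t byteRate;             // should equal sampleRate * numChannels * bitsPerSample / 8
    uint16_t blockAlign;           // bytes in one sample frame, all channels
    uint16_t bitsPerSample;        // 8, 16, 24...
};
// All multi-byte fields are stored little-endian in the file.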
So the sub-chunk size of both is 0x10 (stored little-endian as 10 00 00 00), which is correct, since the size doesn’t include the ID and size fields themselves.
Format of both is 1 (stored as 01 00), which is raw PCM.
The top file is 2 channels, the bottom 1.
The sample rate of both is 44,100 (0xAC44, stored as 44 AC 00 00).
Byte rate for the top file is 0x15888 (88,200), bottom is 0x204cc (132,300).
The block align for the top file is 2 (2 bytes for one sample frame, all channels), the bottom is 3 (3 bytes for one sample frame).
Bits per sample is reported as 16 for the top file, 24 for the bottom.
So, the bottom file makes sense: it is a 24-bit mono 44,100 Hz file. The top one seems bogus to me - the numbers add up like an 8-bit stereo file, but then it reports itself as 16 bits per sample. I’d be curious what happens if you use a hex editor to change the last word (2b) before the “data” subchunk from “1000” to “0800” (the little-endian bytes for 16 and 8).
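To see that inconsistency concretely, here's a standalone sanity check using the numbers read from the two headers above (my own sketch, not Juce code):

#include <cstdio>

int main()
{
    const int sampleRate = 44100;

    // header values read above: channels, byte rate, claimed bits per sample
    struct File { const char* name; int channels, byteRate, claimedBits; };
    const File files[] = { { "top",    2,  88200, 16 },
                           { "bottom", 1, 132300, 24 } };

    for (const File& f : files)
    {
        // invert byteRate = sampleRate * channels * bits / 8
        const int impliedBits = 8 * f.byteRate / (sampleRate * f.channels);
        std::printf ("%s file: claims %d bits, byte rate implies %d\n",
                     f.name, f.claimedBits, impliedBits);
    }
}
// prints: top file: claims 16 bits, byte rate implies 8
//         bottom file: claims 24 bits, byte rate implies 24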
My reading of this is that there are three variables that can be set independently in the file - bit depth (8/16/32), data rate (bytes per second) and channel count - but that, given the sample rate, setting any two determines the third.
Everyone agrees what to do if all three numbers are consistent - the question is what to do when they are not.
It appears to me that Juce throws away the “channels” number and takes the bit depth if there is a conflict, whereas iTunes and Amadeus keep the channels setting and throw away the bit depth in that case.
My theory is that a tiny tweak to Juce would make it work the same as iTunes and at least one other third-party application. I’d be in favor of that, personally, because then I don’t have to answer the question, “Why does it work in iTunes?”
This is, I believe, completely independent of any problems Juce might have reading ADPCM - which we should also address…
Because it is probably a 16-bit file, and it is the other parts of the header that are wrong. Look at the original question: Juce thinks it is 8, the user thinks it is 16… Changing 1 byte was the quickest way to see which part was right and which was wrong.
Just to follow up: it is almost certainly a 16-bit file with an incorrect bytes-per-second field. From Juce’s WAV reader:
if (chunkType == chunkName ("fmt "))
{
    // read the format chunk
    const unsigned short format = (unsigned short) input->readShort();
    const short numChans = input->readShort();
    sampleRate = input->readInt();
    const int bytesPerSec = input->readInt();

    numChannels = (unsigned int) numChans;
    bytesPerFrame = bytesPerSec / (int) sampleRate;                 // derived from the byte rate field
    bitsPerSample = (unsigned int) (8 * bytesPerFrame / numChans);  // derived again - the file's own bitsPerSample field is never read
The bits per sample is being calculated from bytesPerFrame, which comes up with 8 for the file provided. bytesPerFrame is itself being calculated from the byte rate (instead of using the alignment field). The reader handles multiple WAV variants, and lots of apps get different header parts wrong, so I’d have to look more closely to see what is reasonable to change here, but it matches the data. Changing the 16 to 8 in the file breaks every other player, so Juce is calculating bitsPerSample from a field that is incorrect for the 16 bits actually in the file.
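Plugging the offending file's header values into that code shows exactly where the 8 comes from (a worked example using the numbers from the hex dump, nothing more):

#include <cassert>

int main()
{
    // header values from the offending file
    const int bytesPerSec = 88200, sampleRate = 44100, numChans = 2;

    const int bytesPerFrame = bytesPerSec / sampleRate;      // = 2, trusting the bad byte rate
    const int bitsPerSample = 8 * bytesPerFrame / numChans;  // = 8, not the 16 the file declares

    assert (bytesPerFrame == 2 && bitsPerSample == 8);
}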
I was going to snarkily add that my idea of a ‘fix’ would be to give whoever generated the wrong header in the first place a few good whacks with a sock full of laundry detergent, but one of my long-time partners in crime just reminded me that I made quite the ass of myself at a conference back in the dark ages over including some of the same fields in the format to begin with.
I argued that they were redundant and easy to get wrong, so I guess I can’t have it both ways…
The fix in the current tip unfortunately doesn’t work with the file linked in the first post. The file uses wrong bytes/second (88200 instead of 176400) and bytes/frame (2 instead of 4) fields.
The following code reads the file correctly:
if (chunkType == chunkName ("fmt "))
{
    // read the format chunk
    const unsigned short format = (unsigned short) input->readShort();
    const short numChans = input->readShort();
    sampleRate = input->readInt();
    const int bytesPerSec = input->readInt();
    input->skipNextBytes (2);            // skip the blockAlign field
    bitsPerSample = input->readShort();  // read the declared bit depth directly

    numChannels = (unsigned int) numChans;

    if (bitsPerSample < 0 || bitsPerSample > 64)
    {
        // declared bit depth looks broken - fall back to deriving it from the byte rate
        bytesPerFrame = bytesPerSec / (int) sampleRate;
        bitsPerSample = 8 * bytesPerFrame / numChans;
    }
    else
    {
        // trust the declared bit depth and recompute the frame size from it
        bytesPerFrame = numChannels * bitsPerSample / 8;
    }
}
It only uses sampleRate, channels and bitsPerSample from the header, but falls back to bytesPerSec if the bitsPerSample field seems broken (which I assume for values below 0 or above 64).
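For what it’s worth, the stricter cross-check discussed earlier (any two of the redundant fields determine the rest) could look something like this - a hypothetical helper, not something in Juce:

// Hypothetical helper: returns true if the redundant fmt-chunk fields agree.
static bool fmtFieldsConsistent (int sampleRate, int numChans,
                                 int bitsPerSample, int byteRate, int blockAlign)
{
    const int expectedBlockAlign = numChans * bitsPerSample / 8;
    const int expectedByteRate   = sampleRate * expectedBlockAlign;

    return blockAlign == expectedBlockAlign && byteRate == expectedByteRate;
}

// For the offending file, fmtFieldsConsistent (44100, 2, 16, 88200, 2)
// returns false - the byte rate and block align describe an 8-bit stereo file, not a 16-bit one.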