I'm trying to understand the limitations of CoreAudio (or possibly JUCE's current implementation)...
I am writing an application that will input/output (record/play) data on a device's AES/EBU input/output channels. I want to run two instances of my application: one generates and outputs data on an AES/EBU output, and the other records the signal on another AES/EBU input channel.
When I do this, I find that the bits being generated/written by one application are NOT exactly what I read in the other application. Most of the bits are the same, but some are not. I created a trivial bit pattern that I could easily detect (alternating samples of 0 and 0.1 as float32 values). After playing and then recording, the 0 samples were correct, but the 0.1 samples were not. They were rounded.
In actuality, I set the sample to 0.1, which ends up being 0.100000001 in Xcode (looking at the value in the debugger). The hex data is CD CC CC 3D. When the value is sent through CoreAudio out of and back into the AES/EBU I/O, the value is 0.0999999046, which is hex C0 CC CC 3D. As you can see, the value was changed just slightly.
I believe this is due to CoreAudio using float32 as its native format and subtle rounding errors occurring somewhere along the path.
Does anyone have insight into the limitations of CoreAudio with respect to bit exactness of data being input/output?
Is there a way to configure CoreAudio or JUCE to treat the data as opaque binary data and not change the bits at all?