VST 2.4 double floats

The thing is, though, we are talking about floating point numbers, not fixed point numbers, so even those more accurate graphs are irrelevant! You need to put the quantizer in the mantissa, which leads to a completely different result: unlike fixed point, the effective precision gains a bit with each halving of amplitude.

Now what everyone has missed thus far is that the absolute error of 1.2e-7 is 2^-23, so the difference between running full double precision and single precision for audio buffers is the least significant bit of a normalised float value of 1. If you want to consider this in terms of fixed point numbers, you would need to use a fixed exponent of, say, -1, and have all your audio data between 0.5 and 1.0. So for something in the ballpark you could look at the difference between a 24 bit fixed point sine wave and a 23 bit fixed point one. The difference will be inaudible. Single precision floating point numbers are more than enough for passing between processing units for high quality audio use.

This does seem to be the case, KVR forum goers agree.

[quote=“Izhaki”]Andrew,

I just need to say something before I start. You say me and TheVinn are biased. I don’t think it is about being biased; I think it’s that we believe we are defending textbook science, mathematical proofs and real life measurements (made by many great people over more than 60 years).
[/quote]
Ok, can you please post references to anything relevant to back yourself up here? So far I know of no papers that deal with the precision needed when passing buffers between processing units. I know of some papers that point out quite rightly how crap direct form 1 and 2 biquads are, which has nothing to do with this discussion.

I am a member of the AES and will be submitting a paper on a modified trapezoidal-integrated state variable filter structure with excellent noise properties. I currently write what are generally viewed as the highest quality analog modelling plugins available, and will be more than happy to write a paper on the appropriate use of precision in algorithms if you think that would be useful. To me it is all remedial stuff that shouldn’t rate a mention.

[quote=“Izhaki”]So I’m seriously interested in your discovery, but I can tell that you are very likely not seeing the whole picture. I have a feeling you are convincing yourself that your findings are right, while overlooking some critical steps.

Could you please explain the last line? It is clear you are finding the level difference between the float summation and the double summation cast to float.
[/quote]
The last line is the absolute error of the calculation. I was quoting the dB level of the error compared to that of the full scale signal: 20 * log10 (error / signal). Really what is needed here is the rms power of the error, but I was just keeping the example to something that could easily be reproduced by you and anyone else reading this thread.

[quote=“Izhaki”]But for starters, you should really have 20log10(fddsum/fdfsum) to find the dB level difference between them. As I mentioned earlier, 0 > 1 is 6 dB, but 60000 > 60001 is far less than that. If you do find the dB difference between the measurements, I’m sure you’ll be happy to see that these are extremely small, well below 1 dB (anyway, this should be the result).
[/quote]
Read the above.

[quote=“Izhaki”]My principal question is: what does the comparison between these two sums prove? That there’s a tiny difference between them? If this is the case, then you probably know that on a specific sample a sine-wave and a square-wave have exactly the same sample level. On the same basis, I can show you identical or very close sample levels when comparing between Beethoven’s Violin Concerto and Pink Noise.
[/quote]
It shows that the maximum error is in the least significant bit of a single precision float.

Yes. If you would be so kind as to provide a suitable test that includes Fourier analysis to calculate the rms power of the error, go ahead.

[quote=“Izhaki”]I have to make use of a key example in digital audio here. Could you please have a look at these graphs, and tell me if it is clear to you that the difference in sample value between each of the samples in the top and middle graphs is going to be very small? Never above -96dB? The last 8 of 24 bits? Never more than 0.000015258 if looked at from a float point of view (2^8/2^24)?
[/quote]
We are looking at floating point numbers here not fixed point, and the error is 1-bit.

[quote=“Izhaki”]Now can you see how much damage these tiny level changes make on the overall signal? Can you see a 40dB boost of some harmonics? Can you imagine how the second (truncated) and third (dithered) signals will have a different THD measurement?

Am I pointing you the right direction?[/quote]
No, I’m still heading in the same direction, how about you?

[quote]This does seem to be the case, KVR forum goers agree.[/quote]
Wow, someone on KVR said so, well that’s a relief! :wink:

Single precision floats are sufficient for passing between processing units, and this is the only point I am trying to make, so thanks TheVinn for sticking with it and taking a fresh look at the reality of what is going on. Double precision numbers are most likely needed internally for certain tasks in every processing unit, but dogmatic adoption of double precision is wasteful of resources and will not lead to any audible difference in audio quality.

Ok, so an easier way to go without involving an FFT is to compute the signal to noise ratio directly [1]

double rmssum = 0
loop numsamples:
    double ddsum = 0
    double dfsum = 0
    loop numchannels:
        double channel = the nth channels audio data as a double precision number
        double fadergain = the nth channels gain fader for the summing mixer
        ddsum += channel * fadergain
        dfsum += (float)(channel) * (float)(fadergain)
    float fddsum = (float) ddsum
    float fdfsum = (float) dfsum
    double error = fabs (fddsum - fdfsum)
    double ratio = fddsum / error
    rmssum += ratio*ratio
signaltonoise = 10*log10 (rmssum/numsamples)

The above results in a signal to noise ratio of 138 dB, which is loads.

Thanks Izhaki for making me come up with a more rigorous measure of what is actually going on.

[1] http://en.wikipedia.org/wiki/Signal-to-noise_ratio

Right,

This thread is spiralling out of control, and to be frank, with all my interest, I’ll have to come back to this later when I have a bit more time to look at everything Andrew wrote and perform my own tests…

We have months to settle this, so let’s just focus on developing plugins and keep this thread a bit sparser.

TBC…

Andrew,

You need to slow down man, as I’m starting to lose you completely.

double rmssum = 0
loop numsamples:
    double ddsum = 0
    double dfsum = 0
    loop numchannels:
        double channel = the nth channels audio data as a double precision number
        double fadergain = the nth channels gain fader for the summing mixer
        ddsum += channel * fadergain
        dfsum += (float)(channel) * (float)(fadergain)
    float fddsum = (float) ddsum
    float fdfsum = (float) dfsum
    double error = fabs (fddsum - fdfsum)
    double ratio = fddsum / error
    rmssum += ratio*ratio
signaltonoise = 10*log10 (rmssum/numsamples)

Is the Wikipedia equation you are using 10log10((Asignal/Anoise)^2)? Because that one is for power and analogue signals.

Digital SNR is peak to peak and uses 20log10. And in order to measure it you need noise: noise is a broadband signal, and normally the peak frequency is taken in SNR measurements. And everything we talk about might be weighted (and how exactly would this be done with your algorithm?).

Your ‘error’ variable is not noise. It’s a simple delta between two sums and you should not accumulate it.

I maintain that:

fabs (fddsum - fdfsum)

is incorrect. You should take your sums and put them into a series/waveform (something like):

float fddsum[nth sample] = (float) ddsum
float fdfsum[nth sample] = (float) dfsum

This will produce two signals that we can test, compare and perform measurements on.

And to be perfectly correct: as our ears can discern signals below the noise floor, the dynamic range will be slightly higher than the SNR measurement.

[quote=“Izhaki”]Andrew,

Is the Wikipedia equation you are using 10log10((Asignal/Anoise)^2)? Because that one is for power and analogue signals.

[/quote]

Ahh, thanks for spotting that, I wrote my post quite late last night. Here is the correct code, which results in an SNR of 145 dB, better than I previously quoted. This is an unweighted measure. Please don’t get bogged down in the details of weighting; instead concentrate on what is important: using single precision floats in buffers between processing units is easily sufficient for high quality audio. Take your time if you can’t quite get a grip on things, go ahead and do the tests yourself, and convince yourself of what is going on.

    double mssignal = 0.0;
    double msnoise = 0.0;
    for (int k=0; k<numsamples; k++)
    {
        double ddsum = 0.0;
        double dfsum = 0.0;
        for (int p=0; p<numchannels; p++)
        {
            const double faderlevel = some fader level of channel p;
            const double signal = the signal at the kth sample of channel p;
            const double dval = signal * faderlevel;
            const float  fval = (float)(signal) * (float)(faderlevel);
            ddsum += dval;
            dfsum += fval;
        }  
        const float fddsum = (float)(ddsum);
        const float fdfsum = (float)(dfsum);      
        const double noise = (double)(fddsum) - (double)(fdfsum);
        mssignal += (double)(fddsum)*(double)(fddsum);
        msnoise  += noise*noise;
    }
    mssignal /= double (numsamples);
    msnoise  /= double (numsamples);
    cout << "snr=" << 10*log10 (mssignal/msnoise) << "\n";

Andrew,

I think we should wrap this up for now.

You are clearly a very knowledgeable guy, and perhaps us not agreeing has to do with my lack of knowledge and understanding. What’s sure is we employ very different methods to tackle the problem.

If and when I have time I’ll run some tests and will present the results here. Then, I hope you’ll challenge my methods and conclusions, instead of me challenging yours.

Cheers!

[quote=“Izhaki”]Andrew,

I think we should wrap this up for now.
[/quote]
Sure, come along to the forum, say I’m wrong without backing up anything you’ve said or providing any references, and then walk away again, class.

[quote=“Izhaki”]You are clearly a very knowledgeable guy, and perhaps us not agreeing has to do with my lack of knowledge and understanding. What’s sure is we employ very different methods to tackle the problem.

If and when I have time I’ll run some tests and will present the results here. Then, I hope you’ll challenge my methods and conclusions, instead of me challenging yours.

Cheers![/quote]
You’ve not challenged my ideas in the least. You have corrected my sloppy code, which I threw together in a hurry for these forum posts, as I didn’t have time to do some lovely plots of very long FFTs like I have done in my technical papers. Please feel free to conduct your own tests and convince yourself of the reality of the situation.

Again, can you please post any references: even just one AES paper, just one IEEE paper, just one internet page, anything that isn’t marketing blurb by someone trying to sell something to people who don’t know any better. Just anything that says that double precision is needed to pass between audio processing units. Come on, how long could that take if, like you say, there are 60 years of material to draw on? Me, well I’m just this one guy posting on a forum somewhere, how could I possibly be right in the face of all that literature?

Are we still debating this, Sheldon Cooper? Asking the question in the KVR forum and getting a response was enough for me:

[attachment=0]double_precision.png[/attachment]

[quote]Are we still debating this, Sheldon Cooper? Asking the question in the KVR forum and getting a response was enough for me:[/quote]

I’ve not seen anything close to a debate thus far.

[quote=“andrewsimper”]
Again, can you please post any references, even just one AES paper, just one IEEE paper, just one internet page, anything that isn’t marketing blurb by someone trying to sell something to people who don’t know any better. Just anything that says that double precision is needed to pass between audio processing units, come on how long could that take if there is like you say you have 60 years of material to draw on? Me, well I’m just this one guy posting on a forum somewhere, how could I possibly be right in the face of all that literature?[/quote]

The text books I’m referring to are pretty much any book on digital audio.

If this is really your argument:

And if by passing you mean what one processing unit delivers to another, I’m with you on this 100% from the word go. And I have said this previously in this thread.

I’m not sure anymore what else you are saying, but these are my views:
[list]
[*] Does double precision have any quality gains over single precision when audio is being processed? - In some cases, yes.
[*] Are these quality gains measurable? - Definitely yes.
[*] Are these quality gains audible? - In some (extreme) cases, yes.
[*] Is anyone implementing double precision processing within their plugin an idiot? - No.
[*] Is there sense in having an audio engine that is double precision throughout? - Yes.
[*] Do you agree with how Andrew Simper tests the error between single and double precision? - Not sure I understand what he did.
[*] Does it really matter? - No.
[*] So why do you carry on debating? - I promise to stop.
[/list]

Izhaki, I’m glad you agree with what I originally said:

[quote=“Izhaki”]
And if by passing you mean what one processing unit delivers to another, I’m with you on this 100% from the word go. And I have said this previously in this thread.[/quote]

where the previous statement was:

Sorry for not seeing what you posted previously, and thank you for agreeing with me. Your statement was somewhat buried in a bunch of irrelevant waffle which I filtered out. This was the only point I was making and I am glad my point is made.

Hi there Jules, this is my first post.

I’m trying JUCE for the first time, this is fun!

But immediately when I went to fill in a process function, I saw that it was float only. I know that it hardly makes any difference to sound quality (at least when levels are within a reasonably normal range), and I don’t want to get into a debate about that, but there’s a strong public perception out there that plug-ins offering 64 bit depth are of better quality. Just about every host offers it now, people want to use the highest spec available, and I’ve measured no performance decrease between float and double processing (in fact, double seems slightly faster), so I see this as a must-have feature.

Is there somewhere I should submit a formal feature request, or will this do it?

And/or, should it be possible in the meantime for us to hack in double support ourselves?

And I’m fine with adding another method to handle it. That’s how VST does it. And if you don’t want to fill it in you just indicate that in the canDo’s.

Otherwise, I’m totally liking what I see here. Thanks for the great product!

  • AQ

P.S. You might want to check your web server’s time. The post times show as off by about half an hour.

Thanks!

Hey there AdmiralQuality nice to see you here!

In order to keep the library consistent, offering double precision in the audio plugin callbacks would require a host of changes to other classes. Like AudioSampleBuffer, AudioProcessor, AudioSource, AudioDeviceIOCallback, etc… Essentially every existing JUCE class that supports float samples would have to also support doubles.

The alternative would be a plugin API that has the choice of double precision, but only single precision in all of the supporting classes and that could be confusing.

[quote=“TheVinn”]Hey there AdmiralQuality nice to see you here!
[/quote]
Hey TheVinn! (I always thought it was a single word, pronounced Thev’-in. :wink: )

Yes please! :slight_smile:

[quote]

The alternative would be a plugin API that has the choice of double precision, but only single precision in all of the supporting classes and that could be confusing.[/quote]

Lately in my plug-ins I’ve been writing double-only code and I make a converter function to wrap it for 32 bit for the process functions that want it. (Parameters are still floats though.) I find the performance penalty from that method to be almost undetectable on modern systems, and it’s saved me the headache of having alternate versions of the process functions. Maybe JUCE should go to double precision audio, and provide transparent conversion to the legacy API calls that still want 32 bit. (I’m aware that would break a lot of projects and probably won’t be a very popular idea here. It wouldn’t be so much an issue if there was a macro for the JUCE sample data type instead of explicitly declaring them as float, but alas, float is hard-wired in to everything…)
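The converter-wrapper idea described above can be sketched roughly as follows. This assumes nothing about the JUCE API; all the names here are hypothetical. The float entry point widens the buffers into a reused double scratch buffer, runs the double-only process function, then narrows back:

```cpp
#include <vector>
#include <cstddef>

// Hypothetical plugin core that only implements a double-precision path.
struct MyPlugin
{
    // The "real" process function: in-place, one double buffer per channel.
    void processDouble (double* const* channels, int numChannels, int numSamples)
    {
        for (int c = 0; c < numChannels; ++c)
            for (int i = 0; i < numSamples; ++i)
                channels[c][i] *= 0.5;  // placeholder DSP: a fixed -6 dB gain
    }

    // Float wrapper for hosts/APIs that deliver 32 bit: widen, process, narrow.
    void processFloat (float* const* channels, int numChannels, int numSamples)
    {
        scratch.resize ((size_t) numChannels * (size_t) numSamples);
        std::vector<double*> ptrs ((size_t) numChannels);

        for (int c = 0; c < numChannels; ++c)
        {
            ptrs[(size_t) c] = scratch.data() + (size_t) c * numSamples;
            for (int i = 0; i < numSamples; ++i)
                ptrs[(size_t) c][i] = channels[c][i];  // float -> double is exact
        }

        processDouble (ptrs.data(), numChannels, numSamples);

        for (int c = 0; c < numChannels; ++c)
            for (int i = 0; i < numSamples; ++i)
                channels[c][i] = (float) ptrs[(size_t) c][i];  // narrow back
    }

    std::vector<double> scratch;  // reused between calls to avoid per-block allocation
};
```

The scratch vector is reused, so the allocation cost only occurs when the block size grows; in a real plugin you would pre-size it in whatever prepare-to-play callback the framework offers rather than resizing on the audio thread.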

Hmmmm… I really want to use JUCE but it’s such a show-stopper because I know I’ll get endless emails asking for double precision.