I've started learning JUCE.
As my first project I wanted to create a simple guitar tuner, but I've run into some difficulties with frequency detection.
I went through the tutorials and studied several of them, especially the FFT-related one.

Let me show the key parts of my code.
Here are some definitions from my juce::AudioProcessor-derived class.

For simplicity, let's assume the number of samples per block is the same as fftSize, to avoid an additional FIFO.
In my processBlock function I copy data from the buffer into fftData, then I run performFrequencyOnlyForwardTransform and try to get the frequency from the result.

I've done some research and found out that I need to find the peak in that spectrum, and then from Freq = (sampleRate * fftBufferIdx) / bufferSize I can get the frequency.

Reliable pitch tracking is not an easy task.
Do you need help with the coding part or the algorithm part?
For the algorithm part, there are already some threads about it on the forum.

Thanks! With your code it seems to work properly.
I'm testing it with a guitar plugged in.

I didn't expect it would be such a complex issue, as I'm not familiar with audio engineering.
I expected to just run an FFT, find a peak, and calculate the estimated frequency based on the index of that peak and the FFT size. It turns out not to be so easy!

I’m glad to hear my pitch detection code is working for you!

Yeah, it's a pretty common misconception that the strongest FFT frequency bin corresponds to the fundamental frequency. There may be more energy in a bin that is not actually the fundamental (like an overtone or undertone), which leads to an "octave error".

What my pitch detector is doing is finding the period of the fundamental frequency by determining at what time interval the signal is most periodic, so it turns out it's actually entirely a time-domain operation; nothing here is being done in the frequency domain.

It's sort of cobbled together based on various sources I've found… papers, forum posts, StackExchange, etc. The actual periodicity calculation is a fairly standard time-domain ASDF (average squared difference function); the real magic is in the post-processing "peak picking" algorithm that prevents octave errors - that's definitely a bit more home-brewed…

Following my own intuition and some testing results, what I ended up doing was first placing some constraints on the detected period - namely, assuming that the period shouldn't halve or double between consecutive pitched frames. Then, between consecutive pitched frames, I weight each autocorrelation value proportionally to how far away that period candidate is from the last detected period - i.e., in order for a new period far away from the last one to be chosen, it has to be really significantly periodic.

This algorithm could definitely be improved, I'm sure it's a bit naive in some ways, but it seems to do pretty well with detecting the pitch of a vocal input, which is my main use case ¯\\_(ツ)_/¯