I’m trying to cross-synthesise two samples (in a plug-in). I can hear some cross-synthesis, but it sounds wrong compared to a working example I made in OpenFrameworks.

I’ve uploaded both results on Soundcloud for comparison:

Examples

Both samples are 44100 sample rate, stereo.

My method goes like this:

Initialise FFT with an order of 9 (2^9 = 512)

Initialise a Hann window (512 samples long)

Read sample 1 and sample 2 into 512-long buffers. (I read the left and right channels into separate buffers to avoid confusing myself.)

Apply the Hann window to the buffers.
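As a sketch of the windowing step in plain C++ (function and buffer names here are placeholders, not my actual code):

```cpp
#include <cmath>
#include <cstddef>

// Build a Hann window on the fly and apply it to one frame in place.
// Sketch only; in practice the window would be precomputed once.
void applyHann(float* frame, std::size_t n)
{
    const float pi = 3.14159265358979f;
    for (std::size_t i = 0; i < n; ++i)
    {
        float w = 0.5f * (1.0f - std::cos(2.0f * pi * (float) i / (float) (n - 1)));
        frame[i] *= w;
    }
}
```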

Using zeromem and memcpy, put the audio data into arrays of size 512 * 2, because the FFT function requires an input array of getSize() * 2.
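In plain C++ (standing in for zeromem), the buffer preparation might look like this; the names are assumptions for illustration:

```cpp
#include <cstring>

// Zero a 2 * fftSize float array and copy the fftSize windowed samples
// into its first half, since the real-only FFT wants getSize() * 2 floats.
// Sketch only; fftData/windowed are placeholder names.
void prepareFFTBuffer(float* fftData, const float* windowed, int fftSize)
{
    std::memset(fftData, 0, sizeof(float) * 2 * (std::size_t) fftSize); // like zeromem
    std::memcpy(fftData, windowed, sizeof(float) * (std::size_t) fftSize);
}
```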

Perform forward FFTs of the audio data.

The resulting arrays contain 512 * 2 values.

I convert both sets of complex FFT values from cartesian to polar form, i.e.:

magnitude = sqrt(a^2 + b^2)

phase = atan2(b, a)

The JUCE documentation states that the real and imaginary values in a real-only FFT array are interleaved, so the magnitudes and phases will also be interleaved.
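The polar conversion over an interleaved array can be sketched like this (plain C++, in-place; the function name is mine, not JUCE's):

```cpp
#include <cmath>

// Convert interleaved (re, im) pairs to interleaved (magnitude, phase)
// in place, matching the formulas above. Sketch only.
void cartesianToPolar(float* data, int numBins)
{
    for (int i = 0; i < numBins; ++i)
    {
        float re = data[2 * i];
        float im = data[2 * i + 1];
        data[2 * i]     = std::sqrt(re * re + im * im); // magnitude
        data[2 * i + 1] = std::atan2(im, re);           // phase
    }
}
```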

Finally, combine the magnitudes of sample 1 with the phases of sample 2, converting back to cartesian values:

```
newArray[index_1] = sample1[magnitude_1] * cos(sample2[phase_1])
newArray[index_2] = sample1[magnitude_1] * sin(sample2[phase_1])
```
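Spelled out per bin over the interleaved polar arrays, that step might look like this (names are illustrative placeholders):

```cpp
#include <cmath>

// Cross-synthesis combine: magnitude from frame 1, phase from frame 2,
// written back out as interleaved cartesian (re, im) pairs. Sketch only.
void crossSynthesise(const float* polar1, const float* polar2,
                     float* out, int numBins)
{
    for (int i = 0; i < numBins; ++i)
    {
        float mag   = polar1[2 * i];      // magnitude of sample 1
        float phase = polar2[2 * i + 1];  // phase of sample 2
        out[2 * i]     = mag * std::cos(phase); // real part
        out[2 * i + 1] = mag * std::sin(phase); // imaginary part
    }
}
```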

The new array is then fed into the inverse FFT function and added to the output buffer using addFrom (once for the left channel and once for the right).

If anyone has any experience with this kind of operation I would love to hear from you.