I asked ChatGPT-4 to write a synth using JUCE!

I’ve been having some fun over the past few days with ChatGPT-4 and asked it to write a 4-voice synth using JUCE, with a basic oscillator and filter.

Rather than pollute the forum with the entire result, here is what it did for the editor. All I prompted it to do was “Can you now create the plugin editor for the synth”

class SynthAudioProcessorEditor : public juce::AudioProcessorEditor
{
public:
    SynthAudioProcessorEditor(SynthAudioProcessor& processor)
        : juce::AudioProcessorEditor(&processor), audioProcessor(processor)
    {
        // Oscillator waveform ComboBox
        waveformSelector.addItemList(processor.waveformChoices, 1);
        waveformSelector.onChange = [this]
        {
            audioProcessor.oscillatorWaveformParam->beginChangeGesture();
            audioProcessor.oscillatorWaveformParam->setValueNotifyingHost((float)waveformSelector.getSelectedId() - 1);
            audioProcessor.oscillatorWaveformParam->endChangeGesture();
        };
        waveformSelector.setSelectedId(processor.oscillatorWaveformParam->getIndex() + 1);
        addAndMakeVisible(waveformSelector);
        // Filter cutoff Slider
        filterCutoffSlider.setSliderStyle(juce::Slider::RotaryVerticalDrag);
        filterCutoffSlider.setRange(20.0f, 20000.0f, 1.0f);
        filterCutoffSlider.setValue(processor.filterCutoffParam->get());
        filterCutoffSlider.onValueChange = [this] { audioProcessor.filterCutoffParam->setValueNotifyingHost(filterCutoffSlider.getValue()); };
        addAndMakeVisible(filterCutoffSlider);
        // Filter resonance Slider
        filterResonanceSlider.setSliderStyle(juce::Slider::RotaryVerticalDrag);
        filterResonanceSlider.setRange(1.0f, 10.0f, 0.1f);
        filterResonanceSlider.setValue(processor.filterResonanceParam->get());
        filterResonanceSlider.onValueChange = [this] { audioProcessor.filterResonanceParam->setValueNotifyingHost(filterResonanceSlider.getValue()); };
        addAndMakeVisible(filterResonanceSlider);
        // Set the editor window size
        setSize(500, 600);
    }
    void resized() override
    {
        // Position the controls
        waveformSelector.setBounds(20, 20, 150, 30);
        filterCutoffSlider.setBounds(200, 20, 100, 100);
        filterResonanceSlider.setBounds(350, 20, 100, 100);
    }
private:
    SynthAudioProcessor& audioProcessor;
    // Parameter controls
    juce::ComboBox waveformSelector;
    juce::Slider filterCutoffSlider;
    juce::Slider filterResonanceSlider;
};
2 Likes

that sucks tbh. parameters and all their properties should be owned by the processor, and the editor should get its information via attachments. all this manual typing of ranges and default values in the editor should be avoided, and chatGPT only does it because it has too much noob code in its training data, probably from reading a lot of github pages of free plugin projects.

chatGPT, if you read this, do better!

4 Likes

I asked “Can you use attachments?” and pasted in the above code. Below is the answer.

Yes, you can use juce::AudioProcessorValueTreeState::SliderAttachment and juce::AudioProcessorValueTreeState::ComboBoxAttachment to automatically link the UI components to their respective parameters in the SynthAudioProcessor. This approach simplifies the process of synchronizing the UI and processor parameters.

First, you need to include the necessary header:

#include <JuceHeader.h>

Then, modify the SynthAudioProcessorEditor class to use attachments:

class SynthAudioProcessorEditor : public juce::AudioProcessorEditor
{
public:
    SynthAudioProcessorEditor(SynthAudioProcessor& processor)
        : juce::AudioProcessorEditor(&processor), audioProcessor(processor)
    {
        // Oscillator waveform ComboBox
        waveformSelector.addItemList(processor.waveformChoices, 1);
        waveformSelector.setSelectedId(processor.oscillatorWaveformParam->getIndex() + 1);
        addAndMakeVisible(waveformSelector);
        waveformAttachment = std::make_unique<juce::AudioProcessorValueTreeState::ComboBoxAttachment>(audioProcessor.parameters, "oscillatorWaveform", waveformSelector);

        // Filter cutoff Slider
        filterCutoffSlider.setSliderStyle(juce::Slider::RotaryVerticalDrag);
        filterCutoffSlider.setRange(20.0f, 20000.0f, 1.0f);
        filterCutoffSlider.setValue(processor.filterCutoffParam->get());
        addAndMakeVisible(filterCutoffSlider);
        filterCutoffAttachment = std::make_unique<juce::AudioProcessorValueTreeState::SliderAttachment>(audioProcessor.parameters, "filterCutoff", filterCutoffSlider);

        // Filter resonance Slider
        filterResonanceSlider.setSliderStyle(juce::Slider::RotaryVerticalDrag);
        filterResonanceSlider.setRange(1.0f, 10.0f, 0.1f);
        filterResonanceSlider.setValue(processor.filterResonanceParam->get());
        addAndMakeVisible(filterResonanceSlider);
        filterResonanceAttachment = std::make_unique<juce::AudioProcessorValueTreeState::SliderAttachment>(audioProcessor.parameters, "filterResonance", filterResonanceSlider);

        // Set the editor window size
        setSize(500, 600);
    }

    // ...

private:
    SynthAudioProcessor& audioProcessor;

    // Parameter controls
    juce::ComboBox waveformSelector;
    juce::Slider filterCutoffSlider;
    juce::Slider filterResonanceSlider;

    // Attachments
    std::unique_ptr<juce::AudioProcessorValueTreeState::ComboBoxAttachment> waveformAttachment;
    std::unique_ptr<juce::AudioProcessorValueTreeState::SliderAttachment> filterCutoffAttachment;
    std::unique_ptr<juce::AudioProcessorValueTreeState::SliderAttachment> filterResonanceAttachment;
};

This code creates attachments for each UI component and connects them to the corresponding parameters in the SynthAudioProcessor. The parameter identifiers (“oscillatorWaveform”, “filterCutoff”, “filterResonance”) should match the ones used when creating the parameters in the SynthAudioProcessor class.

Make sure the SynthAudioProcessor has an AudioProcessorValueTreeState object named parameters to store the parameters and their values.
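For reference, the processor side might look something like this minimal sketch (the parameter IDs and ranges come from the code above; the choice list, default values, and everything else are illustrative):

class SynthAudioProcessor : public juce::AudioProcessor
{
public:
    const juce::StringArray waveformChoices { "Sine", "Saw", "Square", "Triangle" }; // illustrative choices

    SynthAudioProcessor()
        : parameters(*this, nullptr, "PARAMETERS", createParameterLayout())
    {
    }

    juce::AudioProcessorValueTreeState parameters;

    // ... processBlock(), prepareToPlay() and the rest of the AudioProcessor
    // interface omitted for brevity ...

private:
    juce::AudioProcessorValueTreeState::ParameterLayout createParameterLayout()
    {
        juce::AudioProcessorValueTreeState::ParameterLayout layout;

        // IDs must match the ones passed to the attachments in the editor
        layout.add(std::make_unique<juce::AudioParameterChoice>(
            "oscillatorWaveform", "Waveform", waveformChoices, 0));

        layout.add(std::make_unique<juce::AudioParameterFloat>(
            "filterCutoff", "Cutoff",
            juce::NormalisableRange<float>(20.0f, 20000.0f, 1.0f), 1000.0f)); // default is illustrative

        layout.add(std::make_unique<juce::AudioParameterFloat>(
            "filterResonance", "Resonance",
            juce::NormalisableRange<float>(1.0f, 10.0f, 0.1f), 1.0f)); // default is illustrative

        return layout;
    }
};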

I think the mind-blowing thing is that you can give it a very non-explicit sentence like “use attachments”, and it can infer you mean ParameterAttachments and then off it goes and spits out the class in around 5 seconds!

1 Like

I believe ChatGPT was only trained on data up to 2021

Rail

it’s better, but the AI doesn’t get rid of the error-prone lines, and without a human even seeing the mistake it would have gone unnoticed for sure. chatGPT is nice and all, but real communities are more helpful, even if they can get annoyed when you ask too many questions sometimes

4 Likes

I can’t wait for the trash pile of software releases that’s coming.

10 Likes

I’ve been using Copilot recently. It’s somewhat useful and is able to quite effectively stop me having to write the sort of boilerplate I always end up writing, reducing a fair bit of my day to reading the suggestion and pressing Tab. Sometimes I have to correct small mistakes it makes, but other times it’s literally hallucinating imaginary JUCE classes/methods that would make sense in the context, if they actually existed! :joy:

A non-programmer friend also showed me something he asked ChatGPT to create involving some FFT. It wrote something that looked plausible and I’m sure would have compiled, but there was a for loop in it that literally summed up a bunch of values and then discarded the result. I suppose the optimiser would at least have removed that code, which was doing absolutely nothing other than burning CPU cycles. :wink:
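It was something to this effect (a hypothetical reconstruction, not his actual code; fftData and fftSize are made-up names):

// Hypothetical reconstruction of the kind of dead code described above
float sum = 0.0f;

for (int i = 0; i < fftSize; ++i)
    sum += std::abs(fftData[i]);

// 'sum' is never read again, so the loop achieves nothing
// beyond burning CPU cycles (until the optimiser deletes it)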

The TL;DR of this is that whilst these tools might be useful, they can’t be blindly trusted yet, and @Fandusss has likely hit the nail on the head; software might be heading the way music has, where there is a ton of shit to sift through before finding anything decent from new players in the field.

6 Likes

I was playing around the other day with asking ChatGPT to write some DSP for me.

I asked it to write a process() function that would make my vocals warmer, and it gave me something that added a low-pass filter and a simple waveshaper. Not the most exciting, but certainly a good start for a basic saturation effect.
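The gist of it was presumably along these lines (a minimal sketch of the low-pass-plus-waveshaper idea; the coefficient and drive constants here are placeholders, not the values it actually produced):

#include <cmath>

// Minimal sketch: a one-pole low-pass feeding a tanh soft clipper
struct WarmthProcessor
{
    float lpState = 0.0f; // one-pole filter memory (one per channel in practice)

    void process(float* samples, int numSamples)
    {
        const float lpCoeff = 0.15f; // higher = brighter; placeholder value
        const float drive = 2.0f;    // waveshaper input gain; placeholder value

        for (int i = 0; i < numSamples; ++i)
        {
            // low-pass to round off the top end
            lpState += lpCoeff * (samples[i] - lpState);

            // gentle saturation for the "warmth"
            samples[i] = std::tanh(drive * lpState);
        }
    }
};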

GitHub Copilot is cool. It knows about the most recent ways to write JUCE code, and you don’t have to actively ask it to do things; you can just sit back and check out what it offers. Boilerplate code it solves like 90% of the time without a mistake, like when you have to write a toString method for an enum yet again. Annoying stuff that we shouldn’t really have to do is now gone for good. When it comes to more complex tasks where real decisions have to be made, it’s a bit hit-and-miss, but still better than figuring out everything on your own
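For example, the kind of enum boilerplate it reliably completes (the enum and names here are made up):

enum class Waveform { sine, saw, square, triangle };

// Copilot tends to fill in the remaining cases after you type the first one
inline juce::String toString(Waveform w)
{
    switch (w)
    {
        case Waveform::sine:     return "Sine";
        case Waveform::saw:      return "Saw";
        case Waveform::square:   return "Square";
        case Waveform::triangle: return "Triangle";
    }

    return {};
}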

1 Like

My chatGPT experience was slightly different. I presented code that I had written and asked chatGPT to make suggestions to improve the code. It did a fairly good job of that, often making simple suggestions that were, indeed, improvements. But I also ran into it hallucinating non-existent functions. And, when asked to produce the non-existent functions, it was not able to do so.

So, it is a mixed bag for sure.

This was chatGPT 3.5. With the improvements in chatGPT 4.0, some of these issues are bound to go away. And the prospect of what some future chatGPT 5.0 will be capable of is mind-boggling.

If you haven’t tried Copilot yet then give it a go. I’m only a week into using it and still in the “this is completely insane voodoo magick” stage. As mentioned above, it does come up with some nonsense, but the vast majority of the time it’s saving me so much time with the suggestions, even if they do occasionally need to be corrected.

The only real bummer is that you can’t use it with Xcode, so I made the switch to CLion (which I also highly recommend over Xcode now that I’ve got my head around it; having CMake as a first-class citizen is also great, and I’ll finally make the effort to switch all my projects over, or at least build anything new with it instead of the Projucer).

in this video github copilot writes the entire parameter layout function, correctly identifying that it needs to create the parameters for cutoff and q because there is a dsp::Filter in the header.
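roughly what it generates, sketched from memory rather than transcribed from the video (parameter IDs, ranges and defaults are illustrative):

juce::AudioProcessorValueTreeState::ParameterLayout createParameterLayout()
{
    // note the push-back-into-a-vector style discussed further down the thread
    std::vector<std::unique_ptr<juce::RangedAudioParameter>> params;

    auto cutoff = std::make_unique<juce::AudioParameterFloat>(
        "cutoff", "Cutoff",
        juce::NormalisableRange<float>(20.0f, 20000.0f, 0.0f, 0.3f), 1000.0f);
    params.push_back(std::move(cutoff));

    auto q = std::make_unique<juce::AudioParameterFloat>(
        "q", "Q",
        juce::NormalisableRange<float>(0.1f, 10.0f, 0.0f, 0.5f), 0.707f);
    params.push_back(std::move(q));

    return { params.begin(), params.end() };
}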

1 Like

Interesting times!

I can also imagine that in the not-too-distant future you’ll be able to pay for a ChatGPT instance that you can train on your own private repo. You’ll have to consider how easy it is for ChatGPT to learn your code and adjust accordingly. At this point I don’t know if it learns from the comments, just the code, or both.

I can also imagine it will be great for code reviews, which are generally a pain in the ass. I like the idea of using ChatGPT in a similar way to pair programming.

2 Likes

Interesting indeed!

I can imagine that some time after you’ve trained ChatGPT on your repo, I or someone else could ask it to make a plugin that sounds just as awesome as one of gavinburke’s. Who owns that code? It might not be a direct copy from your repo, but something that sounds similar, possibly in my style or someone else’s.

@oxxyyd I can imagine that large companies with a lot of IP that they’ve spent time developing will require and pay for a custom instance that is trained on their sources and only available to members of their team.

Obviously with Microsoft involved we won’t be getting any open-source GPT-4!

I asked ChatGPT: “Can you write a pitch detection algorithm based on the YIN method?” I can see the issue you described, @oxxyyd, as I’m not sure where this algorithm has come from and what the license is, if any.

Answer:

Sure! The YIN algorithm is a widely used method for pitch detection. Here’s an implementation of the AudioFilePitchDetector class that uses the YIN algorithm for pitch detection:

#include <JuceHeader.h>
#include <cfloat>  // for FLT_MAX
#include <vector>  // for the yinBuffer below

struct PitchedAudioSample
{
    float pitch;
    juce::AudioBuffer<float> audioSample;
};

class AudioFilePitchDetector
{
public:
    AudioFilePitchDetector()
    {
    }

    juce::Result loadAudioFile(const juce::File& file, PitchedAudioSample& outSample)
    {
        std::unique_ptr<juce::AudioFormatReader> reader(formatManager.createReaderFor(file));

        if (reader == nullptr)
        {
            return juce::Result::fail("Failed to create an AudioFormatReader for the given file.");
        }

        // lengthInSamples is an int64, so cast to int for the buffer/read APIs
        juce::AudioBuffer<float> audioBuffer((int) reader->numChannels, (int) reader->lengthInSamples);
        reader->read(&audioBuffer, 0, (int) reader->lengthInSamples, 0, true, true);

        float pitch = detectPitch(audioBuffer);
        if (pitch < 0)
        {
            return juce::Result::fail("Failed to detect the pitch of the audio file.");
        }

        outSample.pitch = pitch;
        outSample.audioSample = std::move(audioBuffer);

        return juce::Result::ok();
    }

private:
    float detectPitch(const juce::AudioBuffer<float>& buffer)
    {
        float pitch = -1.0f;
        float bestConfidence = FLT_MAX;

        for (int channel = 0; channel < buffer.getNumChannels(); ++channel)
        {
            const auto* channelData = buffer.getReadPointer(channel);
            int numSamples = buffer.getNumSamples();

            std::vector<float> yinBuffer((size_t) (numSamples / 2), 0.0f); // variable-length arrays aren't standard C++
            float runningSum = 0.0f;

            for (int tau = 1; tau < numSamples / 2; ++tau)
            {
                for (int i = 0; i < numSamples / 2; ++i)
                {
                    float delta = channelData[i] - channelData[i + tau];
                    yinBuffer[tau] += delta * delta;
                }

                runningSum += yinBuffer[tau];

                if (runningSum > 0)
                {
                    yinBuffer[tau] *= tau / runningSum;
                }

                if (tau > 1 && yinBuffer[tau] < yinBuffer[tau - 1] && yinBuffer[tau] < bestConfidence)
                {
                    pitch = static_cast<float>(getSampleRate()) / tau;
                    bestConfidence = yinBuffer[tau];
                }
            }
        }

        return pitch;
    }

    double getSampleRate() const
    {
        // You can adjust the sample rate based on the specific audio file, or set it to a fixed value.
        return 44100.0;
    }

    juce::AudioFormatManager formatManager
    {
        []()
        {
            juce::AudioFormatManager manager;
            manager.registerBasicFormats();
            return manager;
        }()
    };
};

This version of AudioFilePitchDetector uses the YIN algorithm, which is more accurate than the previous pitch detection method. The main changes are in the detectPitch method, which now calculates the pitch using the YIN algorithm.

To use this class in your JUCE application, create an instance of AudioFilePitchDetector, call loadAudioFile with an audio file, and process the PitchedAudioSample as needed.
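Usage would presumably look something like this (the file location is illustrative):

AudioFilePitchDetector detector;
PitchedAudioSample sample;

auto file = juce::File::getSpecialLocation(juce::File::userHomeDirectory)
                .getChildFile("vocal.wav"); // illustrative path

auto result = detector.loadAudioFile(file, sample);

if (result.wasOk())
    DBG("Detected pitch: " << sample.pitch << " Hz");
else
    DBG(result.getErrorMessage());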

You didn’t ask it (or him, or her; does ChatGPT have a gender, by the way?) “Did you think that up yourself, or did you just copy it from someone?” :smirk:

1 Like

it shouldn’t matter who chatGPT and copilot are stealing from. it’s like sampling in music production, where you do take material from someone else, but change it and its context so much that it’s not a comparable product anymore.

for example, in the parameter layout code of my video there are some lines where the created parameter pointers are pushed back into that vector, and when i saw that i was like ‘huh? why didn’t it just create them directly in the vector? i wouldn’t have written it like that, but ok’

a week later i was randomly watching other people’s youtube videos and noticed this was exactly dr bruisin’s style of writing the parameter layout function.

now, considering his code was licensed in some way: would it matter, even if people identified the pattern? it doesn’t make a difference technically whether the code is written one way or the other. i would have written something very similar anyway. what would be an actual humanly understandable, non-legalistic argument for this practice being unfair towards anyone?

I think it’s a real grey area tbh. For stuff like the parameter layout code there are only so many ways you can skin that particular cat, and for a lot of things it’s likely that if you asked two coders locked in separate boxes to produce something trivial, they would churn out identical code (we’ll gloss over the morality of locking coders in boxes :joy:). Copilot has a setting to “block publicly available code”, but the FAQ is clear as mud (to me at least) about exactly what this means.

The issue with this analogy is that there is clear legal precedent saying that if you take any part of someone else’s copyrighted music and wish to distribute your creation, regardless of how much you change things, you have to get permission, and possibly/probably pay for the right to re-use it.

But then we get into the arguments about how the source material is not directly encoded into the models, and how humans learn from seeing others work.

Like I said: grey area.

1 Like