Should there be a DSP folder inside juce_dsp?

G’day all,

Google Gemini is helping me learn to use JUCE and we’ve struck a problem. Gemini is adamant that there should be a folder called “dsp” inside the folder JUCE/modules/juce_dsp/

It says it can see the folder when it downloads version 8.0.7, and that I am downloading a corrupt zip file.

Is Gemini confused or am I doing something wrong?

Cheers.

Tony

These are the correct contents; there is no dsp folder there.
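
For reference, the top level of JUCE/modules/juce_dsp in a recent release looks roughly like this (checked against the 8.x sources; exact file lists can vary slightly between versions, but there is no dsp/ subfolder):

```text
juce_dsp/
├── containers/
├── filter_design/
├── frequency/
├── maths/
├── native/
├── processors/
├── widgets/
├── juce_dsp.cpp
└── juce_dsp.h
```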

1 Like

Thanks so much for the instant response!

This is what Gemini said …

Now it’s admitted it was wrong. Amazing technology to play with. At the very least it introduced me to JUCE (24 hours ago I had no idea it existed) 🙂

Don’t rely on AI for coding help.

8 Likes

Lesson learned 😄
It certainly shows that while AI can generate code, if you get stuck you’ll be reliant on it for a fix, without knowing how the code got there in the first place. As an aid in teaching you how to create something yourself, though, maybe that’s a more useful application.

There is no RMSLevelSmoother either! I wonder if there is a fork of JUCE out there that has this stuff (although I couldn’t find anything via Google or GitHub). What version of Gemini are you using?
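
For anyone who lands here chasing the same hallucination: there is no RMSLevelSmoother anywhere in JUCE, but the ingredients to build something like it yourself do exist, namely juce::AudioBuffer::getRMSLevel() and juce::LinearSmoothedValue. A minimal sketch of the kind of thing Gemini presumably imagined (the SmoothedRmsMeter class name and the 50 ms ramp time are my own invention):

```cpp
#include <juce_audio_basics/juce_audio_basics.h>

// Hypothetical helper, not part of JUCE: smooths per-block RMS readings
// so a level meter doesn't jitter. Uses only real JUCE APIs.
class SmoothedRmsMeter
{
public:
    void prepare (double sampleRate)
    {
        smoothed.reset (sampleRate, 0.05); // 50 ms ramp (arbitrary choice)
    }

    void process (const juce::AudioBuffer<float>& buffer)
    {
        // getRMSLevel() is a real AudioBuffer member function
        auto rms = buffer.getRMSLevel (0, 0, buffer.getNumSamples());
        smoothed.setTargetValue (rms);
        smoothed.skip (buffer.getNumSamples()); // advance the ramp by one block
    }

    float getLevel() const noexcept { return smoothed.getCurrentValue(); }

private:
    juce::LinearSmoothedValue<float> smoothed;
};
```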

1 Like

Gemini admitted that this also didn’t exist. Once I showed it what actually did exist, it worked from that instead and produced a working plugin. It had been 100% convinced that the file existed, along with the folders that supposedly contained it, none of which did. It apologised and said it had a flaw in its memory which it would fix. It never mentioned the file again, but it did cost me an hour of downloading different versions of JUCE. ☺️

Gemini Pro Preview 2.5 05-06 was/is the version I’m using.

AI cannot write code - it has no knowledge.

It presents you with answers based on information that it scraped from the web, which means they can include incorrect or flaky code.

If you are reasonably up to date with audio and coding discussions like those here on this forum, it can be funny to read “answers” from chatbots that are based on these same discussions, answers which can be just (very) wrong. If you report this back to the bot, you will get replies like “Good catch!”, which is a confirmation that it presented you with nonsense in the first place.

Better to use your own human capacities for learning, creativity and invention!

Cheers

7 Likes

This. LLMs do not “understand” anything. The only thing an LLM can do is predict what words you most expect to see in its response. So it makes sense that it spits out something about a “DSP folder”, “RMSLevelSmoother”, etc. It is not even intelligently looking up info about JUCE from the web; it is literally just going “I got asked about audio plugins, so spit out some word salad containing DSP-related terms”, and maybe 20% of the time it happens to be somewhat correct.

3 Likes

Just my 2 cents.

If you don’t have your own meta-theory of language, LLMs aren’t fun to use.

If you do, you’ll notice that your intuition and predictions about the system and its actions get better over time, and it gets more difficult to communicate what you’re learning in a concrete manner to people who don’t.

Sure, the LLM does not understand anything. But that doesn’t mean that it’s useless, or just a simple automation tool. You’re projecting your understanding onto the LLM, which constructs a high-dimensional feature space over the data. This paradigm lets us navigate unknown spaces.

We still don’t understand how language works. What makes us think that we understand the nature of the data being pattern matched?

2 Likes

I don’t actually think they’re capable of thought. I would argue instead that facets of question-answer navigation are mathematically opaque to us, and therefore it would be very easy to enter a semantic argument if we entertained them irresponsibly.

If the purpose of saying that they are thinking and have knowledge is just so that we can justify using them in a world of experts… we can cut the premise entirely and instead say:

difficult-to-understand systems are more useful in the hands of those who don’t feel lost

Experts are still necessary. It just makes more sense to use a heavy wrench than it does to explain to someone several feet away why you’re using it to bash a nail instead of a light hammer.

1 Like

LLMs are truly impressive and can be very useful, but over and over again I have seen them make extremely obvious mistakes. If they indeed are “thinking” (in a very broad, non-sentient sense of the word), it’s certainly not in the way we humans do.

This should be even more clear in the case of music. I’m amazed that people use LLMs to create music and expect more than a soulless recombination of existing works. LLMs lack phenomenological experience; they simply cannot “feel” anything. An LLM could never have invented music. It can put together a piece of music randomly, yes, but it cannot experience or understand it the way we do.

If you are a JUCE expert, there is a whole series of mistakes which you, as a human, simply will not make.

Whether LLMs think depends, of course, on how we define thinking, but it is clear to me that they “do things differently” to humans. I think this should at least temper the enormous trust people place in LLMs.

This describes it pretty much perfectly. A neural network is a gigantic linear-algebraic function that was initially guessed (and later semi-automatically adjusted) using a huge number of data points where you know both the inputs and the correct output of the function.
In the end, a new, previously unseen set of inputs is hopefully mapped to the correct output. Whether the training actually succeeded in capturing the correct patterns (for example, sentence structure with nouns, verbs, adjectives and so on) is not only uncertain, it’s impossible to prove.
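
To make the “gigantic function” idea concrete, here is a toy two-layer network in C++: just multiply-adds and a nonlinearity, with weights that training would normally adjust (every number below is a made-up placeholder, not a trained value):

```cpp
#include <algorithm>
#include <cstdio>

// Toy version of the "big function" view: out = W2 * relu(W1 * in).
// A real LLM performs the same kind of arithmetic, just with billions
// of learned parameters instead of this handful of made-up numbers.
int main()
{
    float in[2]    = { 0.5f, -1.0f };  // input vector
    float W1[3][2] = { { 0.1f, 0.4f }, { -0.2f, 0.3f }, { 0.7f, -0.5f } };
    float W2[3]    = { 0.6f, -0.1f, 0.2f };

    float hidden[3] = {};
    for (int i = 0; i < 3; ++i)                  // hidden = relu(W1 * in)
    {
        for (int j = 0; j < 2; ++j)
            hidden[i] += W1[i][j] * in[j];
        hidden[i] = std::max (0.0f, hidden[i]);  // the nonlinearity
    }

    float out = 0.0f;                            // out = dot(W2, hidden)
    for (int i = 0; i < 3; ++i)
        out += W2[i] * hidden[i];

    std::printf ("output: %f\n", out);           // fixed numbers in, fixed number out
}
```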

Doing it this way to understand and output natural language just turned out to be easier than creating an algorithm that you have to teach what a word is, how a sentence is constructed, etc. And now I keep seeing comments like “it admitted it was wrong”, implying some sort of self-consciousness, even though nobody has any idea what was going on inside the machine when those words came out.

1 Like

This thread only exists because an LLM repeatedly insisted on several incorrect things about the JUCE framework to a beginner. So the options are: either LLMs are conscious and have developed a twisted sense of humor, or they don’t in fact “know” anything.

If you want to invest your time in prompt engineering to try to reduce the occurrence of hallucinations in the semi-random word generator’s output, go right ahead. But I’d rather be a software engineer than a prompt engineer.

8 Likes

Amen to that, Ben.

5 Likes

I don’t see the need for personal attacks just because other people do not have the same faith in LLMs as you.

(For what it’s worth, randomness plays a large part in the sampling process that LLMs use to generate their answers. That’s not to say randomness is bad; it’s an essential process even in nature.)
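
To illustrate that point: given the model’s raw scores for each candidate next token, a typical sampler converts them to probabilities and draws one at random, with a “temperature” knob controlling how random. A toy sketch (the token list and scores are made up):

```cpp
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <random>
#include <string>
#include <vector>

int main()
{
    // Made-up next-token candidates with made-up model scores (logits)
    std::vector<std::string> tokens { "folder", "class", "banana" };
    std::vector<float>       logits { 2.0f, 1.5f, -1.0f };
    float temperature = 0.8f; // lower = more deterministic, higher = more random

    // Softmax with temperature: p[i] = exp(logit[i] / T) / sum of exp(logit[j] / T)
    std::vector<float> probs (logits.size());
    float sum = 0.0f;
    for (std::size_t i = 0; i < logits.size(); ++i)
        sum += probs[i] = std::exp (logits[i] / temperature);
    for (auto& p : probs)
        p /= sum;

    // Draw one token at random according to those probabilities:
    // run it twice and you may well get two different answers.
    std::mt19937 rng (std::random_device {}());
    std::discrete_distribution<std::size_t> pick (probs.begin(), probs.end());
    std::printf ("sampled token: %s\n", tokens[pick (rng)].c_str());
}
```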

3 Likes

The only sheer emotion and hyperbole in this thread is from you.

5 Likes

Interesting to place so much faith & trust in a technology when you don’t understand (or cannot accept) how it works.

2 Likes