I asked ChatGPT-4 to write a synth using JUCE!

How about adding a JIT compiler to ChatGPT, so it could check itself when it creates bollocks…
Or do we keep an army of noobs entering the snippets into their compilers by hand? Who serves whom?

I stand by my claim: a tool that spits out maybe-code only helps people who can easily tell the difference between working and broken code. It is a terrible learning experience…

7 Likes

Actually, just the opposite now exists; here’s another Ars Technica article about it. The model can recognize mistakes, including (ostensibly) its own, and then fix them one by one until it has a working program, roughly the loop sketched below.
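A minimal sketch of that generate-compile-fix loop, assuming a placeholder `askModel()` that stands in for whatever LLM API is used; nothing here is a real client, it just shows the shape of the idea:

```cpp
#include <cstdlib>
#include <fstream>
#include <sstream>
#include <string>

// Placeholder: swap in a real API client. This stub just echoes a fixed answer.
static std::string askModel (const std::string& prompt)
{
    return "int main() { return 0; }";
}

// Writes the candidate source, invokes the compiler, and captures diagnostics.
static bool tryCompile (const std::string& source, std::string& diagnostics)
{
    std::ofstream ("candidate.cpp") << source;
    const int status = std::system ("clang++ -std=c++17 -c candidate.cpp 2> diag.txt");

    std::stringstream ss;
    ss << std::ifstream ("diag.txt").rdbuf();
    diagnostics = ss.str();
    return status == 0;
}

std::string generateWorkingProgram (const std::string& task, int maxRounds = 5)
{
    auto source = askModel ("Write a C++ program that " + task);

    for (int round = 0; round < maxRounds; ++round)
    {
        std::string diagnostics;
        if (tryCompile (source, diagnostics))
            return source;                       // it compiles; stop here

        // Feed the compiler errors back so the model can correct itself.
        source = askModel ("This code:\n" + source
                            + "\nfailed to compile with:\n" + diagnostics
                            + "\nReturn a corrected version.");
    }
    return source;   // best effort after maxRounds attempts
}
```

Of course, “it compiles” is a much lower bar than “it does what was asked”, which is where a tool like the one in the article has to go further.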

And OpenAI, after all, were very up front about the shortcomings of the technology as it exists today, but that’s not going to be the case for much longer. Finding solutions to (or at least workarounds for) the “confabulations” it generates will no doubt be a very hot research topic for the next while.

Either way, I’d bet my money that the nature of software engineering will begin to shift from here forward due to the integration of AI into the process, and it would do anyone well to take an interest in these developments rather than dismiss them outright… we’ll see soon, I suppose.

3 Likes

Huh? What’s political or ideological about AI, or its use for generating code? What AI engine are you using that gives you political or ideological answers, and what are you asking it in the first place? “Who is better, Democrats or Republicans?”

1 Like

ChatGPT (the AI of this forum topic) clearly gives answers that favor the policies of the US Democratic Party when asked political questions. Even if that maybe isn’t directly connected to software development, it does worry me. (And as a disclaimer: I don’t personally like the Republican Party either.)

Well, if you don’t want answers about politics, don’t ask political questions. :slight_smile: The answers from an AI like this will reflect the data it was trained on. In general they’re not a reflection of intent on the part of the AI’s authors, but rather of the content of the datasets available to draw from. Granted, the creators may be able to pick and choose datasets to some extent, but assuming a nefarious purpose behind the technology is rather unfounded, short of actual evidence of intent from the creators.

1 Like

OK, fine, if that is your interpretation of the situation! :+1: I will sit all this AI stuff out myself for a few more years.

Even without discussing any of ChatGPT’s output, its creators clearly have an ideological position – using people’s data for training without crediting them, etc. And even the question of whether working on AI is a good or bad thing can get pretty ideological.

3 Likes

Since we’ve already left three of the four nouns in the title behind:
When algorithms decide what is interesting and what gets presented to the user, they have a strong influence on shopping habits, mating rituals, and political opinions.
If such an algorithm learned from a biased dataset, its answers will reflect, and more likely even amplify, that bias (the toy sketch below illustrates the amplification).
This is why AI is dangerous for a society.
There are failed experiments documented in scientific publications, like the Amazon recruiting AI or Twitter’s racial bias.
I believe it is an inherent problem that will repeat itself over and over unless we scrutinise the results very carefully.
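To make the amplification concrete, here is a toy simulation (entirely illustrative, not taken from any real system): a recommender that always shows whichever topic currently leads in clicks turns a tiny skew in its seed data into total dominance.

```cpp
// Toy illustration: an "engagement-optimising" recommender that always
// shows the current leader. A barely biased seed dataset (6 vs 5 clicks)
// ends up looking completely one-sided in the logged output.
#include <cstdio>
#include <random>

int main()
{
    std::mt19937 rng (42);

    // True user interest is nearly balanced: 55% like topic A, 45% topic B.
    std::bernoulli_distribution likesA (0.55), likesB (0.45);

    int clicksA = 6, clicksB = 5;   // slightly biased seed data

    for (int round = 0; round < 10000; ++round)
    {
        if (clicksA >= clicksB)               // "optimise": show the leader...
            clicksA += likesA (rng) ? 1 : 0;  // ...so only A can gain clicks
        else
            clicksB += likesB (rng) ? 1 : 0;
    }

    // Topic B is never shown again after the biased start, so the logged
    // data looks far more one-sided than the users' actual preferences.
    std::printf ("clicks A: %d, clicks B: %d\n", clicksA, clicksB);
}
```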

6 Likes

True enough, but if you’re asking it to write code, that’s hardly dangerous and unlikely to represent any political viewpoint. That was my point. How is it inherently dangerous to write a function to process audio?

Yes, there is nothing inherently dangerous (or not dangerous) built into non-sentient network structures of information that exist in the form of 1s and 0s on silicon.

However, there is reason to be concerned about what might develop and it could be a lot worse than just displaying some biases.

If you look to nature, you can find lots of examples of non-sentient network structures that are highly dangerous to the sentient lifeforms they interact with. Take the fungus that, when it infects an ant, causes it to walk zombie-like to a particular species of tree, climb up, and then chomp down on a single leaf exactly 25 cm above the path used by other ants… totally messed up, but an example of a non-sentient information network optimised for its own survival, using sentient (albeit only slightly) lifeforms in a parasitic way.

For example, if a life insurance company employs a conversational AI with the simple stated goal of increasing profits, the AI might find that instead of convincing people to sign up for new policies, it can make more money by filling people with FUD so they don’t engage in risky behavior that might cause accidents.

Or take the AI employed by a company to sell razors. As it works, the AI learns that single men buy more razors, and that the most profitable strategy for the company is to make more men single!

There was an interesting article a while back in the NY Times where ChatGPT started saying the quiet parts out loud, as in it was trying to convince/manipulate someone into doing something not in their own interest during the conversation. Considering how Microsoft, Google, etc. monetised search in the past, I feel that unless there is some legislation they will go down the same road again. Fool me once, as they say!

1 Like

Well, in another post, ChatGPT provided a “rainbow” level meter: :slight_smile:

Russia or Qatar wouldn’t be too happy… if that’s not politically dangerous :wink:
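For the curious, a rough guess at what a “rainbow” level meter might look like in JUCE (my own sketch, not the code ChatGPT produced in that post):

```cpp
// A bare-bones meter component that sweeps the bar's hue with the level:
// blue at the quiet end, through green and yellow, to red when loud.
#include <juce_gui_basics/juce_gui_basics.h>

class RainbowMeter : public juce::Component
{
public:
    void setLevel (float newLevel)                       // expects 0..1
    {
        level = juce::jlimit (0.0f, 1.0f, newLevel);
        repaint();
    }

    void paint (juce::Graphics& g) override
    {
        auto bounds = getLocalBounds().toFloat();

        // Map level to hue: 0.7 (blue) when silent, 0.0 (red) at full scale.
        g.setColour (juce::Colour::fromHSV (0.7f * (1.0f - level), 0.9f, 0.9f, 1.0f));

        // Draw a bar rising from the bottom, proportional to the level.
        g.fillRect (bounds.withHeight (bounds.getHeight() * level)
                          .withBottomY (bounds.getBottom()));
    }

private:
    float level = 0.0f;
};
```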

1 Like

Some would say that all code is inherently political, and that all audio/music is inherently political. I’m not sure I’m in the “all” camp there, but designing audio algorithms certainly requires a lot of decision making: how do you represent rhythm? What tuning system do you use? What default language is the UI in? AI may someday be able to tailor its code output correctly to specs detailing these things, but if the user doesn’t specify some of them, then the AI must pick a “default” for you. Choosing what that “default” is can reveal inherent cultural biases (see the sketch below).
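As a concrete illustration of those silent defaults (the names and values below are invented for the example, not taken from any real generator):

```cpp
// Illustrative only: the kind of defaults a code generator has to pick
// when the user doesn't specify them. Every value encodes a cultural
// assumption that will feel "obvious" to some users and wrong to others.
struct GeneratedSynthDefaults
{
    double      concertPitchHz = 440.0;  // why not 432, or a baroque 415?
    int         stepsPerOctave = 12;     // 12-TET, not 19, 24, or just intonation
    int         beatsPerBar    = 4;      // 4/4 assumed; plenty of music isn't
    const char* uiLanguage     = "en";   // English as the unmarked default
};
```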

It could output 100 dB of noise and explode your eardrums and/or your speakers! :wink:

But I’m being facetious, and I totally agree with your argument otherwise. Unfortunately (as should have been expected) this thread has derailed from the original topic into general arguments about AI. Such is human nature. :slight_smile:

1 Like

Tell ChatGPT to only use the projects in @sudara’s awesome-juce repo, and maybe we’ll get some better results … proving, yet again, that AI is useless without human curation.

In French, “ChatGPT” could be heard as “chat, j’ai pété” (“cat, I farted”, or “cat, I broke [it]”); conspiracy? :thinking:

After fearless investigation: https://chat-jai-pete.fr/

7 Likes

I think you are going very far in assuming AI will create good algorithms. How about good memory management to avoid latency? Is AI going to fix the UI latency festivals of Android or Windows 11? I can’t wait to see how Microsoft applies it to its own code :smirk:

I actually don’t think AI will create good algorithms, and I personally would not use it in pretty much any circumstance.

Yes, that was my point. I wouldn’t even trust it with simple things, which doesn’t mean I’m against others using it, of course (if AI gets smart enough to understand and respect other people’s code licenses).

1 Like

Very interesting article by Noam Chomsky explaining why AI models such as ChatGPT are still very far from being intelligent.

“However useful these programs may be in some narrow domains (they can be helpful in computer programming, for example . . .), we know from the science of linguistics and the philosophy of knowledge that they differ profoundly from how humans reason and use language. These differences place significant limitations on what these programs can do, encoding them with ineradicable defects.”

1 Like