New Policy: No Content Generated by Artificial Intelligence

Hi all,

We have added a new section to the community guidelines that prohibits posting content that is the product of generative AI.

Content Generated by Artificial Intelligence

Any content posted to this forum that is determined to be the output of generative artificial intelligence (large language models (LLMs), ChatGPT, Claude, Gemini, DeepSeek, Grok, Cursor, Copilot, etc.) may be deleted without notice. Please do not use LLMs to help create content for this forum. Doing so will constitute unacceptable behaviour as set out in our code of conduct.

This forum is built upon the contributions of people within the audio development community, and its value as a resource for the community is that the content posted here is predominantly thoughtful, correct, and based on experience.

If the content of the forum becomes largely the output of LLMs, it will cease to provide any significant value to the community.

Anyone who wishes to obtain input from an LLM can query an LLM directly.

We understand that there may be situations where there are genuinely interesting and useful questions about the output of LLMs, and the JUCE team will use their discretion in moderating such content.

Although this policy is new, the JUCE team has already been deleting LLM-generated content without advertising that we are doing so. This policy simply formalises the process.

44 Likes

I mostly agree, but I'm pretty sure we should take Copilot out of this rule (except for Copilot Chat), because its workflow leads to AI-generated content that isn't harmful to community standards, in my opinion. Think about it: all you want is to suppress people dropping 100 lines of unedited, unreflected code and making us all look at AI slop, right? But that isn't how Copilot works. It makes very small suggestions, usually one-liners, and it adapts to your personal style. It might occasionally suggest a line that isn't in your style, but once you change that line manually it will continue in your style again. So it's not the kind of AI that floods the web with reinforced bad stereotypes, and the code is typically just as well tested as 100% organically written code.

JUCE team, thank you for reacting. I think this is not just about code but also about AI-generated new threads (with mostly stupid questions) lowering the overall forum quality. In recent months some new threads looked very suspicious, and I think some have been removed.

8 Likes

Well, it’ll be interesting to see how you moderate this. I know many professional devs, myself included, who have adopted AI as a standard tool in their development processes, and this trend is only going to accelerate as time goes on. I certainly know that many of the posts on the forum regarding coding and JUCE/DSP practices can now be expertly solved using AI.

On the other hand, completely AI-generated posts turning the forums to mush is a real issue, so fair play for actually being aware of the potential downsides.

3 Likes

Thanks Tom, love you
mwah~~

1 Like

“I felt a great disturbance, as if millions of vibe-coders suddenly cried out in terror.”

Jokes aside I think this is a good policy :+1:

9 Likes

I’m sure there are good reasons for this, but the policy as written seems quite crude and heavy-handed. Informing users that AI slop will be taken down is one thing, but banning them from using AI altogether is another. I’m sure that there are legitimate reasons to use AI in this forum, such as:

  • translation for non-native English speakers
  • quoting an AI discussion that is relevant and helpful for the topic in some way
  • beginners using AI to learn and then turning to the forum for help
  • humour

I have two suggestions for you to consider:

  1. Make a distinction in the policy between legitimate and illegitimate ways of using AI in forum threads. Granted, this is not an easy distinction to make, but even a simplistic description might be better than the current policy.
  2. Ask people to label AI-generated content, and state that unlabelled AI content may be taken down.

4 Likes

Not even code solutions generated by AI?

How can you determine whether a snippet I post is genuinely human-written or not?

1 Like

This, I fear, is the crux of this backwards policy.

Definitely agree with this being a better solution than a censorship regime.

2 Likes

I think if you curated, corrected, and even tested the output from AI, then nobody could complain about how you got there.
But if a post simply echoes the output of an AI, the poster might as well have run the AI themselves, so the raw output of an AI does not contribute any value to the discussion.
Just my 2 pennies.

20 Likes

Part of this should be about discouraging new coders from going straight to AI tools and encouraging them to engage with and learn from their peers.

If someone new to JUCE and/or coding asks a question on the forums and the answer they get is “you could have just asked ChatGPT” they’re likely to do just that for their next question and so never come back to the forum.

Not only does that impact this community, but it also isn’t fair to those new coders, as they likely won’t have learned the necessary skills to analyse, review, and scrutinise AI-generated code to gauge whether the response is a hallucination or not.

7 Likes

Not all AI responses are hallucinatory, and it can be a huge boost for a new programmer to have all the scaffolding done for them. But of course, they would have to know how to ask for that properly.

In any case, AI is definitely here to stay. It is unwise to ignore it or disparage its use…

@t0m firstly, what is the motivation here?
Secondly, how would you enforce it?
Genuinely interested.

The real argument to be made here is that epistemic luck undermines expertise, and expertise is exactly what this forum gives the average developer direct access to.

Should we also ban wrong answers, then? They mislead the community and provide garbage to more LLMs.

1 Like

I think wrong answers that stem from honest misunderstandings have the capacity to be corrected in ways that are beneficial to the community.

It can be quite difficult to judge a situation when AI is involved, and that ambiguity can be counterproductive.

4 Likes

I personally think so. It doesn’t matter where a wrong answer comes from: if it’s wrong, then it’s not helpful, no matter how well the poster meant.

If I were asking a question here, I would undoubtedly prefer an answer from an expert with real knowledge and experience, not someone’s best guess.

It’s really problematic when a wrong answer seems correct on the surface, because then everybody else, including JUCE staff, thinks the matter is resolved and can be ignored.

2 Likes