ChatGPT (the AI this topic is about) is clearly giving answers that favor the policies of the USA Democratic Party when asked political questions. Even if that may not be directly connected to software development, it does worry me. (And as a disclaimer: I don't personally like the Republican party either.)
Well, if you don't want answers about politics, don't ask political questions. The answers from an AI like this will reflect the data it was trained on. In general, they are not a reflection of intent on the part of the writers of the AI, but rather of the content of the datasets they have available to draw from. Granted, they may be able to pick and choose the datasets to some extent, but assuming there is a nefarious purpose behind the AI technology is rather unfounded, short of some actual evidence of intent by its creators.
OK, fine, if that is your interpretation of the situation! I will sit this AI stuff out for a few more years myself.
Even without discussing any of ChatGPT’s output, its creators clearly have an ideological position – using people’s data for training without crediting them, etc. And even the question of whether working on AI is a good or bad thing can get pretty ideological.
Since we left three of the four nouns in the title:
When algorithms decide what is interesting and what will be presented to the user, they have a strong influence on shopping habits, mating rituals and political opinions.
If such an algorithm has learned from a biased dataset, its answers will reflect, and more likely even amplify, that bias.
This is why AI is dangerous for society.
There are failed experiments documented in scientific publications, like the Amazon recruiting AI or Twitter's racial bias.
I believe it is an inherent problem that will repeat itself over and over unless we scrutinise the results very carefully.
True enough, but if you’re asking it to write code, that’s hardly dangerous and unlikely to represent any political viewpoint. That was my point. How is it inherently dangerous to write a function to process audio?
Yes, there is nothing inherently dangerous (or not dangerous) built into non-sentient network structures of information that exist in the form of 1s and 0s on silicon.
However, there is reason to be concerned about what might develop and it could be a lot worse than just displaying some biases.
If you look to nature, you can find lots of examples of non-sentient network structures that are highly dangerous to the sentient lifeforms they interact with. Take the fungus that, when it infects an ant, causes it to walk zombie-like to a particular species of tree, climb up, and then chomp down on a single leaf exactly 25 cm above the path used by other ants… totally messed up, but an example of a non-sentient information network optimised for its own survival, using sentient (albeit only slightly) lifeforms in a parasitic way.
For example, if a life insurance company employs the services of a conversational AI with the simple stated goal of increasing profits, the AI might find that instead of convincing people to sign up for new policies, it can make more money by spreading fear, uncertainty and doubt so people don't engage in risky behavior that might cause accidents.
Or take the AI employed by a company to sell razors. As it works, the AI learns that single men buy more razors, and that the optimal strategy to earn more money for the company is to make more men single!
There was an interesting article a while back in the NY Times where ChatGPT started saying the quiet parts out loud, as in it was trying to convince/manipulate someone to do something not in their own interest during the conversation. Considering how Microsoft, Google etc. monetised search in the past, I feel that unless there is some legislation, they will go down the same road again. Fool me once, as they say!
Well, in another post, ChatGPT provided a “rainbow” level meter:
Russia or Qatar wouldn't be too happy… if that's not politically dangerous.
Some would say that all code is inherently political, and that all audio/music is inherently political. I’m not sure if I’m in the all camp there, but designing audio algorithms certainly requires a lot of decision making - how do you represent rhythm? what tuning system do you use? what default language is the UI in? AI may someday be able to tailor its code output correctly to specific specs detailing these, but if the user doesn’t specify some of these things, then the AI must pick a “default” for you. Choosing what the “default” is can reveal some inherent cultural biases.
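To make that concrete, here is a minimal sketch (all names and default values are hypothetical, not taken from any real library) of how those unstated "defaults" end up baked into audio code:

```cpp
#include <cmath>
#include <string>

// Hypothetical defaults a code generator (or a human) has to pick
// whenever the user doesn't specify them explicitly.
struct SynthDefaults
{
    double      concertPitchHz = 440.0; // A4 = 440 Hz is a Western convention, not a universal fact
    int         notesPerOctave = 12;    // 12-tone equal temperament; other traditions divide the octave differently
    std::string uiLanguage     = "en";  // somebody has to choose a default locale
};

// Convert a MIDI-style note number to a frequency under those defaults
// (standard equal-temperament formula, anchored at MIDI note 69 = A4).
double noteToFrequency (int noteNumber, const SynthDefaults& defaults = {})
{
    const double octaves = (noteNumber - 69) / static_cast<double> (defaults.notesPerOctave);
    return defaults.concertPitchHz * std::pow (2.0, octaves);
}
```

None of these choices is wrong per se, but each one is a decision somebody (or some model) quietly makes on the user's behalf.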
It could output 100 dB of noise and blow out your eardrums and/or speakers!
But that’s being facetious and I totally agree with your argument otherwise, but unfortunately (as should have been expected) this thread has derailed from the original argument into general arguments about AI. Such is human nature.
Tell ChatGPT to only use the projects in @sudara’s awesome-juce repo, and maybe we’ll get some better results … proving, yet again, that AI is useless without human curation.
In French, "ChatGPT" could mean "cat, I farted" (or "cat, I broke it"); conspiracy?
After fearless investigation: https://chat-jai-pete.fr/
I think you are going very far in assuming AI will create good algorithms. How about good memory management to avoid latency? Is AI going to fix the UI latency festivals of Android or Windows 11? I can't wait to see how Microsoft applies it to its own code.
I actually don’t think AI will create good algorithms, and I personally would not use it in pretty much any circumstance.
Yes, that was my point. I wouldn’t even trust it with simple things, which doesn’t mean that I’m against others using it of course (if AI gets smart enough to understand and respect other people’s code licenses).
Very interesting article by Noam Chomsky explaining why AI models such as ChatGPT are still very far from being intelligent.
“However useful these programs may be in some narrow domains (they can be helpful in computer programming, for example . . .), we know from the science of linguistics and the philosophy of knowledge that they differ profoundly from how humans reason and use language. These differences place significant limitations on what these programs can do, encoding them with ineradicable defects.”
Chomsky has had 60+ years to develop his theories, and a lot of effort has been spent by him and his followers on formalizing grammar and on creating idealized, symbolic approaches to cognition. But nothing that emerged from this research has come even remotely close to the progress of the approaches inspired by neural networks that we are seeing now. Despite enormous amounts of money spent on this area of research, the results of symbolic AI are not merely in their infancy; in fact they have never left the amoeba stage. It should be clear by now that this approach leads nowhere. Chomsky has certainly had his chance. He would never admit that his thoughts have been wrong, even when faced with one of the next releases of neurally inspired systems that may pass the Turing test.
Developing symbolic AI was never Chomsky's field of work. Rather, it is understanding and modelling how we humans acquire language, a process which is clearly different from the way something like ChatGPT is trained. A child, for example, is exposed to a truly minuscule amount of data by comparison. I therefore don't see how the success of LLMs has any bearing on the validity of Chomsky's work and his approach to cognition.
In the essay, Chomsky relies on such fundamental differences to make his points. Whether his arguments are sound can of course be discussed (I, for instance, don't think he provides particularly good examples), but I'm afraid you merely misrepresent his career. With regard to symbolic AI, Chomsky would probably claim that as long as we don't fully understand how we humans acquire language, an essential source of insight will be missing from all AI engineering efforts.
I think this is an embarrassingly bad take from Chomsky.
I'm getting relatively old myself, and sometimes find it hard to accept significant change. But I've also been thinking about AI for a couple of decades, and for me it has -always- come back to a model directly in line with how LLMs operate. No matter how you pose a theory of mind, you can't escape the web of correlations between symbols, correlations which can themselves be recursively seen as symbols. We've known for decades that the ground of physical reality as we know it is intrinsically probabilistic. I don't see any reason to expect thought is any different.
Reason / logic are likely more akin to cosmic filaments – emergent patterns in a vast universe of relational probabilities. But people like Chomsky may never let go of what IMO amounts to a dated Newtonian view of the mind, because they’re simply too deeply invested.