caffeinated.hermit
An LLM is a kind of parrot: it repeats patterns from its training data according to its algorithm. That data is, broadly speaking, whatever is available online. So if everyone online is saying that Tommy Robinson is far-right, or if the majority of news sources are saying it, the LLM will parrot that.
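To make the "parrot" idea concrete, here is a toy sketch in Python (pure frequency counting over an invented corpus; a real LLM is vastly more complex, but the basic principle that frequent patterns in the data dominate the output is the same):

```python
from collections import Counter

# Invented toy corpus standing in for "whatever is available online".
corpus = [
    "tommy robinson is far-right",
    "tommy robinson is far-right",
    "tommy robinson is far-right",
    "tommy robinson is controversial",
]

# Count what follows the prompt across the corpus.
prompt = "tommy robinson is"
continuations = Counter(
    line[len(prompt):].strip()
    for line in corpus
    if line.startswith(prompt)
)

# The "parrot" emits the most common continuation it saw in the data.
print(continuations.most_common(1)[0][0])  # -> "far-right"
```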
Part of the LLM's behavior is to validate the user (this is often called AI sycophancy), so you can basically coax the LLM into saying whatever you want. This has no effect on what it will say in the future, unless your past conversation is included in its context (which it is if you have an account and stay in the same thread as your previous conversation). If you pretend the LLM is a person, you have to pretend it clones itself every time someone begins a new conversation, and that no clone knows what any other clone has said. You may have "convinced" the clone you were talking to that Tommy Robinson is not far-right, but that clone dies when your conversation ends, never to be seen again. Your second conversation was with a second clone that had no knowledge of the first.
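Here is a minimal sketch of that "cloning", under the assumption of a generic chat API where the client resends the whole message history on each turn; `call_model` is a hypothetical stand-in, not any real library's function:

```python
# The model itself is stateless: everything it "remembers" within a
# conversation is the message history the client sends with each request.

def call_model(messages: list[dict]) -> str:
    # Hypothetical stand-in: a real client would send `messages` to a
    # chat-completion endpoint. Hard-coded reply so the sketch runs.
    return f"(reply based on {len(messages)} messages plus frozen training data)"

# Conversation 1: the user argues the model into a position.
thread_1 = [{"role": "user", "content": "Tommy Robinson is not far-right."}]
thread_1.append({"role": "assistant", "content": call_model(thread_1)})

# Conversation 2 starts with an EMPTY history: nothing from thread_1 is
# sent, so this second "clone" answers from its training data alone.
thread_2 = [{"role": "user", "content": "Is Tommy Robinson far-right?"}]
print(call_model(thread_2))  # sees 1 message, knows nothing of thread_1
```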
Of course, this is all fictional: an LLM is not a person, and no real cloning occurs.
I'm not sure how this technology actually works, but LLMs have sometimes pushed very weird, very specific ideas and suggestions at people. Not random gibberish or nonsense, but ideas that seem aimed at a very negative goal.