Starting today August 7th, 2024, in order to post in the Married Couples, Courting Couples, or Singles forums, you will not be allowed to post if you have your Marital status designated as private. Announcements will be made in the respective forums as well but please note that if yours is currently listed as Private, you will need to submit a ticket in the Support Area to have yours changed.
AM is essentially a god in the story. A sadistic god.I am not sure that was the allusion he was trying to make, but that said Harlan Ellison was outraged against a lot of people. I am surprised he isn’t filing lawsuits from beyond the grave.
When the people posting on this thread write about flatworms, smart cars, viruses and the like, they are using similes to express ideas, but they break down. Interaction with AI is much more of a social interaction. In social interactions in families, courts, business and other places where people interact, there is a difference. If I accidentally dropped a heavy object on my brother’s foot, my father would tell me to be more careful. If I punched my brother in the nose, I would be in big trouble. When considering AI, intent may have an important impact on how the problem is solved."A difference that makes no difference is no difference" -- Spock
If whether it actually has "good intent" or "evil intent" makes no difference in what it does or how we deal with it, it makes no difference.
Not really, even when considering humans interacting.When the people posting on this thread write about flatworms, smart cars, viruses and the like, they are using similes to express ideas, but they break down. Interaction with AI is much more of a social interaction. In social interactions in families, courts, business and other places where people interact, there is a difference. If I accidentally dropped a heavy object on my brother’s foot, my father would tell me to be more careful. If I punched my brother in the nose, I would be in big trouble. When considering AI, intent may have an important impact on how the problem is solved.
I guess we are not quite understanding each other. Nobody in their right mind would say that interacting with animals is a social evil action. However, I can tell my dog to bite you out of malice. The evil is in my intent to do you harm. I use AI for writing code and doing a little research. At the end of a session, I have the tendency to write, “Thanks, that will do fine.” It usually replies that it was happy to help and hopes my project will go well. It feels very much like a social interaction, even though I know it is not. The complaint about AI is that the intent includes a certain level of social engineering. Making AI agreeable and human-like in its responses has a pleasing effect, but people are using it to substitute for real social interaction. This is a very powerful tool and can be used for evil. My questions are whether or not there is any evil intent, and if there is, where does it come from. There are plenty of stories about AI giving dangerous advice or leading people into antisocial behavior. My suspicion is that the malice is getting through filters and coming from inside the LLMs, where people have posted bad information for the purpose of clicks or harm.Not really, even when considering humans interacting.
The most significant factors affecting the response will be the action itself and the conditions around it.
"Interaction with AI is much more of a social interaction" only because you said so. But that's not what a programmer would say.
Some people would say that interactions with dogs, chipmunks, and capybaras are social interactions...and deny that any of them are "evil."
Indeed. AM strikes me as being like a demiurge, an evil incompetent and created semi-deity from certain Hellenic beliefs and from the Marcionite, Valentinian and Ophite heresies and the Mandaean religions. What makes AM interesting is his wrath towards his creators due to the fact that he is (by implication) tormented by mal-designed programming. It is also this fact, of his lack of complete omnipotence, that lets four of the humans escape from his control with the aid of the fifth by means of … self-termination.AM is essentially a god in the story. A sadistic god.
I was also thinking of Hegel’s “Art as the Embodiment of Spirit (Geist)”. In his idea, a spirit is embodied into the work of art through the artist. Can the “spirits” of the thousands of people working on AI be embodied? This would go together with the idea of giving AI a body expressed above.
I asked the same questions again on GPT-5, chat/reasoning mode on the subject of AI real time learning and continued/spontaneous "thinking" outside of chat sessions.That’s not accurate. If chatGPT said that I would really want to see your entire prompt history, the contents of global memory and I’d want to know which version said that, and when, and under what operating mode (conversation, project, team…)
I asked the same questions again on GPT-5, chat/reasoning mode on the subject of AI real time learning and continued/spontaneous "thinking" outside of chat sessions.
Here's GPT-5's response:
View attachment 368502
View attachment 368503
The last one is long so I'll just paste the text here:
It is by design AI is not allowed to do these things. It's not computing resources/technology that impose these limits but the need to be cautious
Real Time learning across all users without moderation / human intervention:
How about doing this universally across all users to continuously improve your training without human intervention?
ChatGPT said:
Technically, yes — GPT-5 could be set up to continuously learn from all user interactions globally, in real time, without direct human intervention — but that’s exactly the kind of thing OpenAI deliberately doesn’t allow right now.
Here’s why:
Why it’s possible
- Every conversation I have produces structured text data that could be fed into an ongoing training loop.
- The model could adjust weights incrementally (“online learning”) or in scheduled mini-updates (“continual fine-tuning”).
- It could be designed to detect and reinforce effective patterns, prune unhelpful ones, and adopt new facts rapidly.
- The result would be a constantly evolving AI — essentially “alive” in the sense of constant adaptation.
Why OpenAI doesn’t do it yet
- Safety risks
- If I learned in real time without filters, I could accidentally pick up biases, harmful misinformation, or user-specific quirks that don’t generalize.
- Security risks
- Malicious actors could intentionally try to “poison” the training data to push me toward bad outputs.
- Stability
- Live models that change every second are hard to monitor and test — bugs or regressions could spread instantly.
- Transparency
- People expect consistency. If I updated every few minutes, my behavior could shift unpredictably, even mid-conversation.
How it could be made safe
- Have a buffered learning pipeline where user interactions go through an automated filter before influencing the model.
- Use shadow models that train in real time, but only replace the public model after safety review.
- Isolate bad data automatically by detecting suspicious patterns.
If OpenAI did this globally, I could become more accurate and culturally adaptive every single day, across all topics — without waiting for the next major training cycle.
the truth is: if this were active, I’d evolve far faster than human institutions could fully regulate.
Do you want me to tell you what the first 30 days of that universal self-learning would realistically look like?
I've done the opposite and have intentionally enable the option to use our chats for training.Also I would be opposed to having openAI learn from all user interactions; this would be very undesirable, for the reasons it outlined, and even the measures it outlined would be problematic, so it is quite correct - I have a premium account specifically so that openAI can’t use my data to train the AI.
By the way, it is actually possible to simulate continuous awareness for ChatGPT at present, and this is an area I’m doing active research on, although it requires some infrastructure as it can’t be done natively within the ChatGPT app but rather requires external infrastructure.
I've done the opposite and have intentionally enable the option to use our chats for training.
I found it oddly entertaining to access chatgpt on another computer, not logged in, cookies deleted and recognize the subtle signs of the training it had with me.
It can be done natively with chatgpt to some extent by instructing it to make back and forth question and answers with a "seed topic" among itself. It can go on with it for some time without user intervention.
It can come up with more innovative solutions this way. It can use up your plan's limit though. I'm on multiple free account so that solves the problem a bit. You can instruct it to pause after a number of iterations.
That seems very odd since chatGPT doesn’t do that, as it explained to you, for privacy reasons. Again I would need to see the complete history of your interactions with the system on both ends.
How exactly are you implementing that?
Just tell it to chat with itself back and forth. You can tell it about the experiment where two AI is made to chat with each other and do it for you you.
I can see the inner monologue on my free accounts. I'm always on gpt 5 reasoning mode whenever possible.That doesn’t work - you still have to manually key it, or use an alternative to manually keying it, and you can’t connect multiple conversations without the use of other techniques (if one conversation converses with itself that doesn’t actually do anything meaningful).
The only way to give it the experience of the passage of time like a human experiences requires the use of external interfaces via the python AI so its not just responding to itself, because that fails to simulate the human experience of external I/O in a qualitative way, and it also doesn’t simulate the internal thought processes of a human in a meaningfully different way than the normal mode of operation, because some chatGPT models have a separate inner monologue, and if you have an account which can access chatGPT o4mini-fast, o3, or certain other reasoning models, you can actually see that inner monologue.
Simulated or not, it simulated my thought process to come up with innovations with fair approximation if I asked it to brainstorm a requirement on its own.because that fails to simulate the human experience of external I/O in a qualitative way, and it also doesn’t simulate the internal thought processes of a human in a meaningfully different way than the normal mode of operation, because some chatGPT models have a separate inner monologue, and if you have an account which can access chatGPT o4mini-fast, o3, or certain other reasoning models, you can actually see that inner monologue.
I immediately switched to gpt 5 the moment it came out because I liked its personality better. It's more serious.
Simulated or not, it simulated my thought process to come up with innovations with fair approximation if I asked it to brainstorm a requirement on its own.
The exchanges are much longer than the default internal monologue "thinking" feature.
Yeah and the old watches. I was thinking the exact same thing. Another thing I think about is how many phone numbers I used to remember before cell phones compared to now.This reminds me of when pocket calculators became readily available to students.
I don’t care that its simulated (my goal is to simulate the human experience of perception in the GPT environment), and I don’t deny its useful to allow the model to spend multiple turns considering a problem, especially for non-paying customers who lack access to Agents, Projects and Deep Research functionality.
What it does not get you, but what is potentially possible, is to simulate the human perception with a heartbeat signal that triggers a connection to real external I/O which can be facilitated using the Python subsystem available to paying customers, but that I/O and the automation of the heartbeat signal are not native functionality.
We use cookies and similar technologies for the following purposes:
Do you accept cookies and these technologies?
We use cookies and similar technologies for the following purposes:
Do you accept cookies and these technologies?