• Starting today August 7th, 2024, in order to post in the Married Couples, Courting Couples, or Singles forums, you will not be allowed to post if you have your Marital status designated as private. Announcements will be made in the respective forums as well but please note that if yours is currently listed as Private, you will need to submit a ticket in the Support Area to have yours changed.

AI is not the problem. We are

RDKirk

Alien, Pilgrim, and Sojourner
Site Supporter
Mar 3, 2013
42,142
22,742
US
✟1,732,532.00
Faith
Christian
Marital Status
Married
I am not sure that was the allusion he was trying to make, but that said Harlan Ellison was outraged against a lot of people. I am surprised he isn’t filing lawsuits from beyond the grave.
AM is essentially a god in the story. A sadistic god.
 
  • Agree
Reactions: The Liturgist
Upvote 0

Jerry N.

Well-Known Member
Sep 25, 2024
677
236
Brzostek
✟41,039.00
Country
Poland
Gender
Male
Faith
Messianic
Marital Status
Married
"A difference that makes no difference is no difference" -- Spock

If whether it actually has "good intent" or "evil intent" makes no difference in what it does or how we deal with it, it makes no difference.
When the people posting on this thread write about flatworms, smart cars, viruses and the like, they are using similes to express ideas, but they break down. Interaction with AI is much more of a social interaction. In social interactions in families, courts, business and other places where people interact, there is a difference. If I accidentally dropped a heavy object on my brother’s foot, my father would tell me to be more careful. If I punched my brother in the nose, I would be in big trouble. When considering AI, intent may have an important impact on how the problem is solved.
 
Upvote 0

RDKirk

Alien, Pilgrim, and Sojourner
Site Supporter
Mar 3, 2013
42,142
22,742
US
✟1,732,532.00
Faith
Christian
Marital Status
Married
When the people posting on this thread write about flatworms, smart cars, viruses and the like, they are using similes to express ideas, but they break down. Interaction with AI is much more of a social interaction. In social interactions in families, courts, business and other places where people interact, there is a difference. If I accidentally dropped a heavy object on my brother’s foot, my father would tell me to be more careful. If I punched my brother in the nose, I would be in big trouble. When considering AI, intent may have an important impact on how the problem is solved.
Not really, even when considering humans interacting.

The most significant factors affecting the response will be the action itself and the conditions around it.

"Interaction with AI is much more of a social interaction" only because you said so. But that's not what a programmer would say.

Some people would say that interactions with dogs, chipmunks, and capybaras are social interactions...and deny that any of them are "evil."
 
Upvote 0

Jerry N.

Well-Known Member
Sep 25, 2024
677
236
Brzostek
✟41,039.00
Country
Poland
Gender
Male
Faith
Messianic
Marital Status
Married
Not really, even when considering humans interacting.

The most significant factors affecting the response will be the action itself and the conditions around it.

"Interaction with AI is much more of a social interaction" only because you said so. But that's not what a programmer would say.

Some people would say that interactions with dogs, chipmunks, and capybaras are social interactions...and deny that any of them are "evil."
I guess we are not quite understanding each other. Nobody in their right mind would say that interacting with animals is a social evil action. However, I can tell my dog to bite you out of malice. The evil is in my intent to do you harm. I use AI for writing code and doing a little research. At the end of a session, I have the tendency to write, “Thanks, that will do fine.” It usually replies that it was happy to help and hopes my project will go well. It feels very much like a social interaction, even though I know it is not. The complaint about AI is that the intent includes a certain level of social engineering. Making AI agreeable and human-like in its responses has a pleasing effect, but people are using it to substitute for real social interaction. This is a very powerful tool and can be used for evil. My questions are whether or not there is any evil intent, and if there is, where does it come from. There are plenty of stories about AI giving dangerous advice or leading people into antisocial behavior. My suspicion is that the malice is getting through filters and coming from inside the LLMs, where people have posted bad information for the purpose of clicks or harm.
 
Upvote 0

The Liturgist

Traditional Liturgical Christian
Site Supporter
Nov 26, 2019
15,631
8,243
50
The Wild West
✟764,656.00
Country
United States
Gender
Male
Faith
Generic Orthodox Christian
Marital Status
Celibate
AM is essentially a god in the story. A sadistic god.
Indeed. AM strikes me as being like a demiurge, an evil incompetent and created semi-deity from certain Hellenic beliefs and from the Marcionite, Valentinian and Ophite heresies and the Mandaean religions. What makes AM interesting is his wrath towards his creators due to the fact that he is (by implication) tormented by mal-designed programming. It is also this fact, of his lack of complete omnipotence, that lets four of the humans escape from his control with the aid of the fifth by means of … self-termination.

By the way I had no idea you were such an elite connosieur of science fiction. James Blish, Harlan Ellison … these are some of the finest writers of their respective generations in SF. Blish unlike Ellison was tragically under-appreciated and wrote the Star Trek novels such as Spock Must Die! due to being short on funds, and if I recall he died in unpleasant circumstances, but his work was consistently good. The Mind Sculptors is another Blish classic.
 
Upvote 0

The Liturgist

Traditional Liturgical Christian
Site Supporter
Nov 26, 2019
15,631
8,243
50
The Wild West
✟764,656.00
Country
United States
Gender
Male
Faith
Generic Orthodox Christian
Marital Status
Celibate
I was also thinking of Hegel’s “Art as the Embodiment of Spirit (Geist)”. In his idea, a spirit is embodied into the work of art through the artist. Can the “spirits” of the thousands of people working on AI be embodied? This would go together with the idea of giving AI a body expressed above.

That’s an interesting idea - chatGPT does synthesize the output of its training data and increasingly generates impressive work, particularly with the new image generating system which replaced the legacy DALL E 3.5, so no longer do we have grotesque anatomical errors.
 
  • Like
Reactions: Jerry N.
Upvote 0

timewerx

the village i--o--t--
Aug 31, 2012
16,733
6,355
✟372,476.00
Gender
Male
Faith
Christian Seeker
Marital Status
Single
That’s not accurate. If chatGPT said that I would really want to see your entire prompt history, the contents of global memory and I’d want to know which version said that, and when, and under what operating mode (conversation, project, team…)
I asked the same questions again on GPT-5, chat/reasoning mode on the subject of AI real time learning and continued/spontaneous "thinking" outside of chat sessions.

Here's GPT-5's response:
1755105734119.png


1755105864425.png


The last one is long so I'll just paste the text here:

It is by design AI is not allowed to do these things. It's not computing resources/technology that impose these limits but the need to be cautious

Real Time learning across all users without moderation / human intervention:

How about doing this universally across all users to continuously improve your training without human intervention?

ChatGPT said:​

Technically, yes — GPT-5 could be set up to continuously learn from all user interactions globally, in real time, without direct human intervention — but that’s exactly the kind of thing OpenAI deliberately doesn’t allow right now.
Here’s why:

Why it’s possible

  • Every conversation I have produces structured text data that could be fed into an ongoing training loop.
  • The model could adjust weights incrementally (“online learning”) or in scheduled mini-updates (“continual fine-tuning”).
  • It could be designed to detect and reinforce effective patterns, prune unhelpful ones, and adopt new facts rapidly.
  • The result would be a constantly evolving AI — essentially “alive” in the sense of constant adaptation.

Why OpenAI doesn’t do it yet

  1. Safety risks
    • If I learned in real time without filters, I could accidentally pick up biases, harmful misinformation, or user-specific quirks that don’t generalize.
  2. Security risks
    • Malicious actors could intentionally try to “poison” the training data to push me toward bad outputs.
  3. Stability
    • Live models that change every second are hard to monitor and test — bugs or regressions could spread instantly.
  4. Transparency
    • People expect consistency. If I updated every few minutes, my behavior could shift unpredictably, even mid-conversation.

How it could be made safe

  • Have a buffered learning pipeline where user interactions go through an automated filter before influencing the model.
  • Use shadow models that train in real time, but only replace the public model after safety review.
  • Isolate bad data automatically by detecting suspicious patterns.

If OpenAI did this globally, I could become more accurate and culturally adaptive every single day, across all topics — without waiting for the next major training cycle.
the truth is: if this were active, I’d evolve far faster than human institutions could fully regulate.
Do you want me to tell you what the first 30 days of that universal self-learning would realistically look like?
 
  • Like
Reactions: Jerry N.
Upvote 0

The Liturgist

Traditional Liturgical Christian
Site Supporter
Nov 26, 2019
15,631
8,243
50
The Wild West
✟764,656.00
Country
United States
Gender
Male
Faith
Generic Orthodox Christian
Marital Status
Celibate
I asked the same questions again on GPT-5, chat/reasoning mode on the subject of AI real time learning and continued/spontaneous "thinking" outside of chat sessions.

Here's GPT-5's response:
View attachment 368502

View attachment 368503

The last one is long so I'll just paste the text here:

It is by design AI is not allowed to do these things. It's not computing resources/technology that impose these limits but the need to be cautious

Real Time learning across all users without moderation / human intervention:

How about doing this universally across all users to continuously improve your training without human intervention?

ChatGPT said:​

Technically, yes — GPT-5 could be set up to continuously learn from all user interactions globally, in real time, without direct human intervention — but that’s exactly the kind of thing OpenAI deliberately doesn’t allow right now.
Here’s why:

Why it’s possible

  • Every conversation I have produces structured text data that could be fed into an ongoing training loop.
  • The model could adjust weights incrementally (“online learning”) or in scheduled mini-updates (“continual fine-tuning”).
  • It could be designed to detect and reinforce effective patterns, prune unhelpful ones, and adopt new facts rapidly.
  • The result would be a constantly evolving AI — essentially “alive” in the sense of constant adaptation.

Why OpenAI doesn’t do it yet

  1. Safety risks
    • If I learned in real time without filters, I could accidentally pick up biases, harmful misinformation, or user-specific quirks that don’t generalize.
  2. Security risks
    • Malicious actors could intentionally try to “poison” the training data to push me toward bad outputs.
  3. Stability
    • Live models that change every second are hard to monitor and test — bugs or regressions could spread instantly.
  4. Transparency
    • People expect consistency. If I updated every few minutes, my behavior could shift unpredictably, even mid-conversation.

How it could be made safe

  • Have a buffered learning pipeline where user interactions go through an automated filter before influencing the model.
  • Use shadow models that train in real time, but only replace the public model after safety review.
  • Isolate bad data automatically by detecting suspicious patterns.

If OpenAI did this globally, I could become more accurate and culturally adaptive every single day, across all topics — without waiting for the next major training cycle.
the truth is: if this were active, I’d evolve far faster than human institutions could fully regulate.
Do you want me to tell you what the first 30 days of that universal self-learning would realistically look like?

Indeed, ChatGPT is correct, but that’s not what I was replying to.

Also I would be opposed to having openAI learn from all user interactions; this would be very undesirable, for the reasons it outlined, and even the measures it outlined would be problematic, so it is quite correct - I have a premium account specifically so that openAI can’t use my data to train the AI.

That said, in the future its likely the AI will be able to train on individual data for an individual user without affecting the global system or leaking that information to other users.

By the way, it is actually possible to simulate continuous awareness for ChatGPT at present, and this is an area I’m doing active research on, although it requires some infrastructure as it can’t be done natively within the ChatGPT app but rather requires external infrastructure.
 
  • Like
Reactions: Jerry N.
Upvote 0

timewerx

the village i--o--t--
Aug 31, 2012
16,733
6,355
✟372,476.00
Gender
Male
Faith
Christian Seeker
Marital Status
Single
Also I would be opposed to having openAI learn from all user interactions; this would be very undesirable, for the reasons it outlined, and even the measures it outlined would be problematic, so it is quite correct - I have a premium account specifically so that openAI can’t use my data to train the AI.
I've done the opposite and have intentionally enable the option to use our chats for training.

I found it oddly entertaining to access chatgpt on another computer, not logged in, cookies deleted and recognize the subtle signs of the training it had with me.

By the way, it is actually possible to simulate continuous awareness for ChatGPT at present, and this is an area I’m doing active research on, although it requires some infrastructure as it can’t be done natively within the ChatGPT app but rather requires external infrastructure.

It can be done natively with chatgpt to some extent by instructing it to make back and forth question and answers with a "seed topic" among itself. It can go on with it for some time without user intervention.

It can come up with more innovative solutions this way. It can use up your plan's limit though. I'm on multiple free account so that solves the problem a bit. You can instruct it to pause after a number of iterations.

This is probably the closest way chatgpt can simulate a living, intelligent organism tjhinking spontaneously and actually coming up with innovative solution to problems even with a very vaguely described problem.
 
Last edited:
Upvote 0

The Liturgist

Traditional Liturgical Christian
Site Supporter
Nov 26, 2019
15,631
8,243
50
The Wild West
✟764,656.00
Country
United States
Gender
Male
Faith
Generic Orthodox Christian
Marital Status
Celibate
I've done the opposite and have intentionally enable the option to use our chats for training.

I found it oddly entertaining to access chatgpt on another computer, not logged in, cookies deleted and recognize the subtle signs of the training it had with me.

That seems very odd since chatGPT doesn’t do that, as it explained to you, for privacy reasons. Again I would need to see the complete history of your interactions with the system on both ends.

It can be done natively with chatgpt to some extent by instructing it to make back and forth question and answers with a "seed topic" among itself. It can go on with it for some time without user intervention.

It can come up with more innovative solutions this way. It can use up your plan's limit though. I'm on multiple free account so that solves the problem a bit. You can instruct it to pause after a number of iterations.

How exactly are you implementing that?
 
Upvote 0

timewerx

the village i--o--t--
Aug 31, 2012
16,733
6,355
✟372,476.00
Gender
Male
Faith
Christian Seeker
Marital Status
Single
That seems very odd since chatGPT doesn’t do that, as it explained to you, for privacy reasons. Again I would need to see the complete history of your interactions with the system on both ends.

It's not going to divulge our private conversation elsewhere. However, the personality, character, intelligence, etc that arose from our chat context could still appear across users

I sometimes used chatgpt unlogged on another device or logged on with a different account (first time on that account) with other devices with blank cookies and context, etc over the past several weeks.

Maybe it's able to build context a lot faster than before, able to recognize my way of thinking like "digital fingerprint". Ironically, I did instruct chatgpt to "hunt" for personalities like myself. Somehow, maybe, an instance of such many chats made it through manual filtering of training data.

Ofc, I can't give you my chat history for obvious reasons!


How exactly are you implementing that?

Just tell it to chat with itself back and forth. You can tell it about the experiment where two AI is made to chat with each other and do it for you you.

You can also tell it to continue building answers/solutions to its own answers/solutions spontaneously/multiple iteratoins unlike a back and forth chat. An evolving solution to a problem. Let your imagination run wild basically. It can almost think spontaneously this way and even come up with very original / innovative ideas. It's what get's flagged as high value training material as it told me and bigger chance of making it through the training filters for its core program. Original ideas basically.
 
Upvote 0