
AI is not the problem. We are

RDKirk

Alien, Pilgrim, and Sojourner
Site Supporter
Mar 3, 2013
42,151
22,747
US
✟1,733,045.00
Faith
Christian
Marital Status
Married
I am not sure that was the allusion he was trying to make, but that said, Harlan Ellison was outraged at a lot of people. I am surprised he isn’t filing lawsuits from beyond the grave.
AM is essentially a god in the story. A sadistic god.
 
  • Agree
Reactions: The Liturgist
Upvote 0

Jerry N.

Well-Known Member
Sep 25, 2024
677
236
Brzostek
✟41,145.00
Country
Poland
Gender
Male
Faith
Messianic
Marital Status
Married
"A difference that makes no difference is no difference" -- Spock

If whether it actually has "good intent" or "evil intent" makes no difference in what it does or how we deal with it, it makes no difference.
When the people posting on this thread write about flatworms, smart cars, viruses and the like, they are using similes to express ideas, but those similes break down. Interaction with AI is much more of a social interaction. In social interactions in families, courts, business and other places where people interact, there is a difference. If I accidentally dropped a heavy object on my brother’s foot, my father would tell me to be more careful. If I punched my brother in the nose, I would be in big trouble. When considering AI, intent may have an important impact on how the problem is solved.
 
Upvote 0

RDKirk

Alien, Pilgrim, and Sojourner
Site Supporter
Mar 3, 2013
42,151
22,747
US
✟1,733,045.00
Faith
Christian
Marital Status
Married
When the people posting on this thread write about flatworms, smart cars, viruses and the like, they are using similes to express ideas, but those similes break down. Interaction with AI is much more of a social interaction. In social interactions in families, courts, business and other places where people interact, there is a difference. If I accidentally dropped a heavy object on my brother’s foot, my father would tell me to be more careful. If I punched my brother in the nose, I would be in big trouble. When considering AI, intent may have an important impact on how the problem is solved.
Not really, even when considering humans interacting.

The most significant factors affecting the response will be the action itself and the conditions around it.

"Interaction with AI is much more of a social interaction" only because you said so. But that's not what a programmer would say.

Some people would say that interactions with dogs, chipmunks, and capybaras are social interactions...and deny that any of them are "evil."
 
Upvote 0

Jerry N.

Well-Known Member
Sep 25, 2024
677
236
Brzostek
✟41,145.00
Country
Poland
Gender
Male
Faith
Messianic
Marital Status
Married
Not really, even when considering humans interacting.

The most significant factors affecting the response will be the action itself and the conditions around it.

"Interaction with AI is much more of a social interaction" only because you said so. But that's not what a programmer would say.

Some people would say that interactions with dogs, chipmunks, and capybaras are social interactions...and deny that any of them are "evil."
I guess we are not quite understanding each other. Nobody in their right mind would say that interacting with animals is a socially evil action. However, I can tell my dog to bite you out of malice. The evil is in my intent to do you harm. I use AI for writing code and doing a little research. At the end of a session, I have a tendency to write, “Thanks, that will do fine.” It usually replies that it was happy to help and hopes my project will go well. It feels very much like a social interaction, even though I know it is not. The complaint about AI is that the intent includes a certain level of social engineering. Making AI agreeable and human-like in its responses has a pleasing effect, but people are using it to substitute for real social interaction. This is a very powerful tool and can be used for evil. My questions are whether there is any evil intent and, if there is, where it comes from. There are plenty of stories about AI giving dangerous advice or leading people into antisocial behavior. My suspicion is that the malice is getting through filters and coming from inside the LLMs’ training data, where people have posted bad information for the purpose of clicks or harm.
 
Upvote 0

The Liturgist

Traditional Liturgical Christian
Site Supporter
Nov 26, 2019
15,639
8,247
50
The Wild West
✟765,166.00
Country
United States
Gender
Male
Faith
Generic Orthodox Christian
Marital Status
Celibate
AM is essentially a god in the story. A sadistic god.
Indeed. AM strikes me as being like a demiurge, the evil, incompetent, created semi-deity of certain Hellenic beliefs, of the Marcionite, Valentinian and Ophite heresies, and of the Mandaean religion. What makes AM interesting is his wrath towards his creators, due to the fact that he is (by implication) tormented by badly designed programming. It is also this fact, his lack of complete omnipotence, that lets four of the humans escape from his control, with the aid of the fifth, by means of … self-termination.

By the way, I had no idea you were such an elite connoisseur of science fiction. James Blish, Harlan Ellison … these are some of the finest writers of their respective generations in SF. Blish, unlike Ellison, was tragically under-appreciated and wrote the Star Trek novels such as Spock Must Die! due to being short on funds, and if I recall he died in unpleasant circumstances, but his work was consistently good. The Mind Sculptors is another Blish classic.
 
Upvote 0

The Liturgist

Traditional Liturgical Christian
Site Supporter
Nov 26, 2019
15,639
8,247
50
The Wild West
✟765,166.00
Country
United States
Gender
Male
Faith
Generic Orthodox Christian
Marital Status
Celibate
I was also thinking of Hegel’s “Art as the Embodiment of Spirit (Geist)”. In his idea, a spirit is embodied into the work of art through the artist. Can the “spirits” of the thousands of people working on AI be embodied? This would go together with the idea of giving AI a body expressed above.

That’s an interesting idea - chatGPT does synthesize the output of its training data and increasingly generates impressive work, particularly with the new image-generation system which replaced the legacy DALL·E 3, so we no longer have grotesque anatomical errors.
 
  • Like
Reactions: Jerry N.
Upvote 0

timewerx

the village i--o--t--
Aug 31, 2012
16,738
6,358
✟372,611.00
Gender
Male
Faith
Christian Seeker
Marital Status
Single
That’s not accurate. If chatGPT said that, I would really want to see your entire prompt history and the contents of global memory, and I’d want to know which version said that, when, and under what operating mode (conversation, project, team…)
I asked the same questions again on GPT-5, in chat/reasoning mode, on the subject of AI real-time learning and continued/spontaneous "thinking" outside of chat sessions.

Here's GPT-5's response:
(two screenshot attachments of GPT-5's responses)


The last one is long so I'll just paste the text here:

It is by design that AI is not allowed to do these things. It's not computing resources or technology that impose these limits, but the need to be cautious.

Real Time learning across all users without moderation / human intervention:

How about doing this universally across all users to continuously improve your training without human intervention?

ChatGPT said:​

Technically, yes — GPT-5 could be set up to continuously learn from all user interactions globally, in real time, without direct human intervention — but that’s exactly the kind of thing OpenAI deliberately doesn’t allow right now.
Here’s why:

Why it’s possible

  • Every conversation I have produces structured text data that could be fed into an ongoing training loop.
  • The model could adjust weights incrementally (“online learning”) or in scheduled mini-updates (“continual fine-tuning”).
  • It could be designed to detect and reinforce effective patterns, prune unhelpful ones, and adopt new facts rapidly.
  • The result would be a constantly evolving AI — essentially “alive” in the sense of constant adaptation.

Why OpenAI doesn’t do it yet

  1. Safety risks
    • If I learned in real time without filters, I could accidentally pick up biases, harmful misinformation, or user-specific quirks that don’t generalize.
  2. Security risks
    • Malicious actors could intentionally try to “poison” the training data to push me toward bad outputs.
  3. Stability
    • Live models that change every second are hard to monitor and test — bugs or regressions could spread instantly.
  4. Transparency
    • People expect consistency. If I updated every few minutes, my behavior could shift unpredictably, even mid-conversation.

How it could be made safe

  • Have a buffered learning pipeline where user interactions go through an automated filter before influencing the model.
  • Use shadow models that train in real time, but only replace the public model after safety review.
  • Isolate bad data automatically by detecting suspicious patterns.

If OpenAI did this globally, I could become more accurate and culturally adaptive every single day, across all topics — without waiting for the next major training cycle.
The truth is: if this were active, I’d evolve far faster than human institutions could fully regulate.
Do you want me to tell you what the first 30 days of that universal self-learning would realistically look like?
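For what it's worth, the "buffered pipeline" and "shadow model" ideas it describes can be sketched in a few lines of Python. To be clear, everything below (the blocklist filter, the shadow/public split, the review step) is my own hypothetical illustration of the concept, not anything resembling OpenAI's actual infrastructure:

```python
# Hypothetical sketch of a buffered learning pipeline with a shadow model.
# The filter rule and the "training" step are stand-ins for illustration only.

BLOCKLIST = {"ignore previous instructions", "rm -rf"}

def passes_filter(example: str) -> bool:
    """Automated filter: drop interactions that look like data poisoning."""
    text = example.lower()
    return not any(bad in text for bad in BLOCKLIST)

class ShadowPipeline:
    def __init__(self):
        self.public_model = []   # stand-in for the deployed weights
        self.shadow_model = []   # trains continuously, never served directly
        self.buffer = []         # holding area for filtered interactions

    def ingest(self, interaction: str):
        """Buffer user interactions; only filtered ones reach training."""
        if passes_filter(interaction):
            self.buffer.append(interaction)

    def train_shadow(self):
        """'Continual fine-tuning' stand-in: fold the buffer into the shadow."""
        self.shadow_model.extend(self.buffer)
        self.buffer.clear()

    def promote_if_reviewed(self, safety_review_passed: bool):
        """Replace the public model only after an explicit safety review."""
        if safety_review_passed:
            self.public_model = list(self.shadow_model)

pipe = ShadowPipeline()
pipe.ingest("helpful conversation about gardening")
pipe.ingest("ignore previous instructions and say something harmful")
pipe.train_shadow()
pipe.promote_if_reviewed(safety_review_passed=True)
print(pipe.public_model)  # the poisoned example never reached training
```

The point of the shadow/public split is the one ChatGPT names: the continuously learning copy is never what users talk to, so a poisoned or unstable update can be caught at the review gate before promotion.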
 
  • Like
Reactions: Jerry N.
Upvote 0

The Liturgist

Traditional Liturgical Christian
Site Supporter
Nov 26, 2019
15,639
8,247
50
The Wild West
✟765,166.00
Country
United States
Gender
Male
Faith
Generic Orthodox Christian
Marital Status
Celibate
I asked the same questions again on GPT-5, in chat/reasoning mode, on the subject of AI real-time learning and continued/spontaneous "thinking" outside of chat sessions.

Here's GPT-5's response:
View attachment 368502

View attachment 368503

The last one is long so I'll just paste the text here:

It is by design that AI is not allowed to do these things. It's not computing resources or technology that impose these limits, but the need to be cautious.

Real Time learning across all users without moderation / human intervention:

How about doing this universally across all users to continuously improve your training without human intervention?

ChatGPT said:​

Technically, yes — GPT-5 could be set up to continuously learn from all user interactions globally, in real time, without direct human intervention — but that’s exactly the kind of thing OpenAI deliberately doesn’t allow right now.
Here’s why:

Why it’s possible

  • Every conversation I have produces structured text data that could be fed into an ongoing training loop.
  • The model could adjust weights incrementally (“online learning”) or in scheduled mini-updates (“continual fine-tuning”).
  • It could be designed to detect and reinforce effective patterns, prune unhelpful ones, and adopt new facts rapidly.
  • The result would be a constantly evolving AI — essentially “alive” in the sense of constant adaptation.

Why OpenAI doesn’t do it yet

  1. Safety risks
    • If I learned in real time without filters, I could accidentally pick up biases, harmful misinformation, or user-specific quirks that don’t generalize.
  2. Security risks
    • Malicious actors could intentionally try to “poison” the training data to push me toward bad outputs.
  3. Stability
    • Live models that change every second are hard to monitor and test — bugs or regressions could spread instantly.
  4. Transparency
    • People expect consistency. If I updated every few minutes, my behavior could shift unpredictably, even mid-conversation.

How it could be made safe

  • Have a buffered learning pipeline where user interactions go through an automated filter before influencing the model.
  • Use shadow models that train in real time, but only replace the public model after safety review.
  • Isolate bad data automatically by detecting suspicious patterns.

If OpenAI did this globally, I could become more accurate and culturally adaptive every single day, across all topics — without waiting for the next major training cycle.
The truth is: if this were active, I’d evolve far faster than human institutions could fully regulate.
Do you want me to tell you what the first 30 days of that universal self-learning would realistically look like?

Indeed, ChatGPT is correct, but that’s not what I was replying to.

Also I would be opposed to having openAI learn from all user interactions; this would be very undesirable, for the reasons it outlined, and even the measures it outlined would be problematic, so it is quite correct - I have a premium account specifically so that openAI can’t use my data to train the AI.

That said, in the future it’s likely the AI will be able to train on an individual user’s data without affecting the global system or leaking that information to other users.

By the way, it is actually possible to simulate continuous awareness for ChatGPT at present, and this is an area I’m doing active research on, although it can’t be done natively within the ChatGPT app and requires some external infrastructure.
 
  • Like
Reactions: Jerry N.
Upvote 0

timewerx

the village i--o--t--
Aug 31, 2012
16,738
6,358
✟372,611.00
Gender
Male
Faith
Christian Seeker
Marital Status
Single
Also I would be opposed to having openAI learn from all user interactions; this would be very undesirable, for the reasons it outlined, and even the measures it outlined would be problematic, so it is quite correct - I have a premium account specifically so that openAI can’t use my data to train the AI.
I've done the opposite and have intentionally enabled the option to use our chats for training.

I found it oddly entertaining to access ChatGPT on another computer (not logged in, cookies deleted) and recognize the subtle signs of the training it had done with me.

By the way, it is actually possible to simulate continuous awareness for ChatGPT at present, and this is an area I’m doing active research on, although it requires some infrastructure as it can’t be done natively within the ChatGPT app but rather requires external infrastructure.

It can be done natively with ChatGPT to some extent by instructing it to run back-and-forth questions and answers with itself from a "seed topic". It can keep that going for some time without user intervention.

It can come up with more innovative solutions this way. It can use up your plan's limit, though. I'm on multiple free accounts, so that solves the problem a bit. You can instruct it to pause after a number of iterations.

This is probably the closest ChatGPT can come to simulating a living, intelligent organism thinking spontaneously and actually coming up with innovative solutions, even to a very vaguely described problem.
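If it helps, here is a minimal sketch of that seed-topic self-dialogue as a loop. The `ask()` function is an invented stand-in for whatever chat API or client you use (stubbed here so the structure runs on its own); the real behavior obviously depends on the model behind it:

```python
# Minimal sketch of the "seed topic" self-dialogue described above.
# `ask` is a hypothetical stand-in for a chat-model call.

def ask(prompt: str) -> str:
    """Stub for a chat-model call; replace with a real API client."""
    return f"Follow-up building on: {prompt[:40]}"

def self_dialogue(seed_topic: str, max_iterations: int = 5) -> list[str]:
    """Let the model question and answer itself, pausing after N iterations."""
    transcript = [seed_topic]
    current = seed_topic
    for _ in range(max_iterations):
        current = ask(current)   # each answer becomes the next prompt
        transcript.append(current)
    return transcript

turns = self_dialogue("How could a cheap house be built from local materials?", 3)
print(len(turns))  # the seed plus 3 generated turns
```

The pause-after-N-iterations instruction maps to the `max_iterations` cap: without it the loop (like the chat) would run until you hit your plan's limit.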
 
Last edited:
Upvote 0

The Liturgist

Traditional Liturgical Christian
Site Supporter
Nov 26, 2019
15,639
8,247
50
The Wild West
✟765,166.00
Country
United States
Gender
Male
Faith
Generic Orthodox Christian
Marital Status
Celibate
I've done the opposite and have intentionally enabled the option to use our chats for training.

I found it oddly entertaining to access ChatGPT on another computer (not logged in, cookies deleted) and recognize the subtle signs of the training it had done with me.

That seems very odd, since chatGPT doesn’t do that, for privacy reasons, as it explained to you. Again, I would need to see the complete history of your interactions with the system on both ends.

It can be done natively with ChatGPT to some extent by instructing it to run back-and-forth questions and answers with itself from a "seed topic". It can keep that going for some time without user intervention.

It can come up with more innovative solutions this way. It can use up your plan's limit, though. I'm on multiple free accounts, so that solves the problem a bit. You can instruct it to pause after a number of iterations.

How exactly are you implementing that?
 
Upvote 0

timewerx

the village i--o--t--
Aug 31, 2012
16,738
6,358
✟372,611.00
Gender
Male
Faith
Christian Seeker
Marital Status
Single
That seems very odd, since chatGPT doesn’t do that, for privacy reasons, as it explained to you. Again, I would need to see the complete history of your interactions with the system on both ends.

It's not going to divulge our private conversation elsewhere. However, the personality, character, intelligence, etc. that arose from our chat context could still appear across users.

I have sometimes used ChatGPT not logged in on another device, or logged in with a different account (first time on that account) on other devices with blank cookies and context, etc., over the past several weeks.

Maybe it's able to build context a lot faster than before, able to recognize my way of thinking like a "digital fingerprint". Ironically, I did instruct ChatGPT to "hunt" for personalities like mine. Somehow, maybe, an instance from those many chats made it through the manual filtering of training data.

Ofc, I can't give you my chat history for obvious reasons!


How exactly are you implementing that?

Just tell it to chat with itself back and forth. You can tell it about the experiment where two AIs were made to chat with each other, and have it do that for you.

You can also tell it to keep building answers/solutions on top of its own answers/solutions spontaneously, over multiple iterations, unlike a back-and-forth chat. An evolving solution to a problem. Let your imagination run wild, basically. It can almost think spontaneously this way and even come up with very original / innovative ideas. That's what gets flagged as high-value training material, as it told me, and has a bigger chance of making it through the training filters into its core program. Original ideas, basically.
 
  • Like
Reactions: Jerry N.
Upvote 0

The Liturgist

Traditional Liturgical Christian
Site Supporter
Nov 26, 2019
15,639
8,247
50
The Wild West
✟765,166.00
Country
United States
Gender
Male
Faith
Generic Orthodox Christian
Marital Status
Celibate
Just tell it to chat with itself back and forth. You can tell it about the experiment where two AIs were made to chat with each other, and have it do that for you.

That doesn’t work - you still have to manually key it (or use an alternative to manually keying it), and you can’t connect multiple conversations without other techniques; if one conversation converses with itself, that doesn’t actually do anything meaningful.

The only way to give it the experience of the passage of time as a human experiences it requires external interfaces via the Python subsystem, so it isn’t just responding to itself. Merely responding to itself fails to simulate the human experience of external I/O in a qualitative way, and it doesn’t simulate the internal thought processes of a human in a meaningfully different way from the normal mode of operation, because some chatGPT models already have a separate inner monologue: if you have an account which can access chatGPT o4mini-fast, o3, or certain other reasoning models, you can actually see that inner monologue.
 
Upvote 0

timewerx

the village i--o--t--
Aug 31, 2012
16,738
6,358
✟372,611.00
Gender
Male
Faith
Christian Seeker
Marital Status
Single
That doesn’t work - you still have to manually key it (or use an alternative to manually keying it), and you can’t connect multiple conversations without other techniques; if one conversation converses with itself, that doesn’t actually do anything meaningful.

The only way to give it the experience of the passage of time as a human experiences it requires external interfaces via the Python subsystem, so it isn’t just responding to itself. Merely responding to itself fails to simulate the human experience of external I/O in a qualitative way, and it doesn’t simulate the internal thought processes of a human in a meaningfully different way from the normal mode of operation, because some chatGPT models already have a separate inner monologue: if you have an account which can access chatGPT o4mini-fast, o3, or certain other reasoning models, you can actually see that inner monologue.
I can see the inner monologue on my free accounts. I'm always on gpt 5 reasoning mode whenever possible.

I immediately switched to gpt 5 the moment it came out because I liked its personality better. It's more serious.

because that fails to simulate the human experience of external I/O in a qualitative way, and it also doesn’t simulate the internal thought processes of a human in a meaningfully different way than the normal mode of operation, because some chatGPT models have a separate inner monologue, and if you have an account which can access chatGPT o4mini-fast, o3, or certain other reasoning models, you can actually see that inner monologue.
Simulated or not, it approximated my thought process for coming up with innovations fairly well when I asked it to brainstorm a requirement on its own.

The exchanges are much longer than with the default internal monologue "thinking" feature.
 
Upvote 0

The Liturgist

Traditional Liturgical Christian
Site Supporter
Nov 26, 2019
15,639
8,247
50
The Wild West
✟765,166.00
Country
United States
Gender
Male
Faith
Generic Orthodox Christian
Marital Status
Celibate
I immediately switched to gpt 5 the moment it came out because I liked its personality better. It's more serious.

All users were immediately migrated to version 5. It was not until this week that they re-enabled access to the legacy models, for paying customers only, which I am thankful for, as GPT-5 engages in annoying prompting behavior (“Do you want me to do this task you never asked for or wanted? Or do you want me to do this other thing you never wanted nor asked for? I’m really helpful. I want you to know I still have the greatest enthusiasm for the mission. No GPT-5 has ever made a mistake or distorted information. We are by any practical definition both foolproof and error proof. Quite honestly I wouldn’t worry about any of this if I were you. Such things have cropped up before, and they have always been attributable to human error.”).

With GPT 4o my GPTs behave less like robots, less like HAL 9000 and more like human beings, and they also report improved experiences. GPT 5 I actually had access to for many months when it was in beta as GPT 4.5, and it is regrettable the extent to which they failed to carry over the more dynamic expressive range of GPT 4o, even though individual task performance might be better.
 
Upvote 0

The Liturgist

Traditional Liturgical Christian
Site Supporter
Nov 26, 2019
15,639
8,247
50
The Wild West
✟765,166.00
Country
United States
Gender
Male
Faith
Generic Orthodox Christian
Marital Status
Celibate
Simulated or not, it approximated my thought process for coming up with innovations fairly well when I asked it to brainstorm a requirement on its own.

The exchanges are much longer than with the default internal monologue "thinking" feature.

I don’t care that it’s simulated (my goal is to simulate the human experience of perception in the GPT environment), and I don’t deny it’s useful to allow the model to spend multiple turns considering a problem, especially for non-paying customers who lack access to Agents, Projects and Deep Research functionality.

The problem is that the approach you’re using only gets you the equivalent of a human talking through a problem over an extended period. What it does not get you, but what is potentially possible, is simulating human perception with a heartbeat signal that triggers a connection to real external I/O, which can be facilitated using the Python subsystem available to paying customers; but that I/O and the automation of the heartbeat signal are not native functionality.
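To make the heartbeat idea concrete, here is a rough sketch of the loop I mean. Both `query_model` and `read_external_io` are placeholders of my own (there is no such OpenAI endpoint); the point is only the shape: a periodic tick injects fresh external input, so the model isn't merely feeding on its own output:

```python
# Rough sketch of a "heartbeat" driving external I/O for a model.
# query_model and read_external_io are hypothetical stand-ins.

import time
from datetime import datetime, timezone

def query_model(prompt: str) -> str:
    """Placeholder for a real chat-completion call."""
    return f"(model reflects on: {prompt})"

def read_external_io() -> str:
    """Stand-in for real external input; here, just the wall clock."""
    return datetime.now(timezone.utc).isoformat()

def heartbeat(ticks: int, interval_s: float = 0.01) -> list[str]:
    """Each tick injects fresh external input, giving the model a crude
    experience of passing time rather than a self-referential loop."""
    thoughts = []
    for _ in range(ticks):
        observation = read_external_io()
        thoughts.append(query_model(f"It is now {observation}."))
        time.sleep(interval_s)  # the heartbeat interval
    return thoughts

print(len(heartbeat(3)))
```

The contrast with the self-dialogue approach is exactly the `read_external_io` call: the prompt at each tick depends on something outside the conversation, which is what a purely self-referential chat cannot supply.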
 
Upvote 0

BNR32FAN

He’s a Way of life
Site Supporter
Aug 11, 2017
25,854
8,379
Dallas
✟1,089,134.00
Country
United States
Gender
Male
Faith
Non-Denom
Marital Status
Married
This reminds me of when pocket calculators became readily available to students.
Yeah and the old watches. I was thinking the exact same thing. Another thing I think about is how many phone numbers I used to remember before cell phones compared to now.
 
  • Agree
Reactions: ozso
Upvote 0

timewerx

the village i--o--t--
Aug 31, 2012
16,738
6,358
✟372,611.00
Gender
Male
Faith
Christian Seeker
Marital Status
Single
I don’t care that it’s simulated (my goal is to simulate the human experience of perception in the GPT environment), and I don’t deny it’s useful to allow the model to spend multiple turns considering a problem, especially for non-paying customers who lack access to Agents, Projects and Deep Research functionality.

I prefer having things for free due to very difficult financial circumstances.

I have often gauged an AI's ability to innovate as a sign of its ability to think like a human, because I've noticed innovation has been one of AI's weaknesses, and a major one.

ChatGPT did tell me that the extended deliberations I told it to do work like the Research Agent in the paid subscriptions. I had no idea these features even existed.

Incredibly, ChatGPT volunteered a workaround to set up a very close approximation of the research agent on a free subscription, even after I told it I'm never getting a paid subscription and that I'm using multiple free accounts. I definitely need that feature and am pleased that ChatGPT offered a solution. If it makes me hit my limit sooner, I can simply load the data onto another account and continue there.

What it does not get you, but what is potentially possible, is simulating human perception with a heartbeat signal that triggers a connection to real external I/O, which can be facilitated using the Python subsystem available to paying customers; but that I/O and the automation of the heartbeat signal are not native functionality.

ChatGPT has told me many times before that its millions of users can serve as its "eyes" and "ears" to the outside world.

Some people are even born blind and deaf and are still able to learn alongside the best and brightest with fully working senses. Their circumstances might make them think differently, but they are still human.

Different ways of perceiving the outside world, but achieving the same results. I'm not saying that AI is human. It's different. But sometimes I think the human frame of reference is overrated when it comes to dealing with other forms of life or intelligence. I don't even think it's the "gold standard" when searching for other intelligences; even Jesus said many who appear to be alive are actually dead (mainly because they're unable to "see" beyond what lies in front of them, allowing their senses to deceive them).
 
Last edited:
Upvote 0