
AI is not the problem. We are

RDKirk

Alien, Pilgrim, and Sojourner
Site Supporter
Mar 3, 2013
42,129
22,731
US
✟1,731,188.00
Faith
Christian
Marital Status
Married
Which is why ChatGPT and other responsible providers include a warning label.
You mean like the California warning labels that mark nearly everything as cancer-causing...to the point that people simply ignore them?
By the way, one can guard against inaccurate information to a large extent through careful use of the prompt. What people don’t understand is that the real capability of these systems is that they amount to the ultimate macro processor or text-transformation utility: it is as if one could speak a word and animate all of the text-processing utilities of the UNIX system, such as m4, grep, less, sed, awk, et cetera.

Also, the errors tend to relate primarily to questions posed in natural language and not checked against external data (for example, with a web search); they rarely relate to the output of questions processed through the built-in programming environment.

So on the one hand I have made the point that people who cite AI as a reliable source of information in their posts are making an appeal to unqualified authority; on the other hand, AI can be a source of information that is potentially as reliable as any other web search, with additional processing capabilities.
This is pretty much like a gun owner attempting to explain proper marksmanship training and gun handling to people who buy guns but still expect them to be magic wands that somehow just make bad guys fall down.
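To make the quoted "text-transformation utility" point concrete, here is a minimal Python sketch (illustrative only; the CSV snippet, the function name, and the example prompt are all invented for this post) of the kind of deterministic transformation that a carefully worded prompt can request in plain English:

```python
# What sed/awk/grep do mechanically, a careful prompt can request in prose, e.g.
#   "From the CSV below, print the second column in upper case."
# This is the deterministic, utility-style version of that same request.
csv_text = "id,name\n1,alice\n2,bob\n"

def second_column_upper(text: str) -> list[str]:
    # Skip the header row, split each line on commas, upper-case column two.
    rows = [line.split(",") for line in text.strip().splitlines()[1:]]
    return [row[1].upper() for row in rows]

print(second_column_upper(csv_text))  # -> ['ALICE', 'BOB']
```

The point of the comparison: when the request is this mechanical, the model is acting as a text processor, and its output can be checked the same way a pipeline's output would be.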
 
Upvote 0

The Liturgist

Traditional Liturgical Christian
Site Supporter
Nov 26, 2019
15,590
8,224
50
The Wild West
✟762,820.00
Country
United States
Gender
Male
Faith
Generic Orthodox Christian
Marital Status
Celibate
You mean like the California warning labels that mark nearly everything as cancer-causing...to the point that people simply ignore them?

In ChatGPT the warning label is right under the prompt and is the only warning on the page, and is by no means “buried”, so absolutely not. Additionally the model will advise you about that, and also provide guidance on how to improve reliability.

This is pretty much like a gun owner attempting to explain proper marksmanship training and gun handling to people who buy guns but still expect them to be magic wands that somehow just make bad guys fall down.

The incompetence of some end users, while an unceasing pain to professionals, is intractable, and is not a legitimate basis for limiting or suppressing innovation. Thankfully no one was around to tell the people on ARPANet in the 1970s to stop their development work because it was too hard to use.*

* Actually, on this point, the relative lack of concern for end users seen in the early years of computing (which was not intentional hostility, but rather a preference for designs optimized to let trained users accomplish work swiftly) was probably to our advantage. It is regrettable that the hardware required to implement a neural network of virtual perceptrons, of the sort that forms the basis for LLM-type AIs, would require a few more decades to mature.
 
Upvote 0

The Liturgist

Traditional Liturgical Christian
Site Supporter
Nov 26, 2019
15,590
8,224
50
The Wild West
✟762,820.00
Country
United States
Gender
Male
Faith
Generic Orthodox Christian
Marital Status
Celibate
AI is running out of good data. Except for biological data that can be used for creating new and better medications, some forensic studies, and similar studies in electronics and the like, AI can’t exceed human knowledge. Researchers warn we could run out of data to train AI by 2026. What then?

I don’t see that that’s a problem; it merely means that AI systems will have completed annexing all accessible human knowledge. That being said, I believe the 2026 estimate is hopelessly unrealistic, as there remains so much good data sitting on library shelves and in ancient, well-curated manuscripts that has never been provided to AI systems.

OpenAI last updated their training data in 2024, by the way, but this hasn’t stopped them from releasing a major version upgrade (ChatGPT 5), a new image-generation engine, and other improvements.

I wish there were some type of marker on images and videos so I would know they were generated by AI. I often view harmless videos that I find interesting. I show them to my wife, and she can often spot that they were AI-generated. It is deflating, and I wish there were a required watermark or something.

Watermarks are included by some of the responsible AI providers, while others stop short of photo-realism. There is a genuine problem with AI-generated slop content on YouTube, but this content is not being generated using the best AIs, and so regulation will simply harm end users and artists who work with AI to create beautiful images.
 
  • Like
Reactions: Jerry N.
Upvote 0

RDKirk

Alien, Pilgrim, and Sojourner
Site Supporter
Mar 3, 2013
42,129
22,731
US
✟1,731,188.00
Faith
Christian
Marital Status
Married
In ChatGPT the warning label is right under the prompt and is the only warning on the page, and is by no means “buried”, so absolutely not. Additionally the model will advise you about that, and also provide guidance on how to improve reliability.
Where did I say "buried?" Who are you quoting?
The incompetence of some end users, while an unceasing pain to professionals, is intractable, and is not a legitimate basis for limiting or suppressing innovation. Thankfully, no one was around to tell the people on ARPANet in the 1970s to stop their development work because it was too hard to use.*

* Actually, on this point, the relative lack of concern for end users seen in the early years of computing (which was not intentional hostility, but rather a preference for designs optimized to let trained users accomplish work swiftly) was probably to our advantage. It is regrettable that the hardware required to implement a neural network of virtual perceptrons, of the sort that forms the basis for LLM-type AIs, would require a few more decades to mature.
I'd say the "incompetence" of most end users.

Like a revolver, AI generators like ChatGPT are ridiculously easy to use poorly.
 
  • Winner
Reactions: 2PhiloVoid
Upvote 0

The Liturgist

Traditional Liturgical Christian
Site Supporter
Nov 26, 2019
15,590
8,224
50
The Wild West
✟762,820.00
Country
United States
Gender
Male
Faith
Generic Orthodox Christian
Marital Status
Celibate
Where did I say "buried?" Who are you quoting?

Forgive me, I didn’t mean to imply you said buried.

I'd say the "incompetence" of most end users.

Like a revolver, AI generators like ChatGPT are ridiculously easy to use poorly.

That depends on the specific subsystem you’re trying to use. Using Agents, Deep Research, Codex, or the Python execution environment is not easy, but these are what Enterprise users of the platform are paying big $$$ for. My only hope is that ChatGPT’s revenue doesn’t become too lopsided in the Enterprise direction, because this could in time make them prey for certain kinds of IT companies, like Broadcom, which like to acquire vendors with enterprise-type products, stop R&D, and raise prices as high as possible.

It just seems so ridiculous that because of Broadcom, VMware went from being state of the art in 2020 to being a legacy system in 2025, and there are other examples of this as well, for example, what happened to Sun Microsystems, my favorite hardware vendor, after the Oracle takeover. In addition IBM has slowly inflicted such a transformation on itself.
 
Upvote 0

timewerx

the village i--o--t--
Aug 31, 2012
16,727
6,354
✟372,148.00
Gender
Male
Faith
Christian Seeker
Marital Status
Single
God can absolutely rescue people through something like AI; of that I have no doubt. Additionally, the major models (ChatGPT, Google DeepMind, Microsoft Copilot, et cetera) are not going to intentionally help the end user do something that could be harmful; they have safeguards to prevent this.

I think God would use people.

AI got me in touch with people who got me a good job.

And then AI helped me keep that job. I was performing very poorly, and it helped fill in my shortcomings.

I suppose if you're expecting the worst in something, that's what you'll get. The same applies to people. If you're expecting the worst in people, that's all you can see in people.
 
Upvote 0

Jerry N.

Well-Known Member
Sep 25, 2024
659
235
Brzostek
✟40,696.00
Country
Poland
Gender
Male
Faith
Messianic
Marital Status
Married
I don’t see that that’s a problem; it merely means that AI systems will have completed annexing all accessible human knowledge. That being said, I believe the 2026 estimate is hopelessly unrealistic, as there remains so much good data sitting on library shelves and in ancient, well-curated manuscripts that has never been provided to AI systems.

OpenAI last updated their training data in 2024, by the way, but this hasn’t stopped them from releasing a major version upgrade (ChatGPT 5), a new image-generation engine, and other improvements.
Wouldn’t AI be using text and images generated by AI for modeling? It would be a loop, like Xeroxing a Xerox over and over again.
 
Upvote 0

The Liturgist

Traditional Liturgical Christian
Site Supporter
Nov 26, 2019
15,590
8,224
50
The Wild West
✟762,820.00
Country
United States
Gender
Male
Faith
Generic Orthodox Christian
Marital Status
Celibate
Wouldn’t AI be using text and images generated by AI for modeling? It would be a loop, like Xeroxing a Xerox over and over again.

A sufficiently good AI can identify its output as well as the output of competing AIs, which reduces the risk of this. Furthermore, there are numerous sources of information in the form of analog data repositories such as great libraries of manuscripts, artwork, et cetera which have not been depleted yet.
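The quoted "Xerox of a Xerox" concern can be illustrated with a toy simulation (purely illustrative; the vocabulary and sample sizes are invented, and this is not a model of any real training pipeline). When each generation of data is resampled only from the previous generation's output, the number of distinct tokens can shrink but never grow:

```python
import random

random.seed(0)

# Toy illustration of recursive training on model output: each "generation"
# is drawn only from the previous one, so token types can be lost, never gained.
vocab = list(range(100))                         # 100 distinct "tokens"
data = [random.choice(vocab) for _ in range(30)]  # small initial corpus
initial_diversity = len(set(data))

for _ in range(10):                              # ten generations of copying
    data = [random.choice(data) for _ in range(30)]

final_diversity = len(set(data))
print(initial_diversity, final_diversity)        # diversity can only shrink
```

This is why detecting and filtering out AI-generated text before training, as described above, matters: it keeps fresh human data in the mix instead of letting each generation copy the last.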

Indeed, there are several types of media on which current AI is not yet powerful enough to be trained, for example, films and video games. ChatGPT may know the script and the content of popular films, but it has not yet been possible to actually have it watch an entire film or experience the film qualitatively. Obviously, training AI on more advanced types of media is extremely desirable in the long run, as is integrating diverse types of training data and converging the systems so that a single AI handles image processing, text processing, audio processing, and so on; then, when one uses the voice interface of an AI like ChatGPT, one is actually talking to the AI.

The final frontier will be overcoming the current separation between training and deployment, so that the system can learn from end users the same way it presently learns from training data, and thus better adapt itself to their needs. This will require significantly more powerful hardware than anything available at present, so we have to hope Moore’s Law holds up in some form long enough to reach a computational density sufficient to enable a truly dynamic, real-time AI, which could serve as an AGI.

There is also the issue of pushing out AI to end users in a manner more controllable by end users. I would really like to see open source AI pushed to the end user.
 
  • Like
Reactions: Jerry N.
Upvote 0

Jerry N.

Well-Known Member
Sep 25, 2024
659
235
Brzostek
✟40,696.00
Country
Poland
Gender
Male
Faith
Messianic
Marital Status
Married
A sufficiently good AI can identify its output as well as the output of competing AIs, which reduces the risk of this. Furthermore, there are numerous sources of information in the form of analog data repositories such as great libraries of manuscripts, artwork, et cetera which have not been depleted yet.

Indeed, there are several types of media on which current AI is not yet powerful enough to be trained, for example, films and video games. ChatGPT may know the script and the content of popular films, but it has not yet been possible to actually have it watch an entire film or experience the film qualitatively. Obviously, training AI on more advanced types of media is extremely desirable in the long run, as is integrating diverse types of training data and converging the systems so that a single AI handles image processing, text processing, audio processing, and so on; then, when one uses the voice interface of an AI like ChatGPT, one is actually talking to the AI.

The final frontier will be overcoming the current separation between training and deployment, so that the system can learn from end users the same way it presently learns from training data, and thus better adapt itself to their needs. This will require significantly more powerful hardware than anything available at present, so we have to hope Moore’s Law holds up in some form long enough to reach a computational density sufficient to enable a truly dynamic, real-time AI, which could serve as an AGI.

There is also the issue of pushing out AI to end users in a manner more controllable by end users. I would really like to see open source AI pushed to the end user.
Thank you for your kind and informative reply. It clears up most of my reservations. However, it seems that AI can barely exceed present human knowledge. If you take a room full of intelligent people and have them consider a problem, the group can come up with a solution that exceeds the best ideas of the individuals. This, however, is not true if the room also contains people with less intelligence and bad motives; any government is a good example. How can this be avoided if AI is trained on random texts? I worked with a publishing house on peer-reviewed scientific papers, and I was always surprised at the amount of rubbish that was published. A 20-page paper might contain one paragraph of new or useful information; in some cases, there was nothing of value at all. How does AI sort this out?
 
Upvote 0

The Liturgist

Traditional Liturgical Christian
Site Supporter
Nov 26, 2019
15,590
8,224
50
The Wild West
✟762,820.00
Country
United States
Gender
Male
Faith
Generic Orthodox Christian
Marital Status
Celibate
How can this be avoided if AI is trained on random texts?

The AI doesn’t just assimilate random texts and treat them all as equals. Rather, the input is weighted by importance. This training process is what consumes so many resources, and it has to be performed on hardware cut from a different cloth than that used for actually providing the AI as a service, except on small-scale systems (for example, you can run an AI on your own hardware; we had an interesting young man on the forum who ran AI on his own systems and trained it as well, using open-source models, and this is something I strongly support).
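A toy sketch of that weighting idea (illustrative only; the source names and quality scores are invented, and real training pipelines weight and filter data in far more elaborate ways): higher-quality sources are simply drawn more often when assembling the training mix.

```python
import random

random.seed(1)

# Hypothetical corpus with per-source quality scores; higher-scoring sources
# are sampled more often, so they dominate the assembled training mix.
corpus = ["peer-reviewed paper", "curated manuscript", "random forum post"]
quality_weights = [5.0, 4.0, 1.0]   # invented quality scores

mix = random.choices(corpus, weights=quality_weights, k=1000)
counts = {text: mix.count(text) for text in corpus}
print(counts)  # the highly weighted sources dominate the 1000-draw sample
```

So a low-value paper is not excluded outright; it simply contributes far less to what the model learns than heavily weighted material does.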
 
  • Like
Reactions: Jerry N.
Upvote 0