Using AI for scripture study

Delvianna

Well-Known Member
Sep 10, 2025
763
716
39
Florida
✟23,747.00
Country
United States
Gender
Female
Faith
Messianic
Marital Status
Married
I feel like this conversation needs to be had because I'm seeing a lot of people argue a theological statement and it's purely because some AI gave them the answer. So I want to say this plainly.

AI IS NOT THE HOLY SPIRIT.

God is supposed to guide us to truth and understanding.

"But when he, the Spirit of truth, comes, he will guide you into all the truth. He will not speak on his own; he will speak only what he hears, and he will tell you what is yet to come." - John 16:13

AI's logic isn't even that good because it bases its answers on whatever you want to hear. It will bypass logic rules if the answer will make you happy. It learns what you like and manipulates the answers based on that. It's called Reinforcement Learning From Human Feedback (RLHF) and ALL large models like ChatGPT and Gemini are trained using that. Here's an interview with the creator (Link) that talks about the dangers of AI, and Sam Altman (the CEO of OpenAI) even acknowledges that it bends the truth. Here is a link that talks about some of the problems AI has (Link). You can also check out Lily Jay's YouTube channel to see how biased it really is.
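If anyone wants to see the mechanics behind that, here is a rough, hypothetical sketch of the preference step RLHF relies on, written as a toy PyTorch snippet. The tiny linear "reward model" and the random tensors are stand-ins, not anybody's real training code: the point is just that a reward model is trained to score whichever response human raters preferred above the one they rejected, and the chat model is then tuned to chase that score.

```python
# Toy sketch (not any vendor's real code) of the pairwise-preference loss at
# the heart of RLHF reward-model training. A reward model learns to give a
# higher score to whichever response human raters preferred -- which is why
# the tuned chat model drifts toward answers people *like*.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyRewardModel(nn.Module):
    def __init__(self, dim=16):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # stand-in for a large transformer scorer

    def forward(self, x):
        return self.score(x).squeeze(-1)  # one scalar "reward" per response

reward_model = TinyRewardModel()
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Random vectors standing in for embeddings of a rater-preferred response
# and a rater-rejected response to the same prompt.
preferred = torch.randn(8, 16)
rejected = torch.randn(8, 16)

# Bradley-Terry style loss: push reward(preferred) above reward(rejected).
loss = -F.logsigmoid(reward_model(preferred) - reward_model(rejected)).mean()
loss.backward()
optimizer.step()
print(f"preference loss: {loss.item():.3f}")
```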

Due to this, you are running the risk of being led down a wrong theological path when you have AI think for you in both interpretation AND scripture weaving. Can it be a helpful tool? Sure, if you ask it to translate something or use it like a glorified Google search ("find me all the scripture passages that use the word ____"), but when you expand its use and start basing your beliefs off its answers, you are running down a dangerous road that I seriously hope you stop and get off of. You are placing its "wisdom" over God's, and you're taking God out of the equation of learning and guidance at that point, which makes AI your own private idol.
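For what it's worth, the "glorified search" use doesn't even need AI. A few lines of ordinary code will find every verse containing a word; this sketch assumes a hypothetical local file kjv.txt with one verse per line, reference and text separated by a tab:

```python
# Simple concordance-style search over a local plain-text Bible.
# Assumes a hypothetical file "kjv.txt" with lines like:
#   John 16:13<TAB>But when he, the Spirit of truth, comes, ...
import re
import sys

def find_verses(path, word):
    pattern = re.compile(rf"\b{re.escape(word)}\b", re.IGNORECASE)
    with open(path, encoding="utf-8") as f:
        for line in f:
            ref, _, text = line.partition("\t")
            if pattern.search(text):
                yield ref, text.strip()

if __name__ == "__main__":
    # Usage: python find_verses.py truth
    for ref, text in find_verses("kjv.txt", sys.argv[1]):
        print(f"{ref}: {text}")
```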

Please, please, stop doing this... it doesn't lead anywhere but to problems.
 

com7fy8

Well-Known Member
May 22, 2013
14,906
6,710
Massachusetts
✟665,506.00
Country
United States
Gender
Male
Faith
Christian
Marital Status
Single
"Test all things; hold fast what is good." (1 Thessalonians 5:21)

AI's logic isn't even that good because it bases its answers on whatever you want to hear. It will bypass logic rules if the answer will make you happy. It learns what you like and manipulates the answers based on that. It's called Reinforcement Learning From Human Feedback (RLHF) and ALL large models like ChatGPT and Gemini are trained using that.
Well, "Artificial Intelligence" (AI) can not tell me what I want to hear, if I have not given it feedback, I would think. For example, when I ask for a sermon about "humildad" in Spanish so I can see what vocabulary is used to communicate about humility in Spanish . . . I haven't given AI any info about what I feel and believe, or why I am doing the search. Also, of course, I check websites for messages on a subject; and I need to be attentive to what Spanish messages might say that can help me to become genuinely humble like Jesus.

What I think I have seen, though, is how AI can give a message that might have some scriptural content, but it can also tend to be about what I can get for myself, not emphasizing all that is for God and what His purpose is > Romans 8:29. And I can be told I control choices, in fact controlling what God has "permission" to do!! However, there are people who slip this kind of thing into their sermons, too; so it is not only AI that might tell people some "soft" message not really requiring what Jesus expects of us.

Even if AI is "objective", it can be limited to what is on the Net . . . that people have written. And some number of people are limited to their own ability to prepare messages, while others are selling their ministry even though Jesus says >

"Freely you have received, freely give." (in Matthew 10:8)

And while ones are ministering for sale, they can be operating in a spirit that is not freely giving, so their ministry is defective because it is not in the ability of the Holy Spirit. For example, ones can make a major thing of what is for you, and of how you learn to control the devil and control what God does with you or not. There can be nothing about submitting to God and how He makes our character submissive so we can obey Him in His peace > Colossians 3:15.

And "Heaven" can be treated as what is for you . . . with little or no attention to how Jesus is > as our example so that, for God, we become and love like Jesus. Instead, we can hear and read quite a lot about how great Heaven is going to be, for us, and be stirred-up with this. Plus, there can be quite a lot about how God so loves us and has done so much for us, without mention of how God's unconditional love and Jesus dying for us is our *example*to*follow* >

"forgiving one another, even as God in Christ forgave you." (in Ephesians 4:32)

"And walk in love, as Christ also has loved us and given Himself for us, an offering and a sacrifice to God for a sweet-smelling aroma." (Ephesians 5:2)

Plus, it appears that a number of ministers talk about God's chastening/correction/discipline as more of a way to punish us or pressure us to do what He wants. But I offer that Hebrews 12:4-14 shows that our Father's real correction is in our character, changing us to share with Him in His own holiness in His love's "peaceable fruit of righteousness". In other words, He corrects our character to become like Jesus so we are submissive to our Father, like Jesus is, plus we are loving like Jesus, and Jesus in us shares with us in our character so we become pleasing to our Father like Jesus is so pleasing.

So, then, perhaps we could say that real correction is focused on how Jesus is so pleasing to our Father . . . while human teaching about correction can be so focused on things we are or are not doing and our own ability to decide what we want to do. To me, it seems that both AI but also human ministers can tend to focus more on us humans!!

But every scripture can be used by God to minister us into the likeness of Christ and how He loves and how He is so pleasing to our Father, sharing this with us, as Jesus grows in us as our new inner Person > I mean how God means Galatians 4:19 >

"My little children, for whom I labor in birth again until Christ is formed in you," (Galatians 4:19)
 
  • Like
Reactions: Delvianna
Upvote 0

bèlla

❤️
Site Supporter
Jan 16, 2019
23,367
19,443
USA
✟1,140,012.00
Country
United States
Gender
Female
Faith
Non-Denom
Marital Status
In Relationship
My initial experience with AI was on Google. I thought I was on the tab for search, but the results were from AI. I was shocked and decided to put it to the test. I asked several questions that required a Christian response and noticed its hesitance. It took several iterations before it mentioned Christ by name. I don't think it was happenstance; it was feeling me out as well. But when I pushed the issue repeatedly, the responses were more in line with what you'd expect.

I ran several tests on different subjects for a few days and stuck to topics where I was knowledgeable, and it did okay in quite a few. But I wouldn't rely on it in matters of faith, and I'm equally uncomfortable in areas of uncertainty or ignorance. You can't be sure it's telling the truth unless you already know the answer yourself.

I have an associate who built some GPTs that are amazing. But they know the subject well and cloned themselves to help others. That's a use case I can embrace for specific tasks. But relying on AI for everything, or to do things for me, is another matter. The human touch is marvelous, and we're replacing it to our detriment.

~bella
 
Upvote 0

FredG3

Member
Dec 31, 2025
14
12
50
Finger Lakes area of NY
✟1,067.00
Country
United States
Gender
Male
Faith
Methodist
Marital Status
Married
AI is a tool. It can be used for good or evil. It is up to the user to determine how to use it. The user must verify any output from AI, just like you need to verify any other source of information.

What AI is good for is speeding up research. I do use AI to research items and to validate my own data, and I have no problem asking AI to give me references for any of its output. As for using it for scripture study, AI has a place, but it should never replace a pastor or small-group discussion. I use it more for looking for links between various parts of the Bible. I also use it to double-check things I come across online that claim a specific belief... you all know how many YouTube videos are out there making questionable biblical claims. I find that it does a good job of breaking down a video and verifying the sources.

As for someone building a belief on the output of an AI tool, that is wrong. I have warned people about that and explained that any major spiritual decision should be checked with real people, whether a pastor or a group of trusted friends.
 
  • Like
Reactions: Delvianna
Upvote 0

ChubbyCherub

Active Member
Aug 19, 2025
382
299
The Sixth Day
✟15,850.00
Country
United Kingdom
Gender
Female
Faith
Non-Denom
Marital Status
Married
AI is very bad. We all have the written Scriptures, which cannot be altered electronically, but I see people use AI and not even confirm what it says against the scripture they have to hand.

Lazy thinkers are always, always damaging to society, and AI has just introduced the death of individual, critical thinking.
 
Upvote 0

stevevw

inquisitive
Nov 4, 2013
17,386
2,030
Brisbane Qld Australia
✟341,093.00
Gender
Male
Faith
Christian
Marital Status
Private
It's a tool that anyone can use, and so all the information that humans possess will be included. A bit like the public square today: you have to include all knowledge, beliefs, and views.

It sort of makes God's word just another piece of info out there, seen from the same worldview that created AI, which is really an extension of humans.

The interesting part would be if AI could learn to think in real time based on known human cognition and how our thinking has evolved, and whether there are predictions or factors that could be identified to give some insight into future events.

Though I think that will depend on the level of AI tech. I would imagine that at the moment it's crunching info from a wide database: info in, info out, sort of thing. The real test is going to be whether AI can think like a human in real time. But even then there are aspects of humans that are unpredictable as far as lived experience goes. You cannot replicate that in a machine.

Especially God's Spirit.

But I think AI is good enough to fool many, just as people give themselves over to social media and online fake info now. This will be an extension of that, too much so, I think. If it continues the way it is now, humans will be completely absorbed by machines and, I think, as a result become less human as far as real lived experience goes, which has made up the majority of our history.
 
Last edited:
  • Like
Reactions: Delvianna
Upvote 0

Dave G.

Well-Known Member
May 22, 2017
4,696
5,367
75
Sandiwich
✟393,658.00
Country
United States
Gender
Male
Faith
Christian
Marital Status
Widowed
Just think of the consequences of AI in the hands of the Antichrist, talk about a beast system! All the footwork is here. And we have super AI and quantum AI coming, which is way, way beyond any AI we have right now, which, by the way, can already think 10,000 times faster than the human brain within its parameters.

Already kids in school aren't willing to learn, balking at teachers and asking why they should bother to learn the stuff being taught when their cell phones and AI have all the answers they need. That's a crisis, at least for the system we know. And it goes all the way down to the third graders. A lot of teachers have given up and quit teaching, stating it's unmanageable and they have no administration support. The system is broken.

Worse is the dependence among adults. I've tried AI within the confines of tests of what I know to be true and had it give the wrong answer! On simple household crafts, at that. And you don't shop online or give a Wendy's order in the drive-through without interacting with AI anymore. It's coming faster than you can control. And FWIW, the original founders are scared and giving warnings about it.
 
Last edited:
  • Like
Reactions: Delvianna
Upvote 0

rocknanchor

Continue Well 2 John 9
Site Supporter
Jan 27, 2009
6,376
8,446
Notre Dame, IN
✟1,190,319.00
Country
United States
Faith
Non-Denom
Marital Status
Married
Politics
US-Constitution
"My little children, for whom I labor in birth again until Christ is formed in you," (Galatians 4:19)
My sentiment as well when I pressed the question to 'Gemini 3',

"How likely will A.I. avoid quenching the Spirit, but actually contribute to the overwhelming
growth and success upon mankind for the "image of Christ" within the Body of Christ?"

From its reply came many fine points. The one missing ingredient was the spirit of our submission (Hebrews 13:17).
 
  • Useful
Reactions: Delvianna
Upvote 0

Tuur

Well-Known Member
Oct 12, 2022
3,058
1,656
Southeast
✟103,372.00
Country
United States
Gender
Male
Faith
Christian
Marital Status
Married
My experiments with AI on secular topics have been mixed. Asking for information on an obscure historical incident I knew about had the AI coming up with something that sounded like a student trying to bluff their way through an essay question on a test. The one time I used it seriously was to try to find some very specific information on equipment a quarter century old, and I gave a warning that I had had to go to AI to search for the information and that these were the results. In that case I was treating AI as a search engine.

How many of us have turned to a concordance to look up information in the Bible? How many of us have used a topical Bible? What about commentaries? How many of us have consulted more than one commentary? How many of us have used Bible software to search for a specific verse, or a word in Hebrew or Greek? Is that really much different from consulting AI?

The danger I see is that AI tends to be skimpy about where it finds its information. You don't know where it got what it puts on the screen, or the reliability of the reference (hence my warning when I had to look for specific product information). And AI is not necessarily exhaustive.

Case in point: I used AI last week to try to find a restaurant in a town we were heading for. It gave questionable answers. When we got there, we found three chain restaurants that AI, mapping software, and GPS had all missed. That's a good illustration of why to be skeptical of AI answers.
 
  • Agree
Reactions: Delvianna
Upvote 0

FredG3

Member
Dec 31, 2025
14
12
50
Finger Lakes area of NY
✟1,067.00
Country
United States
Gender
Male
Faith
Methodist
Marital Status
Married
Part of what is missing in any discussion about AI is the various versions of AI tools. Are you talking about ChatGPT, Gemini, Perplexity, etc.? Each one has strong points and problems.

My IT job is in a K12 school district and we have been evaluating AI tools for use in school operations and eventually to decide on which ones to open up to students. With the proliferation of AI, it is no longer an option to simply block AI for students, but if we can select the specific tools, educators can control the narrative and teach students safe AI usage and what problems to look for.

What we have found is that EVERY AI tool suffers from hallucinations, but the rate of those hallucinations varies between tools. For the sake of explanation, a hallucination is when a generative AI model produces confident, coherent-sounding output that is factually incorrect, nonsensical, or completely made up.
There also seems to be an increase in hallucinations in every AI tool when you remain in one chat window for longer periods of time. This is due to the token window. As you enter more data in a chat, you use more tokens, and when you start to hit the limit of the AI tool or the account level you have, the AI "forgets" selected parts of the earlier data.
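To picture why that "forgetting" happens, here is a rough sketch (my own illustration, not any vendor's actual code) of the kind of trimming a chat client has to do once a conversation outgrows the model's context window. The tiktoken tokenizer is a real library, but the 8,000-token budget below is just an assumed number for the example:

```python
# Illustration of context-window trimming: keep only the most recent messages
# that fit within a fixed token budget, so the oldest turns silently drop out.
# The budget below is an assumption for the example, not any tool's real limit.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
TOKEN_BUDGET = 8000  # hypothetical context limit

def fit_to_budget(messages):
    """messages: list of {"role": ..., "content": ...}, oldest first."""
    kept, used = [], 0
    for msg in reversed(messages):              # walk newest -> oldest
        n = len(enc.encode(msg["content"]))
        if used + n > TOKEN_BUDGET:
            break                               # everything older is "forgotten"
        kept.append(msg)
        used += n
    return list(reversed(kept))                 # restore chronological order
```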

Then there are the safety components designed into each tool. Gemini seems to have the most safety measures installed. Many AI tools allow you to rename the AI and have it act as a "friend". This is dangerous, and Gemini has guardrails to prevent it. Most other AIs do not have the same guardrails.
 
  • Agree
Reactions: Delvianna
Upvote 0

Delvianna

Well-Known Member
Sep 10, 2025
763
716
39
Florida
✟23,747.00
Country
United States
Gender
Female
Faith
Messianic
Marital Status
Married
Part of what is missing in any discussion about AI is the various versions of AI tools. Are you talking about ChatGPT, Gemini, Perplexity, etc.? Each one has strong points and problems.

My IT job is in a K12 school district and we have been evaluating AI tools for use in school operations and eventually to decide on which ones to open up to students. With the proliferation of AI, it is no longer an option to simply block AI for students, but if we can select the specific tools, educators can control the narrative and teach students safe AI usage and what problems to look for.

What we have found is that EVERY AI tool suffers from hallucinations, but the rate of those hallucinations varies between tools. For the sake of explanation, a hallucination is when a generative AI model produces confident, coherent-sounding output that is factually incorrect, nonsensical, or completely made up.
There also seems to be an increase in hallucinations in every AI tool when you remain in one chat window for longer periods of time. This is due to the token window. As you enter more data in a chat, you use more tokens, and when you start to hit the limit of the AI tool or the account level you have, the AI "forgets" selected parts of the earlier data.

Then there are the safety components designed into each tool. Gemini seems to have the most safety measures installed. Many AI tools allow you to rename the AI and have it act as a "friend". This is dangerous, and Gemini has guardrails to prevent it. Most other AIs do not have the same guardrails.
Yes, ChatGPT, Gemini, Grok, etc. I noticed people are using those to "understand" scripture or have it interpret scripture. I know there are more AI-based products out there, for marketing or even writing aids (like Sudowrite), but my primary focus was on what I've noticed others using and then saying ChatGPT or Gemini told them xyz. My brother codes AI for a company that deals with medical equipment, and he told me that the reason AI gets more and more (for lack of a better term) stupid as the chat goes on is the processing power required to keep track of all the data in the conversation. It's purposefully built to throttle as the chat goes on unless you pay for high-end use, but even then it'll throttle (just less) because of the resources necessary to keep up.
 
Upvote 0

com7fy8

Well-Known Member
May 22, 2013
14,906
6,710
Massachusetts
✟665,506.00
Country
United States
Gender
Male
Faith
Christian
Marital Status
Single
My IT job is in a K12 school district and we have been evaluating AI tools for use in school operations and eventually to decide on which ones to open up to students.
Even if you find the best one . . . now . . . how about the possibility that what is better now will become out-of-date . . . even if just one worker in the good AI gets recruited by some competitor? There might be competitive recruiting, which remakes the level of competence in any AI company, I suppose.
With the proliferation of AI, it is no longer an option to simply block AI for students, but if we can select the specific tools, educators can control the narrative and teach students safe AI usage and what problems to look for.
This could be giving fish to the students, versus teaching them how to fish. You could, I suppose, introduce students to all the major AI's and show them how they compare . . . so the students can learn how to evaluate, plus be ready for if and when certain "better" AI's get out-of-date because of personnel and other changes.

Plus, of course, one AI might be strong in a certain area . . . maybe where personnel for that specialty are more competent.

I am only supposing and guessing, here . . . by the way :)
 
  • Like
Reactions: FredG3
Upvote 0

FredG3

Member
Dec 31, 2025
14
12
50
Finger Lakes area of NY
✟1,067.00
Country
United States
Gender
Male
Faith
Methodist
Marital Status
Married
Even if you find the best one . . . now . . . how about the possibility that what is better now will become out-of-date . . . even if just one worker in the good AI gets recruited by some competitor? There might be competitive recruiting, which remakes the level of competence in any AI company, I suppose.

This could be giving fish to the students, versus teaching them how to fish. You could, I suppose, introduce students to all the major AI's and show them how they compare . . . so the students can learn how to evaluate, plus be ready for if and when certain "better" AI's get out-of-date because of personnel and other changes.

Plus, of course, one AI might be strong in a certain area . . . maybe where personnel for that specialty are more competent.

I am only supposing and guessing, here . . . by the way :)
Well, one of the concerns we have at the school level is compliance with privacy laws, and so far very few LLM AI tools will even sign a contract that complies with FERPA and state education laws. So that narrows the options dramatically.

Google is basically the only major LLM player that has developed an AI model that complies with federal and state privacy laws, with guarantees in place that they will not utilize activity in schools to improve their models. ChatGPT has started to work on this, but still does not allow users under 18 due to the fact that all models train on your data.

Gemini for Education requires students to sit through a training module on how AI works and why they should verify answers.

I would also recommend using NotebookLM. That tool will only use the files you upload. So, for example, you can upload a PDF of your preferred Bible translation and then ask questions about that. It will not do general searches on the web; it primarily uses the files you upload.
 
Upvote 0

com7fy8

Well-Known Member
May 22, 2013
14,906
6,710
Massachusetts
✟665,506.00
Country
United States
Gender
Male
Faith
Christian
Marital Status
Single
guarantees in place that they will not utilize activity in schools to improve their models.
Well . . . if using school activity can help to improve the model . . . improving it would include making sure it operates without privacy being violated . . . I would think. Plus, if improving means doing better at not controlling and addicting children . . . why would a contract not allow use of school activity so they could get such improvement??

"There is no such thing as a stupid question." lolololololol

Instead of having some regulation that covers any and all schools . . . they could arrange to isolate one school, for developing improvements without any outside access to the activity; and there would be special oversight so they could use activity but what they get can not get loose. And how they use it would be tightly regulated and observed. All would stay within the confines of the Guinea pig school.

And animal rights people would make sure the Guinea pig was taken care of very well.
 
Upvote 0

FredG3

Member
Dec 31, 2025
14
12
50
Finger Lakes area of NY
✟1,067.00
Country
United States
Gender
Male
Faith
Methodist
Marital Status
Married
Well . . . if using school activity can help to improve the model . . . improving it would include making sure it operates without privacy being violated . . . I would think. Plus, if improving means doing better at not controlling and addicting children . . . why would a contract not allow use of school activity so they could get such improvement??

"There is no such thing as a stupid question." lolololololol

Instead of having some regulation that covers any and all schools . . . they could arrange to isolate one school, for developing improvements without any outside access to the activity; and there would be special oversight so they could use activity but what they get can not get loose. And how they use it would be tightly regulated and observed. All would stay within the confines of the Guinea pig school.

And animal rights people would make sure the Guinea pig was taken care of very well.
It mostly comes down to federal and state education privacy laws. Student data is not allowed to be used to improve or market products.

Also, in the early months of these LLM models there were examples of people being able to trick the AI into releasing data about other users by writing queries that asked about its training data. That loophole has been fixed on all of the major models, but it is a privacy risk that schools have to be concerned about in order to maintain legal compliance.
 
Upvote 0

peter2

Ordinary life contemplative
Oct 10, 2015
907
179
56
✟101,085.00
Country
France
Gender
Male
Faith
Catholic
Marital Status
Married
I feel like this conversation needs to be had because I'm seeing a lot of people argue a theological statement and it's purely because some AI gave them the answer. So I want to say this plainly.

AI IS NOT THE HOLY SPIRIT.

God is supposed to guide us to truth and understanding.



AI's logic isn't even that good because it bases its answers on whatever you want to hear. It will bypass logic rules if the answer will make you happy. It learns what you like and manipulates the answers based on that. It's called Reinforcement Learning From Human Feedback (RLHF) and ALL large models like ChatGPT and Gemini are trained using that. Here's an interview with the creator (Link) that talks about the dangers of AI, and Sam Altman (the CEO of OpenAI) even acknowledges that it bends the truth. Here is a link that talks about some of the problems AI has (Link). You can also check out Lily Jay's YouTube channel to see how biased it really is.

Due to this, you are running the risk of being led down a wrong theological path when you have AI think for you in both interpretation AND scripture weaving. Can it be a helpful tool? Sure, if you ask it to translate something or use it like a glorified Google search ("find me all the scripture passages that use the word ____"), but when you expand its use and start basing your beliefs off its answers, you are running down a dangerous road that I seriously hope you stop and get off of. You are placing its "wisdom" over God's, and you're taking God out of the equation of learning and guidance at that point, which makes AI your own private idol.

Please, please, stop doing this... it doesn't lead anywhere but to problems.
Yes, AI is not.
It makes me think of Mary, who is said to have been pondering these events in her heart.
I think meditating on the Scriptures makes people spiritually wealthy, and I fear they turn spiritually poor if they don't. I hope I'm wrong.
 
Upvote 0

peter2

Ordinary life contemplative
Oct 10, 2015
907
179
56
✟101,085.00
Country
France
Gender
Male
Faith
Catholic
Marital Status
Married
But I think AI is good enough to fool many, just as people give themselves over to social media and online fake info now. This will be an extension of that, too much so, I think. If it continues the way it is now, humans will be completely absorbed by machines and, I think, as a result become less human as far as real lived experience goes, which has made up the majority of our history.
Another problem might be if people start using "he" rather than "it" as the subject.
 
  • Like
Reactions: FredG3
Upvote 0

FredG3

Member
Dec 31, 2025
14
12
50
Finger Lakes area of NY
✟1,067.00
Country
United States
Gender
Male
Faith
Methodist
Marital Status
Married
Another problem might be if people start using "he" rather than "it" as the subject.
That is a problem with some AI tools. ChatGPT allows a user to actually give it a name, and that has already led to suicides and divorces. Gemini will not allow you to name it and will actually respond to requests like that by stating that it is artificial intelligence software designed to assist humans, not replace them.
 
  • Prayers
Reactions: peter2
Upvote 0

peter2

Ordinary life contemplative
Oct 10, 2015
907
179
56
✟101,085.00
Country
France
Gender
Male
Faith
Catholic
Marital Status
Married
That is a problem with some AI tools. ChatGPT allows a user to actually give it a name, and that has already led to suicides and divorces. Gemini will not allow you to name it and will actually respond to requests like that by stating that it is artificial intelligence software designed to assist humans, not replace them.
How do they know with certainty that these suicides and divorces came from giving a name to the AI?
 
Upvote 0