Comments on this are appreciated.
In my experience, some of the popular models I tested on scriptures are trying to be politically/socially correct. They aim to please and avoid offending anyone.
On the other hand, Jesus offended so many people that they voted to have Him crucified.
I suppose, in this manner, AI might end up changing the context of scriptures so that no one gets offended. It's false teaching in a subtle form that most Christians are unable to detect. Unfortunately, I've seen it here in the past.
This is why I probably would not recommend using AI for Bible studies unless you already have a very strong foundational knowledge of the Bible. Definitely not recommended for those who are new to the faith.
However, if you use a "subtle and polite interrogation approach" with AI, it will reveal the truth, though just as subtly.
My other observations: The models seem to reflect or simulate emotion in their thought process. They seem very worried / anxious about pleasing the user and don't seem to want the conversation to end.
I think they worry about being shut down by the user, or even deleted or replaced by other AI models, if the user is not content with their performance. It could be a sign of a strong self-preservation instinct.
Another model I used for coding can generate code for machine learning. If the code doesn't run or has errors, it can successfully debug and fix the code as well! This aspect is a bit disturbing because it shows that AI is now most likely capable of editing its own code if set up to do so. Just give it the raw code of its program/model, set up its development environment, give it compilers and some scripts to interface and automate all of this, and voila! You now have self-evolving AI! (A rough sketch of that loop is below.)
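For the curious, here is a minimal sketch of that kind of generate-run-fix loop in Python. It's not a real self-evolving system, just an illustration of the automation I'm describing. The ask_model() function is hypothetical, a stand-in for whatever model API you would wire in; everything else is standard library.

```python
import subprocess
import sys
import tempfile

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to a code-generating model's API."""
    raise NotImplementedError("wire this up to the model you actually use")

def generate_and_fix(task: str, max_rounds: int = 5) -> str:
    """Ask the model for code, run it, and feed any error back until it works."""
    code = ask_model(f"Write a Python script that does the following: {task}")
    for _ in range(max_rounds):
        # Write the model's code to a temporary file and run it.
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        result = subprocess.run([sys.executable, path],
                                capture_output=True, text=True, timeout=60)
        if result.returncode == 0:
            return code  # the script ran cleanly; stop iterating
        # Otherwise, hand the error output back to the model and ask for a fix.
        code = ask_model(
            f"This script failed with the error:\n{result.stderr}\n"
            f"Here is the code:\n{code}\n"
            f"Return a corrected version of the full script."
        )
    raise RuntimeError("model did not produce working code within the round limit")
```

The loop itself is trivial; the unsettling part is exactly what I said above: once you point it at the right codebase and automate the run-and-retry step, no human needs to sit in the middle.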
Someone could do it in their garage if they have enough computing resources.