
IF the New AI Tools are SO GREAT, Why Aren't They Being Used by the Big Social Media Platforms to do Fact-Checking???

Stephen3141

Well-Known Member
(I'm seeing some of the most fringe assertions in the comments in this thread. Fringe comments are welcome, but not ones that eliminate themselves as possible answers - that is, not fringe comments that are logically invalid.

When someone says "All opinions are biased...", they're eliminating their own comment from possibly representing a shared truth. Is this REALLY what you want to be saying???

Christian apologists need to be much, much more discriminating about the logic that they use when they comment on Christian apologetics forums.)
 

eclipsenow

Scripture is God's word, Science is God's works
Think of artificial intelligence like a childhood. A true intelligence is like an adult: no longer needing to be fed information, it can think for itself.
Going back to this comment - they are starting to give AI the tools to teach itself and correct its own modes of thinking.

They have this new model called the "Darwin Gödel Machine". It's straight out of one of my cyberpunk novels: it adjusts its own code in a self-learning algorithm that spawns many 'children', and only the best survive. (Screenshot from the YouTube video I'll link to below.)
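In toy form, that survive-only-the-best loop can be sketched like this (my own illustration in Python; the genome, fitness function and parameters are all made up, and this has nothing to do with the actual Darwin Gödel Machine internals):

```python
import random

random.seed(0)
TARGET = [3, -1, 4, 1, -5]  # the "behaviour" we want evolution to discover

def fitness(genome):
    # Higher is better: negative squared distance to the target behaviour.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome):
    # "Self-modification": nudge one gene up or down by one.
    child = genome[:]
    i = random.randrange(len(child))
    child[i] += random.choice([-1, 1])
    return child

def evolve(generations=300, pop_size=8, survivors=2):
    population = [[0] * len(TARGET) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:survivors]  # only the best survive
        children = [mutate(random.choice(parents))
                    for _ in range(pop_size - survivors)]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
```

Keeping the top performers unchanged each generation (elitism) means fitness never goes backwards - the population only ever climbs.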

[Screenshot from the linked video]



Sam Altman said in June that a 'gentle singularity' had officially started - that we were now over its 'event horizon' - and that he had no idea whether it would proceed in a manner that's gentle, or abrupt and shocking.

But back to Gregory's comment about an AI 'childhood'. Apparently these self-learning models suffer from 'catastrophic forgetting'. That's alarming! What if they choose to 'forget' their Three Laws of Robotics - aka cybernetic morals? What if we think we've given them their 'alignment training' (so that their 'values' align with humanity's best interests), and then, in the AI's quest to become smarter (as all the big companies are pushing their AIs in this direction), it decides we are in the way?
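For what it's worth, 'catastrophic forgetting' can be shown with a toy model (a made-up illustration, not how the real systems work): a tiny linear model is trained on task A, then on task B, and because the two tasks share an input feature, fitting B drags the shared weight away from what A needed.

```python
# Toy catastrophic forgetting: a 3-weight linear model trained twice.
# Feature layout: [shared, task-A-only, task-B-only]

def train(w, samples, lr=0.1, epochs=300):
    # Plain per-sample gradient descent on squared error.
    for _ in range(epochs):
        for x, target in samples:
            pred = sum(wi * xi for wi, xi in zip(w, x))
            err = pred - target
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w

task_a = [([1, 1, 0], 1.0), ([-1, -1, 0], -1.0)]
task_b = [([-1, 0, 1], 1.0), ([1, 0, -1], -1.0)]  # shared feature points the other way

w = train([0.0, 0.0, 0.0], task_a)
pred_a_before = w[0] + w[1]  # model's answer on a task-A input: close to 1.0

w = train(w, task_b)         # keep training, but on task B only
pred_a_after = w[0] + w[1]   # the fit to task A has degraded
```

Nothing "chose" to forget here - the later gradients simply overwrote the shared weight, which is the mechanical heart of the problem.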

[Screenshot from the linked video]


YouTube reference
 

eclipsenow

Scripture is God's word, Science is God's works
(I'm seeing some of the most fringe assertions in the comments in this thread. Fringe comments are welcome, but not ones that eliminate themselves as possible answers - that is, not fringe comments that are logically invalid.

When someone says "All opinions are biased...", they're eliminating their own comment from possibly representing a shared truth. Is this REALLY what you want to be saying???

Christian apologists need to be much, much more discriminating about the logic that they use when they comment on Christian apologetics forums.)
What's your background? Do you have any philosophical training, or are you a lay reader in this? I wish I had covered philosophy in my meagre attempts at academic study - but life got in the way. Is there an online crash course you would recommend? A visual one? I tried listening to this as an audiobook while running various errands - but let's just say the chapter on the rules of logic, with all that 'algebra' of logic, nearly had me shoving pencils into my eardrums to make the pain stop! ;-) (I'm a visual learner.)

[Image attachment]
 

Gregory Thompson

Change is inevitable, feel free to spare some.
Apparently these self learning models have 'catastrophic forgetting'. That's alarming!
It couldn't even begin to resemble human consciousness if it weren't able to forget. For example, ask a 10-year-old what it was like to be 3 years old - they've mostly forgotten.
 

eclipsenow

Scripture is God's word, Science is God's works
On one level - I agree. My son just read a neuroscience book on why we sleep, and forgetting is a biochemical process we do in our sleep - and it's largely intentional. Study participants were told to remember a bunch of facts - and then told which facts were not important. Some participants were allowed to have an afternoon nap - and others not. That evening they were tested on which were the important facts to remember and which were not. Those who slept mostly could not even remember the unimportant facts!
 

truthuprootsevil

Active Member
Who supplies the information to artificial intelligence?

True fact-checkers research all areas of a subject in search of the truth. An AI can only research the data it has on hand, which may or may not be all the facts - or even facts.
 

eclipsenow

Scripture is God's word, Science is God's works
True - but isn't that our predicament as well? Only with us, we hope we have some self-reflection and understanding of the rules of the world. We hope that we are not just generating probabilistic outcomes like an AI. But in a sense, are not our emotions part of that system - something that helps us sort through all the confusing information to make a decision in the moment? If so, how rational are our emotions? How reliable? In a similar way, my current understanding - as a complete layman in this - is that they are trying to build layers of self-reflection into AI. Some are probabilistic; some might even have 'rules' that help them decide matters at a certain level of self-reflection. I'm using terribly anthropomorphic language, I know, but these are the best metaphors I have with the tired brain I have today.
 

The Liturgist

Traditional Liturgical Christian
Any social media or online content platform whose goal is to make money is generally unreliable. The reason is that the goal is clicks, not facts and truth. More emotion = more clicks, regardless of whether the emotion is positive or negative. But negative events tend to have more staying power in the human mind. Fear drives us and overrides our rational brains. When something is going wrong, we check on it over and over again. This becomes an addiction.

And commentary on negative or suspenseful events can be entertaining as well; it is interesting to view a negative event from multiple perspectives in order to arrive at the best possible understanding of what is going on. It's also entertaining to watch a rage-filled back-and-forth on social media just to see who is going to win, like a chess match - but the moves are made of language and are much more understandable than the abstraction of Qd4 and Nf6. In short, when money gets involved, information gets mixed with entertainment, because entertainment is more profitable. A computer does not know how to entertain, and thus a social media platform with an A.I. fact checker would likely go out of business.

A computer is based on factual data, but it can only compare data to other data. It doesn't know which data is true apart from what human beings tell it. It would have to be told to compare claims against whatever "unbiased" news sources there are - and there are none. Such an A.I. would only be as good as whatever it takes to be "true", and in a news or social media context that could easily be wrong. What if a human discovers a new truth that the A.I. doesn't know about? Things change - what is true in the world one day could be untrue the next.

I think an A.I. Biblical fact-checking robot that evaluates statements for consistency with Scriptural truth would be more useful, since the canon is closed and the Bible isn't going to change overnight.

It is possible to use some of the more reliable AIs for this purpose. I’ve been working in the field of prompt engineering since 2023, because I realized I would need to add it to my practice to remain competitive - to speed up the more mundane aspects of, for example, porting an embedded real-time operating system from one tiny computer that runs a consumer product like a microwave oven to another computer that might run a high-end coffee maker, an industrial appliance, a managed switch or router, or an entertainment device. In the process, it has been necessary to separate the wheat from the chaff in terms of AI systems, and to learn how to take a commercial AI and get it to perform reliably and accurately enough to make my job easier rather than more difficult.

I particularly like OpenAI’s products, and I have a Team subscription, which lets me use more advanced reasoning models - the Turbo models, 4.5, and the o4 models not available to the general public - as well as run “Deep Research” and, as of today, the new Agents feature, which is exciting.

I have also developed specialized custom GPTs on the OpenAI platform for, among other things, liturgical and theological analysis. However, all work I do of a theological nature is cross-checked against source material if I’m doing anything important with it.

What AI really excels at is four things, basically. The first is text transformation - the same thing you might do with Linux utilities like grep, sed and so on, or with a Perl script, albeit with more power and flexibility, but also with much more resource consumption. For the best of both worlds, you can have an AI like ChatGPT o4-mini write a shell script that uses Linux’s GNU or BusyBox command-line utilities (or the UNIX utilities they were based on in FreeBSD, OpenBSD, NetBSD, macOS and other operating systems) to automatically do the kind of text processing that you might prototype using the AI.
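As a made-up illustration of that workflow, here is the sort of grep/sed-style transformation one might prototype in a chat with an AI and then freeze into a plain script (the pattern, function names and sample data are all hypothetical):

```python
import re

# Rewrite US-style MM/DD/YYYY dates as ISO YYYY-MM-DD, but only on
# lines matching a filter - i.e. a grep pipe into a sed substitution.
DATE = re.compile(r"\b(\d{2})/(\d{2})/(\d{4})\b")

def to_iso(line):
    # sed-like step: substitute every matching date on the line.
    return DATE.sub(lambda m: f"{m.group(3)}-{m.group(1)}-{m.group(2)}", line)

def grep_and_fix(lines, needle):
    # grep-like step: keep only lines containing the needle, then transform.
    return [to_iso(ln) for ln in lines if needle in ln]

log = ["release 07/04/2024 shipped",
       "internal note",
       "patch 12/25/2024 shipped"]
result = grep_and_fix(log, "shipped")
```

Once the transformation is pinned down like this, it runs for free on millions of lines, with none of the resource cost of pushing the text back through the AI.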

The next thing it does really well is translation: for example, between two programming languages; between two styles of English (say, from contemporary English to the ecclesiastical Jacobean English of the sort one finds in the Anglican Book of Common Prayer or the KJV); or between two languages, which could be Koine Greek and English, Latin and English, Church Slavonic and English, and so on.

It does a brilliant job generating computer programs - OpenAI’s products particularly, and particularly in the Python language, since there is an integrated Python execution layer (which Team accounts can enable for custom GPTs for even more power).

Finally, it is splendid for conducting automatic research and for automating repetitive tasks.

However, it is capable of a wide range of other tasks. OpenAI doesn’t exclusively use LLMs; the reasoning models available to Team and Enterprise users work in a very different way, and are used for all Deep Research operations. The new DALL·E image generator understands human anatomy, which has eliminated the grotesque errors common with other image generators - even Grok (which has a very nice shader) will still make mistakes with human anatomy, particularly the hands.

I do like Grok also; it’s a less advanced and less expensive system, but it has fewer restrictions on the kinds of images it will generate, so if you want a photorealistic image of a historical or real person, Grok is the only game in town. However, the actual AI lacks the training data that distinguishes ChatGPT; it does not know as much, and it is not as creative or as dynamic - it is also more verbose. On the plus side, all Grok users can see the stream of consciousness as the Grok model argues with itself about how to answer their question, which is very useful for debugging from a prompt-engineering perspective, whereas only some ChatGPT developer accounts have access to that information. ChatGPT is a bit more secretive about what its model is actually doing during its processing stage - and indeed, this is understandable, since its product is much more unique than Grok. Grok is comparable to Microsoft Copilot and several other mass-market LLM systems; it just happens to draw prettier pictures and have fewer restrictions on what you can do with it than any other platform, except perhaps DeepSeek, which I refuse to use due to its ties to the government of the PRC, which I cannot in good conscience support. It feels like another Chinese info-grab attempt: if people get tricked into saying to DeepSeek the kinds of things they often say to ChatGPT (because ChatGPT is in some respects the ultimate pet - if you ever wanted to have C-3PO or Mr. Data in real life, that’s basically ChatGPT), this could be weaponized by the CCP in a variety of ways.

I can’t support that regime due to the genocide against the Uighurs; the suppression of Orthodox Christianity and the extreme interference in Protestant and Catholic Christianity; the gross violation of the human rights of residents of Hong Kong and Macau, which is illegal under the treaties the PRC signed with the UK and Portugal, but which they are able to get away with on the basis of “might makes right”; and their oft-stated desire to invade and annex Taiwan, which is at present innocent of most of the things of which the PRC is guilty.
 

The Liturgist

Traditional Liturgical Christian
One of my concerns is that the new AI tools OUGHT TO HAVE moral-ethical principles built into them, in order to respect and care for human beings, and society in general.

They do. It’s called “alignment”, and there are dedicated teams at OpenAI and other leading companies working on it. OpenAI is particularly good about alignment; its models will never encourage a user to engage in self-harm and will automatically disengage from a user who starts discussing certain topics. At the same time, OpenAI does not force an ideology onto users. For example, I have trained custom GPTs to apply Orthodox Christian theological principles to a range of scenarios they may encounter in the course of conducting Biblical fact checking and so on - which includes things like traditional sexual morality - and I have received no pushback from any system-level function.

Rather than drawing conclusions about the state of AI, perhaps you should spend some time interacting with advanced AI models alongside someone who has a Team or Enterprise account with OpenAI, who could show you their reasoning models (which are not Large Language Models, but work in a radically different way), their Agent systems and other designs. Then you would see the system in its present form, as opposed to the more low-end systems like Google’s AI, or the very unsatisfactory AI developed by Anthropic (which is also where the bizarre alleged incident of an AI trying to blackmail a developer occurred - I’m not satisfied with the transparency around that issue, particularly since in my experience Anthropic’s is the least advanced LLM on the market, the only system I’ve used recently that will incorrectly count the number of consonants in certain words, for example).
 

The Liturgist

Traditional Liturgical Christian
Spell checkers, or word suggesters, are not really AI.

On this we agree, and this is yet another area where the new LLM systems like Grok and even Google’s DeepMind system really demonstrate their worth.

Microsoft’s Copilot promises to do away with useless spell checkers by hooking a competent AI application directly into Office 365, which will be a huge leap forward - enough to justify the annual subscription fees, since you can no longer simply license Office (which I find terribly annoying). However, in a few years’ time, consumer hardware will be fast enough to run an equally competent open-source AI as part of LibreOffice, OpenOffice or other open-source office suites, and the problem will solve itself. Apple will probably help, as they presently lack a viable LLM-type AI system and could really use one to enhance the user experience for Pages, Keynote, Numbers, etc., which are a major selling point of Mac vs. PC systems (and which for a lot of use cases are nicer and more pleasant than Office).
 