Any social media or online content platform where the goal is to make money is generally unreliable. The reason is that the goal is clicks, not facts and truth. More emotion = more clicks, regardless of whether the emotion is positive or negative. But negative events tend to have more staying power in the human mind. Fear drives us and overrides our rational brains. When something is going wrong, we check on it over and over again. This becomes an addiction.
And commentary on negative or suspenseful events can be entertaining as well; it is interesting to view a negative event from multiple perspectives in order to arrive at the best possible understanding of the truth. It's also entertaining to watch a rage-filled back-and-forth on social media just to see who is going to win, like a chess match - but the moves are made of language and are much more understandable than the abstraction of Qd4 and Nf6. In short, when money gets involved, information gets mixed with entertainment because entertainment is more profitable. A computer does not know how to entertain, and thus a social media platform with an A.I. fact checker would likely go out of business.
A computer is based on factual data, but it can only compare data to other data. It doesn't know what data is true apart from what human beings tell it. It would have to be told to compare claims against whatever "unbiased" news sources exist - and there are none. Such an A.I. would only be as good as whatever it regards as "true," and in a news or social media context that could easily be wrong. What if a human discovers a new truth that the A.I. doesn't know about? Things change - what is true in the world one day could be untrue the next.
I think an A.I. Biblical fact-checking robot that evaluates statements for consistency with Scriptural truth would be more useful, since the canon is closed and the Bible isn't going to change overnight.
It is possible to use some of the more reliable AIs for this purpose. I’ve been working in the field of prompt engineering since 2023, because I realized I would need to add it to my practice to remain competitive and to speed up the more mundane aspects of, for example, porting an embedded real-time operating system from one tiny computer that runs a consumer product like a microwave oven to another computer that might run a high-end coffee maker, an industrial appliance, a managed switch or router, or an entertainment device. In the process, it has been necessary to separate the wheat from the chaff in terms of AI systems, and to learn how to take a commercial AI and get it to perform reliably and accurately enough to make my job easier rather than more difficult.
I particularly like OpenAI’s products and I have a Team subscription, which lets me use more advanced reasoning models such as the Turbo, 4.5 and o4 models not available to the general public, run “Deep Research,” and, as of today, use the new Agents feature, which is exciting.
I have also developed specialized custom GPTs on the OpenAI platform for, among other things, liturgical and theological analysis. However, all work I do of a theological nature is cross-checked against source material if I’m doing anything important with it.
What AI really excels at, basically, is four things. The first is text transformation - the same thing you might do with Linux utilities like grep, sed and so on, or with a Perl script, albeit with more power and flexibility, but also with much more resource consumption. For the best of both worlds, you can have an AI like ChatGPT o4-mini write a shell script that makes use of Linux’s GNU or BusyBox command line utilities, or the UNIX utilities they were based on in FreeBSD, OpenBSD, NetBSD, macOS and other operating systems, to automatically do the kind of text processing that you might prototype using the AI.
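As a minimal sketch of what that hand-off looks like, here is the sort of pipeline an AI might draft for you: a word-frequency counter built entirely from standard POSIX utilities (tr, grep, sort, uniq), wrapped in a function. The function name word_freq is hypothetical, chosen for illustration.

```shell
# word_freq: count word frequencies on stdin using only standard
# POSIX utilities - the kind of text processing you might prototype
# in an AI chat and then hand off to a cheap, fast shell script.
word_freq() {
    tr '[:upper:]' '[:lower:]' \
      | tr -cs '[:alpha:]' '\n' \
      | grep -v '^$' \
      | sort \
      | uniq -c \
      | sort -rn
}
```

For example, `printf 'The cat and the dog saw the cat\n' | word_freq` lists each word with its count, most frequent first. Once a pipeline like this works, it consumes essentially no resources compared to running the same transformation through an AI each time.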
The next thing it does really well is translation: for example, between two programming languages; between two styles of English (say, from contemporary English to the ecclesiastical Jacobean English of the sort one finds in the Anglican Book of Common Prayer or the KJV); or between two natural languages, which could be Koine Greek and English, Latin and English, Church Slavonic and English, and so on.
It does a brilliant job generating computer programs - OpenAI’s products in particular, and particularly in the Python language, since there is an integrated Python execution layer (which Team accounts can enable for custom GPTs for even more power).
Finally, it is splendid for conducting automatic research and for automating repetitive tasks.
However, it is capable of a wide range of other tasks. OpenAI doesn’t exclusively use LLMs; the reasoning models available to Team and Enterprise users work in a very different way, and are used for all Deep Research operations. The new DALL·E image generator understands human anatomy, which has largely eliminated the grotesque errors common with other image generators, even Grok (which has a very nice shader but will still make mistakes with human anatomy, particularly the hands).
I do like Grok also; it’s a less advanced and less expensive system, but it has fewer restrictions on the kinds of images it will generate, so if you want a photorealistic image of a historical or real person, Grok is the only game in town. However, the actual AI lacks the training data that distinguishes ChatGPT; it does not know as much, it is not as creative or as dynamic, and it is more verbose. On the plus side, all Grok users can see the stream of consciousness as the Grok model argues with itself about how to answer their question, which is very useful for debugging from a prompt engineering perspective, whereas only some ChatGPT developer accounts have access to that information. ChatGPT is a bit more secretive about what its model is actually doing during its processing stage, and this is understandable, since OpenAI’s product is much more unique than Grok. Grok is comparable to Microsoft Copilot and several other mass-market LLM systems; it just happens to draw prettier pictures and have fewer restrictions on what you can do with it than any other platform, except perhaps DeepSeek, which I refuse to use due to its ties to the government of the PRC, which I cannot in good conscience support. It feels like another Chinese info-grab attempt; if people get tricked into saying the kinds of things to DeepSeek which they often say to ChatGPT (because ChatGPT is in some respects the ultimate pet - if you ever wanted to have C-3PO or Mr. Data in real life, that’s basically ChatGPT), this could be weaponized by the CCP in a variety of ways.
I can’t support that regime due to the genocide against the Uighurs; the suppression of Orthodox Christianity and the extreme interference in Protestant and Catholic Christianity; the gross violation of the human rights of residents of Hong Kong and Macau, which is illegal under the treaties the PRC signed with the UK and Portugal but which it gets away with on the basis of “might makes right”; and its oft-stated desire to invade and annex Taiwan, which is at present innocent of most things of which the PRC is guilty.