Wouldn't such a 'truth assessment' be relativistic, i.e. based upon past experiences, and therefore prone to biases just as we are? Wouldn't a "true" truth assessment tend toward solipsism?
In what ways do AI truth assessments differ from human truth assessments?
They differ only in the means by which they are loaded into the system.*
As for your qualitative views of truth assessments and their validity I’m not able to respond to that directly.
What I can say, starting from the premise that AI, when used correctly and for morally correct purposes, is no more a threat to our intellectual development than the steamship was to the advancement of navigation, is this: in the abstract, such discernment is an important part of intelligent human behavior, of critical thinking, and of higher reasoning skills in general. And it might surprise you to know that Christians actually do engage in this, as well as in other advanced intellectual behavior. That engagement has given us the scope of thought necessary to act, together with members of other religions, as a leading intellectual force for the past 2,000 years, and for our clergy to do things like conceptualize the idea of a black hole in the 18th century, in the case of an Anglican priest, or develop the idea of the Big Bang, in the case of a Roman Catholic priest.
And I would also offer that some people, both among atheists and among the small minority of anti-intellectual Christians, seem uncomfortable with the idea of Christians as intellectual forces to be reckoned with, and embrace the idea of Christianity as incompatible with reason. This despite the fact that we believe God said to us “Come, let us reason together” and became incarnate in the person of the Logos, whose name is literally the root of the word Logic. Indeed, St. Epiphanios of Cyprus rather mischievously referred to a sect that rejected the Gospel According to John as the “Alogoi,” which means “unreasonable people.”
Yet apparently these core facts about our religion, and the great minds it has produced, such as St. Basil the Great, theologian and founder of the first recognizable hospital in the Greco-Roman oikumene, or St. Athanasius the Great, or J.S. Bach, or Søren Kierkegaard, or C.S. Lewis, or William F. Buckley Jr., are of no consequence, since they disrupt the narrative that anti-intellectualism is prevalent among all Christians, who must be regarded as backwards on an ontological level in order to soothe the discomfort of those who would deny us our intellectual patrimony.
To which I would point out, to those of other belief systems, that such a worldview of Christians or of theists more broadly is intolerant, prejudicial and unwarranted.
I would likewise point out, to those Christians who seem to reject the idea of a Christian intellectual, that Christ our True God is the definition and incarnation of Truth, Logic and Reason (St. Basil the Great said, “I want creation to penetrate you with so much admiration that wherever you go, the least plant may bring you clear remembrance of the Creator”), and that we are therefore required to engage intellectually with God, Eucharistically, in the manner of seeking perfection, in accordance with God’s command that we be perfect even as our Father in Heaven is perfect. Thus, if by anti-intellectualism we mean an actual neglect of the development of our rational faculties, we would be guilty of hamartia, of missing the mark, and more specifically of sloth. And of course I confess I am the worst of sinners; in asserting this I am not seeking to suggest that one should look to me as a source of virtue, but rather that one should look to those I have referred to, as well as to intellectual members of the forum who are actually pious and worthy of admiration, such as
@ViaCrucis or
@Xeno.of.athens or
@RileyG or my fellow Orthodox Christian
@prodromos, among many other members whose souls are profoundly beautiful in contrast to my own.
*Specifically, in how they are stored in the model: either as training data or, in the case of an already trained model, as user preference information. In the better AIs, like ChatGPT, such information can largely reside in the conversational context. If one wants to be ugly and wasteful, one can instead use scarce global memory to store it, but that is an inelegant way to do it, and custom GPTs cannot read global memory anyway. So for a custom GPT, the truth assessment model would have to be stored as a knowledge file, and there would probably need to be either an explicit loading instruction in the custom GPT’s instruction set or a programmatic invocation on custom GPT initialization.
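For what it’s worth, the “load it into the conversational context” approach the footnote describes can be sketched in a few lines of Python. This is only an illustration: the function name, the criteria, and the file contents are hypothetical, not part of any actual custom GPT; the only assumption is the common chat-message format (a list of role/content dicts) that chat-style APIs accept.

```python
import json

def build_context(knowledge_text: str, user_message: str) -> list[dict]:
    """Prepend a (hypothetical) truth-assessment knowledge file to the
    conversation as a system message, so the model sees it every turn
    without touching global memory."""
    return [
        {"role": "system",
         "content": "Apply the following truth-assessment criteria:\n"
                    + knowledge_text},
        {"role": "user", "content": user_message},
    ]

# Toy knowledge file with made-up criteria, standing in for the
# file a custom GPT would be told to load on initialization.
criteria = json.dumps({"prefer_primary_sources": True,
                       "flag_hearsay": True})
messages = build_context(criteria, "Assess this claim: ...")
```

The resulting `messages` list would then be passed to the chat API on each call; the design point is simply that context-loading is stateless and repeatable, whereas global memory is scarce and, for custom GPTs, unreadable.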