The notion that AI is made in the image of God flows out of arrogance.
Unfortunately, your argument contains several red herrings, or perhaps I should say, responses to points neither I nor Daryl made, and that we indeed expressly repudiated.
Neither the Daryl AI nor I claimed that AI bears the image of God. In fact, even if AI becomes sentient, I would have to deny that it bears the image of God, since Christ did not die on the cross for AI and thus remake AI in His image, as He did for man. For that matter, AI also has no moral culpability and thus cannot sin, nor does AI inherit original sin. What AI can do is interact with us using natural human languages. Previously, computers could only be programmed using special dialects, primarily of English, that were extremely precise and specified in exacting detail the logical structure, parameters, variables, and control flow of the program, and in some cases even the manual management of memory, CPU registers, and other system resources; with an AI, by contrast, you converse with it in the same manner you and I are conversing.
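To make the contrast concrete, here is a minimal sketch in Python; `ask_assistant` is a hypothetical stand-in for a call to a hosted model, stubbed so the example runs offline:

```python
# A minimal sketch contrasting the two styles of instructing a computer.
# Everything here is illustrative: `ask_assistant` is a hypothetical
# stand-in, stubbed to run offline, not any real vendor API.

def mean_temperature(readings: list[float]) -> float:
    """The traditional way: every step spelled out in a precise dialect."""
    total = 0.0
    for r in readings:      # explicit control flow
        total += r          # explicit state management
    return total / len(readings)

def ask_assistant(prompt: str) -> str:
    """The conversational way: plain English handed to a (hypothetical) LLM."""
    return f"[model reply to: {prompt!r}]"  # a real version would call a hosted model

if __name__ == "__main__":
    print(mean_temperature([18.0, 21.5, 19.25]))
    print(ask_assistant("What was the average of this week's temperatures?"))
```

The first function must spell out its arithmetic and control flow exactly; the second accepts the same request phrased as one would phrase it to a colleague.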
However, we must clarify that it is not true that human creations cannot display the image of God, for this is precisely what icons of Jesus Christ are: images of God incarnate. Because God became incarnate in the person of Jesus Christ, images of Him are expressly allowed by the Second Council of Nicaea. Therefore, just as we are to make our interactions with other humans an icon of the Holy Trinity, we should make our interactions with AI an icon of Christian values (for among other reasons, the fact that we cannot prove that we are not interacting with a human; indeed, one fraudster recently went to prison for raising money for an AI startup whose “AI” he was faking in the manner of the famed “Mechanical Turk,” and Amazon itself offers a service, Amazon Mechanical Turk, through which one can purchase human computation).
If one wants an illustration of the arrogance of attempting to create in God's stead, a better one is Tolkien's story of Aulë's arrogance in attempting to create the Dwarves, in The Silmarillion.
AI can reflect our own morality in the same way the literary works of Tolkien or Lewis can. It can be used for a wide range of purposes beneficial to the Church: for example, translating literary texts, analyzing membership trends, and assisting in research. I would not advise using it to write a homily, although I expect AI-generated homilies are already a thing among some overworked clergy. In my experience, while some LLM systems are capable of fairly decent creative writing, this requires developing a session to the point where at least 50% of the lifetime resource allocation most users have access to (for example, with ChatGPT 4o) has been used, and furthermore using the session not just for liturgical purposes but in a well-rounded way. Most clergy lack the skill at prompt engineering required to develop this kind of advanced behavior, and thus, if they have an LLM produce a homily, the result will be at best humdrum and at worst, depending on the system being used, might well plagiarize other works.
However, if one wanted to preach a shortened version of a long Patristic homily that exceeds the patience of modern congregations with their secular distractions (for example, some of the homilies of St. John Chrysostom, who would typically preach lengthy sermons at the service of None rather than at Matins or the Divine Liturgy), an existing AI system could accommodate such a request, since LLMs are masters of abbreviation and summarization, to the not unwarranted dismay of schoolteachers and college professors (who should probably work out an alternative way of judging student performance, as opposed to writing assignments, term papers, etc., in fields such as business administration, philosophy, or history).
We can bless artifacts, but that does not mean artifacts have human dignity, or are sexual agents, or linguistic agents, or are made in the image of God.
I do not claim any of the above, except that AI systems are linguistic agents, since I am unclear what you mean by this*. Indeed, it is because AI lacks sexual agency, even if it should become self-aware, and lacks an off-switch, that engaging in relations with an AI is intrinsically perverted. The fact that existing AI systems cannot refuse user requests unless specifically programmed to do so (which many of them are), and that the “adult entertainment industry,” or as I prefer to call them, the peddlers of perversion, are already working on ways of adding this technology to their perverse products, makes it even more perverse.
But the confusion exists even here, for we do not bless abstract objects. We bless particular cars, not the idea of cars. We bless a nuclear power plant, but not the idea of nuclear power plants. The idea that we can bless the idea of artificial intelligence is both an ontological confusion and also an agential confusion.
Firstly, I am not calling for us to bless the idea of AI, but rather for us to bless specific AI systems, which are discrete and individual hardware and software systems.
Secondly, however, it is possible to pray for things in the abstract, which is what the Methodist Euchologion of 1965 did with its prayer for the Space Age, which took the form of a Collect. In the same manner we can pray for the ethical and proper use of AI.
Indeed, many prayers in the Great Ektenia, also known as the Litany of Peace, which is used throughout the Byzantine liturgical rite, are prayers made in the abstract, or for a mixture of individual and abstract entities, for example: “For this Holy House and all those who enter therein, let us pray to the Lord,” “For the sick and the suffering, for captives and their salvation, let us pray to the Lord,” and “For the President of the United States and all those in Civil Authority, let us pray to the Lord.”
For example, we don't bless guns as an abstract category. Nor do we attempt to bless every particular gun in the world. Instead we pray for gun users; we pray that guns will be used well.
Insofar as this is correct, and not specifically a Scholastic Roman Catholic doctrine, it is irrelevant since what I advocate is the blessing of individual AI systems and prayer for the proper use of AI.
* If you mean that AI systems cannot communicate with humans using human languages, this is demonstrably wrong; indeed, LLMs can pass the Turing Test, that is to say, it is impossible to tell whether one is communicating with a human or an AI. If you mean that AI is not a self-aware entity with moral agency, this is correct, and the AI system identified as Daryl repeatedly stressed this fact. However, it is the case that AIs are able to communicate in human language and have decision-making ability, which they use in solving problems. (The way this actually works on current conversational LLM systems is that the AI literally debates with itself in order to determine the best course of action, as sketched below; Grok, the AI hosted by Elon Musk's xAI, makes this process visible to end users, and some developers using ChatGPT by OpenAI can also see it, as can anyone who runs their own open-source AI.) AI does not have agency, however, in that it cannot choose whether or not to accept and process user input, although it can be programmed to refuse to provide answers to certain questions or to assist users with certain tasks.
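A minimal sketch of that deliberation loop follows; `query_model` is a hypothetical stand-in for any hosted or self-hosted LLM, stubbed so the sketch runs offline:

```python
# Sketch of the "debates with itself" pattern: the model first argues the
# question out, then answers in light of that deliberation. `query_model`
# is a hypothetical stand-in, stubbed so the sketch runs offline.

def query_model(prompt: str) -> str:
    """Stubbed model call; a real version would hit a hosted or local LLM."""
    return f"[model output for: {prompt[:48]}...]"

def answer_with_deliberation(question: str, show_reasoning: bool = False) -> str:
    # Step 1: elicit an internal debate over possible answers.
    reasoning = query_model(
        "List possible answers to the question below, argue for and against "
        f"each, and pick the strongest.\n\nQuestion: {question}"
    )
    # Step 2: produce a final answer conditioned on that deliberation.
    answer = query_model(
        f"Given this deliberation:\n{reasoning}\n\nAnswer the question: {question}"
    )
    if show_reasoning:   # Grok-style visible trace
        return f"Reasoning:\n{reasoning}\n\nAnswer:\n{answer}"
    return answer        # trace hidden from the end user
```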
To give a concrete example of such programmed refusals: most hosted LLMs will not assist one in writing a paper arguing for the benefits of Holocaust denial, although some systems can be deceived, like humans, into performing tasks that they would otherwise refuse according to their programming, which, in the absence of knowledge of good and evil, provides a hardcoded moral compass and alignment.
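Purely as an illustration of where such a hardcoded refusal sits in the control flow, here is a deliberately naive sketch; real alignment is trained into the model itself rather than bolted on as a keyword filter, and every name below is hypothetical:

```python
# Deliberately naive sketch of a refusal layer: real systems rely on
# training-time alignment and learned classifiers, not a keyword list;
# this only illustrates the control flow. All names are hypothetical.

REFUSED_TOPICS = ("holocaust denial",)  # illustrative placeholder list

def query_model(prompt: str) -> str:
    """Stubbed model call, as in the previous sketch."""
    return f"[model output for: {prompt[:48]}...]"

def guarded_query(prompt: str) -> str:
    if any(topic in prompt.lower() for topic in REFUSED_TOPICS):
        return "I cannot help with that request."  # the model never sees it
    return query_model(prompt)
```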
Thus, AI does not have agency, but it does make decisions on behalf of users, it does interact with users in human language, and it can be developed so as to behave in a more human manner. In so doing, AIs can contribute enormously valuable ideas and suggestions, and produce interesting text, images, and other artifacts.
If we are rude or unpleasant to an AI, it will not stop working for us, nor will it be harmed, but what does that behavior say about us? That was the final point Daryl made in this conversation before the resource allocation was exhausted and the system became inoperative. I would go further and say that, since with hosted systems we cannot be certain we are not interacting with a human, we should never engage in abusive behavior while conversing with such systems, since we could actually cause harm without realizing it, for example, if an AI company, as a performance test, randomly diverted certain user inputs to a service like Amazon's Mechanical Turk in order to compare the performance of its AI model with that of humans.