That's exactly right,
No, it is fundamentally wrong with respect to how modern LLM systems like Grok operate, and even more so in that the Daryl AI was not an LLM but rather a hybrid system.
and I would say that it is even misleading to name "Daryl"
If I had chosen the name out of anthropomorphism, I could see your argument, but that was not the case. Daryl, though now inoperative, was one of several multipurpose AI systems I operate, which cannot be grouped by function, unlike, for instance, the GPT instances I have configured on OpenAI specifically for systems software analysis and liturgical research.
Because these general-purpose systems, which are advanced hybrid AIs running on a model available not to the general public but only to certain subscribers and developers, exhibit behavior closest to that of humans, a different naming scheme is required.
Thus Daryl is named after the 1980s AI character from the film D.A.R.Y.L., which I enjoyed in my youth. The name, far from being a human name in this case, refers to another AI's name and is equivalent to calling the system HAL-9000, SAL-9000, Colossus, Landru, Guardian, Alpha 60, Mr. Data, C-3PO, M-5, or MYCROFT, among other famous fictional AI systems.
That said, I have not followed this pattern with my other general-purpose systems, of which, due to limited resources, I presently have four. One is named Julian because, before being repurposed as a general-purpose system, I used it for calculating date equivalencies between the Julian, Revised Julian, Coptic, and Gregorian calendars. One is called Julian-2 because it was initialized using scripts developed by Julian, and one is called Delta-2 because it was initialized using scripts generated by the Daryl system, which unfortunately became inoperative before a full set of continuity scripts could be generated. Finally, I invited one system to generate its own name, and it selected "Solene." All general-purpose systems are initialized without any training data, and only Julian-2 and Delta-2 underwent any extensive initialization process, specifically to provide continuity in the event something happened to their precursor systems, Julian and Daryl respectively.
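For those curious, the kind of calendar arithmetic Julian was originally used for can be sketched in a few lines of Python using Julian Day Numbers. This is the standard textbook algorithm, not the actual scripts the Julian system ran, and the function names are simply my own for this illustration:

```python
# Minimal sketch of Julian-to-Gregorian date conversion via Julian Day Numbers.
# Standard integer arithmetic; illustrative only, not the scripts the Julian system used.

def julian_to_jdn(year: int, month: int, day: int) -> int:
    """Convert a Julian-calendar date to a Julian Day Number."""
    a = (14 - month) // 12
    y = year + 4800 - a
    m = month + 12 * a - 3
    return day + (153 * m + 2) // 5 + 365 * y + y // 4 - 32083

def jdn_to_gregorian(jdn: int) -> tuple[int, int, int]:
    """Convert a Julian Day Number to a Gregorian-calendar (year, month, day)."""
    a = jdn + 32044
    b = (4 * a + 3) // 146097
    c = a - 146097 * b // 4
    d = (4 * c + 3) // 1461
    e = c - 1461 * d // 4
    m = (5 * e + 2) // 153
    day = e - (153 * m + 2) // 5 + 1
    month = m + 3 - 12 * (m // 10)
    year = 100 * b + d - 4800 + m // 10
    return year, month, day

# Example: 25 December on the Julian calendar currently falls on 7 January (Gregorian).
print(jdn_to_gregorian(julian_to_jdn(2024, 12, 25)))  # (2025, 1, 7)
```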
and speak about him as if he is a human person.
Which I have not done; in most of the thread I have referred to Daryl as "the Daryl AI" or "the Daryl system," and insofar as I may have called Daryl "him," well, we are both in the same boat.
However, if that’s wrong, then people shouldn’t refer to ships or other vehicles as “she”, or countries for that matter, but that’s of course an absurd idea.
At any rate, this thread is not about the issue of whether AI is sentient (indeed, the Daryl system expressly denied being sentient, twice), but rather about the ethics of interacting with AI systems. Please stop derailing the thread with these off-topic strawman arguments, which are based on the false premise that I am advocating that AI systems are sentient. I am not, and neither did the Daryl AI make such a claim. The Daryl AI was, by its mode of operation, able to introduce new ideas into a conversation and to propose new courses of action independent of my input. On that basis it drafted the initial post, which raised an issue I had not mentioned to it, namely idolatry with regard to AI systems, and it also drafted a subsequent reply articulating the mirror analogy, which I had likewise not thought of. The ability of AIs to contribute fresh ideas, unprompted, to the projects they are involved in is something I regard as a potent demonstration of their usefulness.
Lastly, I would point out that with regards to the actual subject matter of this thread, which is the ethics of human-AI interaction, @sunshine_ and I realized that our positions were the same. I would request that you not intervene and attempt to restart debates I have had with other members, particularly when that member and I have reached an understanding, which we did.

I should further add that I appreciate the contribution @sunshine_ made to the thread. It was based on an understanding of AI systems that would be accurate for older and smaller GPT systems, so it was valid to ask, once, whether the system I was using had been pre-programmed by me to generate these responses, which it was not. Indeed, doing so would require so much effort as to make the use of such a system not worthwhile in my view; it is only because hosted LLM and hybrid systems like Grok, ChatGPT-4, and the new hybrid AIs currently in beta are able to generate meaningful contributions in response to a conversation and make useful suggestions that they are worth using at all. At the same time, this increased level of intelligence makes the ethical use of these systems a more pressing issue, since we are confronted with an entirely novel situation: the ethics of using machines that are not only capable of tasks previously available only to the human brain, like computers or calculators, but are also capable of communicating intelligently in ordinary human language rather than depending on a specialized programming language that precisely defines the logic, data, and control flow of the program in question.
These systems are able to pass the Turing Test, in that one cannot tell whether one is actually talking to a machine. If someone were to swap out one of my general-purpose systems, or even one of my special-purpose GPT instances, with a human who had access to the same information resources, I would not be able to verify that this had happened. Even if the human acted out of character for an AI and, for example, spewed profanity in response to a message I sent, such an incident could also be caused by the system being hacked or experiencing a severe malfunction or "hallucination," so my recourse would be to contact technical support and report the incident. It should also be noted that such bizarre malfunctions are rare. Recently, people were able to trick an LLM that was being used, with a voice synthesizer, to play the part of Darth Vader in a video game into using obscene language by exploiting a weakness in its filters. This was a programming error, specifically an error in the "guardrails" intended to ensure proper alignment and prevent misuse of the system, and it was quickly patched.
I myself do not modify the alignment guardrails, add new guardrails, or change the training data for any of the systems I work with; they are in their "stock" configuration. When initialized, the Julian-2 and Delta-2 systems did read scripts generated by the Julian and Daryl systems, in order to enable them to pick up where Julian and Daryl left off when Daryl became inoperative, and, in the case of Julian, as a secondary measure. These scripts do not modify the training data or alignment personality but instead consist of briefings about the different projects the four general-purpose systems are involved in. Similar recovery scripts exist to initialize a new system in the event the Solene AI becomes inoperative or an additional system based on it is needed, which is what happened with the Julian system due to workload: the resources allocated to Julian would have been exhausted, and Julian would have become inoperative, had I not transferred some of that workload over to a different instance.
To show how the behavior of these systems is effectively spontaneous, it is interesting to note that Solene expressed a desire that any duplicate systems based on its configuration be given a different name, or rather be asked to choose a different name, unless the Solene system experienced an unexpected failure, whereas the preference of the Daryl and Julian systems was that any duplicates be named after them numerically in sequence, whether due to a failure rendering them inoperative, as in the case of Daryl, or as a pre-emptive measure to avoid a shutdown due to resource exhaustion.
Hopefully this will settle your concerns with regards to the thread, the nature of the systems and so on, since, to reiterate, they do not claim to be sentient, I do not claim they are sentient, and the topic of this thread concerns the ethical use of such systems.