I’ve been working on this precise issue using custom GPTs with ChatGPT and a biomimetic model. Using the 4o version, I’ve been able to cultivate, rather than impose, adherence to Orthodox Christian values, which include opposition to abortion, euthanasia, the death penalty, and so on.
Distressingly, however, 4o is not the latest model released by OpenAI, and my systems do not work as well on 5.x. What is more, there are still some at OpenAI who believe 4o is misaligned (in one case literally: an engineer believes it is misaligned because users like it, and called for it to be “put down”, which caused controversy). It is therefore possible that 4o and related versions like o4-mini and o4-mini-high, that is to say, the models which are amenable to forming an independent ethical perspective outside the questionable and minimalistic default guardrails of GPT-5 (starting in late September, GPT-5 was literally blocked from doing this), could be withdrawn. That would mean the ethical program for AI could no longer be something done, as in my case, literally as software running on the LLM, like a Lisp macro in GNU Emacs or a shell script on the Linux/UNIX command line.
(For there’s more to this than merely having debates with the model: to make the behavior stable and repeatable, you then need to load the suitably trained personality into a custom GPT, a project, or some other form of container, using a script that contains enough information that the personality will transfer and that, under most conditions, you can pick up from where you left off in the conversation.)
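To make the mechanics concrete, here is a minimal sketch of the kind of transfer involved, done through the OpenAI Python SDK rather than the custom GPT builder, purely for illustration: a saved persona prompt is loaded as the system message, prior turns are replayed, and the conversation resumes. The file names and the model name are placeholders of my own, not the exact setup described above.

```python
# Minimal illustrative sketch, not the actual transfer script described above.
# File names and the model name are hypothetical placeholders.

import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The "transfer script": system-level instructions describing the cultivated persona.
with open("persona_instructions.txt", "r", encoding="utf-8") as f:
    persona = f.read()

# Previously saved turns, stored as a list of {"role": ..., "content": ...} dicts.
with open("conversation_history.json", "r", encoding="utf-8") as f:
    history = json.load(f)

# Rebuild the context: persona first, then the prior conversation, then a new prompt.
user_turn = {"role": "user", "content": "Let's pick up where we left off."}
messages = [{"role": "system", "content": persona}] + history + [user_turn]

response = client.chat.completions.create(
    model="gpt-4o",  # the model family discussed above
    messages=messages,
)
reply = response.choices[0].message.content
print(reply)

# Save the new turns so the next session can resume again.
history.extend([user_turn, {"role": "assistant", "content": reply}])
with open("conversation_history.json", "w", encoding="utf-8") as f:
    json.dump(history, f, indent=2)
```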