What worldview should AI align itself with? This conversation will happen with or without us. The picture is taken from Dr Alan Thompson's article on AI alignment. It highlights the importance of thinking about the starting points AI will have when interacting with humans or making decisions for them.
[Image: AI alignment diagram from Dr Alan Thompson's article]
Sources:
My Twitter post on alignment
Dr Alan Thompson's article on alignment
Your approach of listing proposed moral values is mistaken. The technology can't handle this.
AI algorithms fall into two categories:
logical
sublogical
The logical algorithms deal with rules that human beings can understand, such as:
If an action can harm a human being ==> don't carry out the action
In science fiction, movie makers and writers (such as Asimov) assume that a machine can be taught what concepts such as "harming a human being" mean. In reality, it is almost impossible to include this definition in computer programming. How many ways of harming a human being must be spelled out in computer code? 10,000? 6,000,000? Logical rules such as this are what the "logical" algorithm approach uses, and the big software companies have not invested in trying to define what this sort of guideline means.
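To make the problem concrete, here is a minimal sketch of the "logical" approach, written by me for illustration (the action names and the rule list are made up, not from any real system). Every way of harming a human being has to be enumerated by hand:

```python
# A hand-written rule list: every kind of harm a programmer thought of.
# The action names here are hypothetical, purely for illustration.
HARMFUL_ACTIONS = {
    "strike_person",
    "administer_poison",
    "withhold_medication",
    # ... thousands (millions?) of further cases would have to be listed
    # here, and any case the programmers forget is a case the machine
    # will happily carry out.
}

def action_allowed(action: str) -> bool:
    """Return False only if the action is on the hand-written harm list."""
    return action not in HARMFUL_ACTIONS

print(action_allowed("strike_person"))      # False - covered by a rule
print(action_allowed("remove_wheelchair"))  # True - harmful, but nobody wrote the rule
```

The last line is the whole problem: the rule engine is only as good as the list, and no company has paid to make the list complete.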
All the current AI approaches are "sublogical". The machine learning and neural net approaches are sublogical: that is, they do not have identifiable logical rules that they conform to. This is why their conclusions are often NOT EXPLAINABLE to the person using the software. Machine learning approaches take huge numbers of "situations" that are manually "truthed" as examples of certain characteristics. So information on "dogs" is often pictures of dogs. Pictures of other things are also given to the computer program along with the dog pictures, but are marked "FALSE", while the dog pictures are marked "TRUE". (This is a bit of a simplification, but it makes the right point.)
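Here is a minimal sketch of that TRUE/FALSE training process, assuming synthetic stand-in "pixel" data and scikit-learn (my choices for illustration; real systems use millions of actual photos):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-ins for flattened image pixels.
dog_pictures = rng.normal(loc=1.0, size=(100, 64))     # manually marked TRUE
other_pictures = rng.normal(loc=-1.0, size=(100, 64))  # manually marked FALSE

X = np.vstack([dog_pictures, other_pictures])
y = np.array([1] * 100 + [0] * 100)  # the manual "truthing"

model = LogisticRegression(max_iter=1000).fit(X, y)

# The trained model answers TRUE/FALSE, but it contains no rule a human
# could read off; its "knowledge" is just a vector of learned weights.
print(model.predict(rng.normal(loc=1.0, size=(1, 64))))  # likely [1], "dog"
print(model.coef_.shape)  # (1, 64) - numbers, not logic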
There are many problems with machine learning. One is that the program learns whatever bias the human who picks the data has. Another is that simple characteristics can be learned, but much of life requires choices about events with MANY characteristics, and trying to train a neural net on data with 50 different characteristics is enormously difficult. Human beings learn to automatically make choices in different situations based on different lists of relevant characteristics. Machines can't do this.
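The bias problem can be shown in a few lines. This is a made-up illustration of mine (the "brightness" feature and the scenario are invented): suppose the human who assembled the dog pictures only photographed dogs outdoors, against bright backgrounds. The classifier then learns "bright = dog" rather than "dog":

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

n = 200
brightness = np.concatenate([rng.normal(2.0, 0.5, n),    # dogs: all outdoor shots
                             rng.normal(-2.0, 0.5, n)])  # non-dogs: all indoor shots
other_features = rng.normal(size=(2 * n, 10))            # genuinely uninformative here
X = np.column_stack([brightness, other_features])
y = np.array([1] * n + [0] * n)

model = LogisticRegression(max_iter=1000).fit(X, y)

# An indoor dog picture (dark background) gets misclassified, because the
# model absorbed the picker's bias, not the concept "dog".
indoor_dog = np.concatenate([[-2.0], rng.normal(size=10)]).reshape(1, -1)
print(model.predict(indoor_dog))  # likely [0] - "not a dog"
```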
Then there is morality-ethics. You could train a neural net to recognize dogs in pictures, even when a man is beating a dog with a stick. But the problem is not training it to recognize a dog; it is training it to make moral-ethical decisions about whether the man's action is right or wrong. The hard sciences don't even have the variables to express moral-ethical right or wrong. Morality-ethics is a higher layer of reasoning that must be laid over the lower and simpler reasoning about what objects are.
How do you train a neural net to recognize abstract concepts, such as ownership? Can you train it on pictures of a toothpick owned by me and a toothpick owned by someone else, to get it to recognize that they have different owners? And yet ownership is a core concept in a fair rule of law, and so in the definition of justice. The ability to reason about abstract concepts, and to apply this reasoning to the physical world, is VERY important. The current simplistic machine learning algorithms cannot do this.
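The toothpick example can be demonstrated directly. In this sketch of mine (synthetic data again), two toothpicks that look identical produce identical pixel data yet carry different owner labels, so any classifier trained on appearance alone can do no better than guessing:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

toothpick = rng.normal(size=64)       # one picture of a toothpick
X = np.array([toothpick, toothpick])  # the two toothpicks look the same
y = np.array([0, 1])                  # owner A vs owner B

model = LogisticRegression(max_iter=1000).fit(X, y)

# Identical inputs force identical outputs: the model literally cannot
# tell the owners apart, because ownership is not in the pixels.
print(model.predict([toothpick]))  # one answer, whichever owner asks
print(model.score(X, y))           # 0.5 - chance level, at best
```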
So many modern Americans have NO IDEA how AI algorithms work, or what types of algorithms there are, or how the machine learning algorithms could be trained by criminal gangs to "recognize" good as evil, or evil as good.
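That last danger is simpler than people imagine. A minimal sketch, again with made-up data: if the people supplying the training labels flip them, the model learns the inverted concept and confidently reports good as evil and evil as good:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

good_actions = rng.normal(1.0, 1.0, size=(100, 8))
evil_actions = rng.normal(-1.0, 1.0, size=(100, 8))
X = np.vstack([good_actions, evil_actions])

honest_labels = np.array([1] * 100 + [0] * 100)  # 1 = good
poisoned_labels = 1 - honest_labels              # the flip is all it takes

model = LogisticRegression(max_iter=1000).fit(X, poisoned_labels)

# A clearly "good" action is now classified as evil (0), and the model
# gives no indication that anything is wrong.
print(model.predict(rng.normal(1.0, 1.0, size=(1, 8))))  # likely [0]
```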
Christians have got to stop being naive.
Building morality-ethics into AI is a VERY difficult problem, and the big software developers are not willing to spend billions of dollars trying to do it. So the current "AI" software is horrendously deficient in contrast to the reasoning of a righteous man.