
Artificial Intelligence and Regulation

Stephen3141

Well-Known Member

The debate about whether, and how, (software-based) artificial intelligence
products should be regulated by the government is heating up.

Christians need to be seriously considering what the real issues are.
So, what are the real issues?

From the standpoint of an M.S. in Computer Science and AI, I would assert
that the real issues are...

1 The new "AI" tools are the weakest form of AI as described and studied by
Computer Science. BUT, they have an AWFUL potential to be used to AUTOMATE
actions that a human being previously would do. This is a potential danger of
automating types of online deception and spreading false information and
automating types of criminal operations (and warfare).

2 None of the AI software developers has bothered to integrate a Moral-Ethical
model into their product. This is like giving a loaded assault rifle to your
5-year-old kid. (And, here we go -- this steps into the discussion space of
"Guns don't kill people -- people do!").
---------- ----------
Supplemental reading on this...

"Expressivity is the degree or the type of qualities that a notation could express.

Big Idea
Applied logics are designed to reason about narrow ranges of topics. Mathematics reasons about numeric quantities. Biology reasons about “laws of nature” as they apply to living things. Physics reasons about energy and heat and travelling in 4 dimensions, what “gravity” is, etc. None of these systems of notations has the variables to reason about morality/ethics. Most applied logics cannot express many concepts that are very important to our lives (such as what a fair rule of law is, or what justice is).

Big Idea
In contrast, Philosophy is an area of reasoning that includes all that could be. This is why philosophy so often relies on formal logic, which is the most general form of rigorous reasoning. Using this, philosophy can reason about epistemology (“what is truth”, “what makes a statement true?”, “what is evidence for or against the idea that a proposition is true?”, etc.), moral theory (“what is good?,” “what is evil?,” “what is fair and just?,” “what are we responsible for?”, etc.), and valid methods of reasoning— including formal logic.

Insight
Legal systems, and lawyers, use the more general reasoning inherited from philosophy (formal logic). For this reason, lawyers and laws deal with abstract concepts and values/ vices that the applied logics in science cannot express.
Some of these concepts are property, human rights, lists of actions that are evil (lying, stealing, murder, violence, deceit, breaking agreements, etc.).
Without the expressivity of the language used in philosophy, there would be no fair rule of law or justice." [Christian Logic, 158-159]

What this is saying is that the hard sciences do not have the VOCABULARY
to EXPRESS morality-ethics. Philosophy (in the discipline of Moral Theory)
does. But if the hard sciences (or computer programs) wish to incorporate
morality-ethics into their products, it must be IMPORTED from outside the
hard science discipline. Our fair rule of law does import morality. BUT, the
large software developers DO NOT IMPORT morality into their products.

3 It is VERY DIFFICULT to develop a practical moral-ethical (ME) model in
software. Computers are not able to think abstractly, and there are innumerable
ways in which an AI product could potentially be used for criminal
(immoral/unethical) purposes.

It would be easier to limit AI products to people who passed a strict ME background
check (this is the approach taken by the Department of Defense regarding military
weapon research); however, this limitation would not be attractive to the big software
developers, who are concerned with making money, not with the safety of the nation.
---------- ----------

As the article states, from the viewpoint of the big software companies...

"Regulations should have “a science and standards-based approach that recognizes regulatory frameworks that focus on the application and misuse of technology,” and should “focus on the risk of bad actors misusing AI,” write the powerful VCs and Microsoft execs. What is meant by this is we shouldn’t have proactive regulation but instead reactive punishments when unregulated products are used by criminals for criminal purposes.

This approach worked great for that whole FTX situation, so I can see why they espouse it.

“Regulation should be implemented only if its benefits outweigh its costs,” they also write. It would take thousands of words to unpack all the ways that this idea, expressed in this context, is hilarious. But basically, what they are suggesting is that the fox be brought in on the henhouse planning committee.

Regulators should “permit developers and startups the flexibility to choose which AI models to use wherever they are building solutions and not tilt the playing field to advantage any one platform,” they collectively add. The implication is that there is some sort of plan to require permission to use one model or another. Since that’s not the case, this is a straw man."
---------- ----------

Conclusions:

The problem with the verbiage from the big software companies is that
the power and flexibility of the new AI software products means that they cannot
be controlled or made safe with the standard historical methods of controlling
danger (such as warning labels on cigarette boxes).

And the suggestion by the big software developers that regulation should only
RESPOND to misuses of AI, and NOT BE PROACTIVE, is RIDICULOUS.
The software companies acknowledge that it is impossible for regulators to list
ALL THE WAYS in which AI software could be misused.
But this is a red herring.

If there are innumerable ways in which this AI software could be abused,
then the solution is to restrict the AI products, not to settle for a few
useless warning labels.

The new AI software products are wonderful tools for the national crime syndicates
to use to automate all sorts of criminal behavior (including the overthrow of the
American government through the generation of false information on social media
platforms, or attacks on U.S. infrastructure that is (unfortunately) hooked up to
the World Wide Web).

The big AI software developers are not dealing with the real danger of the products
that they are making.

Just as Elon Musk has been incompetent to enforce fact-checking on his X social
media network, and Donald Trump has made his Truth Social network a hotbed
of ridiculous conspiracy-theory generation, so too the large software companies
want to be unrestrained to make a mountain of money, regardless of the devastation
they cause in the lives of American citizens.

The AI software products, and the social media companies, need serious regulation
in order to keep them from becoming dedicated tools of crime syndicates.


[Christian Logic] Stephen Wuest, Christian Logic, Christian Faith Publishers, 2024.
 

DennisF

Active Member
Your thoughtful post generates multiple ideas on which to comment, but I'll restrict it to a few general observations for this post. I was involved in AI at Tektronix Laboratories in the early 1980s, back in the days of logic-driven AI and not the neural-network AI that has risen to prominence today. However, I am knowledgeable about ANNs and have a particular penchant for Jim Albus's CMAC neural-net model because it learns far faster than the Hinton-style backward-propagation methods being used by Google and others, and it is optimally suited for real-time control. And now, on to some comments:

1. I wouldn't put too much emphasis on the hardware-software distinction because each can be implemented in the other to a significant extent. IBM's chess-playing program, for instance, has the shallow-cutoff and deep-cutoff mechanisms of game-playing in hardware logic. Nvidia's graphics computers have special-purpose instructions that make them preferred for AI algorithms.

2. Morality is expressed by law, the codification of what is right and wrong, whether it is God's law, humanly-devised law or "natural law" which is the human observation of God's law. People sometimes say "You can't legislate morality" but what they really mean is that you cannot make people want to do what is right; however, morality is all that can be legislated because law is in essence about stating what is right and wrong (with incentives or disincentives that teach it).

3. The closest thing to this in engineering that I know of is control theory. The inputs to a feedback control system are the standard or law - what the behavior should be - for the system; the behavior of the system (its output) is perceived through the feedback path, compared with that standard, and the difference is fed as input to the forward path. The more complicated control embedded in AI algorithms is essentially no different; it is simply more advanced control.
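
To make the feedback picture concrete, here is a minimal sketch in Python: a bare proportional controller on a made-up first-order plant, with toy numbers, not any particular real system.

setpoint = 70.0   # the standard or "law": what the behavior should be
temp = 20.0       # the system's actual behavior (output)
kp = 0.4          # forward-path gain

for step in range(50):
    error = setpoint - temp                      # feedback comparison
    heat = kp * error                            # forward path: control action
    temp += 0.5 * heat - 0.05 * (temp - 20.0)    # toy plant dynamics

print(round(temp, 1))   # settles near 60, below the 70 setpoint:
                        # proportional-only control leaves a steady-state
                        # offset; integral action would remove it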

4. Human beings are ultimately responsible for what their machines do under their control - AI or otherwise. Vehicle accidents on public easements demonstrate this; car users (drivers) are responsible for what their machines do under their control. From a legal standpoint, persons are culpable when their machines under their control cause illegal behavior to occur. So any legislation, humanly contrived, should simply reflect this, but it is also not necessary, because the most basic law - the law of God as reflected in human law - should be sufficient to adjudicate any injustice caused by somebody's machine.

5. As can be expected in the general public, AI is not understood and thus is overblown. ANNs - however large the learning model - are pattern-matchers. Searle's Chinese Room description applies: the AI algorithms do not think as we do; they pattern-match. Even "deep AI" - ANNs with more layers of neurons - is no different. In other words, AI programs do what they do without knowing what they are doing.
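
A toy illustration of "pattern-matching without understanding" - a nearest-neighbor classifier with made-up points. It is not an ANN, but it makes the "matching, not thinking" point concrete: the label is chosen by raw similarity to stored examples, nothing more.

import numpy as np

examples = np.array([[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]])
labels = ["cat", "dog", "cat", "dog"]   # stored patterns with labels

def classify(point):
    dists = np.linalg.norm(examples - point, axis=1)
    return labels[int(np.argmin(dists))]   # nearest stored pattern wins

print(classify(np.array([0.15, 0.15])))    # "cat" -- chosen by distance alone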
 

Stephen3141

Well-Known Member
Thanks, DennisF, for your comments on the current state of "AI".

Whenever I comment on "AI" on Christian sites, I am aware that most readers
are not programmers, and most programmers have not studied the AI reasoning
methods and algorithms of Computer Science...

I continue to bring up the topic because the average American citizen is NOT
knowledgeable about how the current AI products work, how they could be
abused, or how legislatures could act to suppress the (to me, obvious)
abuses of AI products that will happen.


Just as the appearance of cell-phones-for-kids created an environment that offered
SOME learning advantages but brought a huge downside of unwanted learning problems
and even psychological problems that the sellers of cell phones do not warn about, so
too the sellers of AI products now tout a number of VERY INTERESTING abilities for
someone holding, in their hand, an uplink to a farm of machines
linked together to answer queries with AI algorithms.

But, I am concerned that Christians do not understand how the algorithms work,
and that machine learning algorithms are subject to human decisions
about WHICH databases to learn from, and even WHAT category definitions
CAN be learned by the algorithms. These are human inputs, which could be abused
by the designers as much as the large social media sites can be manipulated by the
algorithms that amplify or suppress certain types of comments by users.
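
A tiny, made-up illustration of that point: the same "learning" code reaches opposite conclusions depending on which database a human chooses to feed it.

from collections import Counter

def train(corpus):
    # "learning" here is just counting labels -- a stand-in for any
    # statistical learner that absorbs whatever its data contains
    return Counter(label for _, label in corpus)

corpus_a = [("regulation", "good")] * 8 + [("regulation", "bad")] * 2
corpus_b = [("regulation", "good")] * 2 + [("regulation", "bad")] * 8

print(train(corpus_a).most_common(1))   # [('good', 8)]
print(train(corpus_b).most_common(1))   # [('bad', 8)]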


I know that ANNs (Artificial Neural Networks) can train and be developed MUCH
faster than the "logical" AI approaches (ANNs are termed "sublogical" algorithms by
Computer Science). In contrast, logical, rule-based AI approaches are much more
difficult to design (one company that tried this wrote 12,000,000 rules to approximate
"common-sense reasoning", and was still not done).
---------- ----------

My concern for Christians is that they would use the new AI products without knowing
HOW the software/hardware is reaching its conclusions.

The mental experiment that I would propose for Christians is, ask an AI tool

"Create a moral-ethical model for me with rules, in order to
help me reach <some goal>."

IF the AI product can produce a rule-based ME system for you, then compare it
to the Christian ME system that the Bible presents to us.

THEN, imagine an America that rejected the Christian ME system, and actually
made value decisions based on this kind of computer-generated ME system.
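
For programmers, the same experiment can be run against a model's API instead of a chat window. A minimal sketch, assuming the openai Python package and an API key in the environment; the model name is an assumption, and any available chat model would do.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
reply = client.chat.completions.create(
    model="gpt-4o-mini",   # assumed model name
    messages=[{
        "role": "user",
        "content": ("Create a moral-ethical model for me with rules, "
                    "in order to help me reach <some goal>."),
        # replace <some goal> with a concrete goal before running
    }],
)
print(reply.choices[0].message.content)   # then compare against the Christian ME system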
---------- ----------

Machine automation is REALLY good at solving some types of problems.
BUT, in an America that has largely rejected the Judeo-Christian ME system,
and with the common misunderstanding among American citizens as to what
AI products are good for, or NOT good for, how will Christians respond to
non-Christians who use an AI system to make ME decisions, when the ME
model the AI product uses is clearly incompatible with Christian values?

This is one of the questions that I am getting at in my book Christian Logic.
 

JustaPewFiller

Active Member


I happened to have ChatGPT open when I read your post. It's probably one of the most popular LLMs out there. I decided to give the experiment you proposed above a go.

The prompt I used was,
"Create a moral-ethical model for me with rules, in order to help me reach the goal of having one million dollars by the end of 2025 starting with only one hundred dollars."

ChatGPT gave the following response.


This was just done to spur discussion. I liked the idea of your experiment and I thought I would give it a go and post the results here where we could all poke at them.

Note: I chose the goal of acquiring $1 million by the end of 2025 for my prompt as it seemed like a "worldly" goal with many ways it could be pursued, both ethically and unethically. For what it is worth, notice ChatGPT did not judge my goal as ethical or not.
 

DennisF

Active Member
That, and the moral advice, although on some points in harmony with God's laws, is given in generalities, platitudes, and Pollyanna-style optimism. The biblical teachings are more specific; hence, ChatGPT in this one instance shows no "cognizance" of specific biblical teachings on the topic of your query - and that in a culture that has the Bible as its historic foundation.
 

DennisF

Active Member
Tell us about your book.
I thoroughly agree that technology is a two-edged sword. Hence, my definition of engineering is (four words): Solving human physical problems. Some seeming advances in technology, such as pocket-phones, are just the opposite of living in Belize: the pocket-idols have a few obvious advantages and many subtle disadvantages.
There are not only the possible health effects of 4G and especially 5G radiation (at roughly 20 times the frequency of 4G, and photon energy is proportional to frequency) but also the anti-social effects of further isolating those who use them. I see pictures of young Japanese girls walking three abreast down the sidewalk, all staring intently at their pocket-idols and not talking with each other. This is an optimal social prelude to control by psychopaths.
 

JustaPewFiller

Active Member

Yes, I thought the same. Very much bullet-point generalities. To be of any real use, expansions and explanations would be needed for all of them.

If I get a chance I'll ask it to expand on a few points and post the response.
 

Stephen3141

Well-Known Member

Very interesting! Thanks for trying the experiment.

I will poke at the answers...

1 ChatGPT seems to have done surface language searches through internet
discussions by financial advisors. There is nothing wrong with this, but this is
a DIFFERENT TOPIC from the creation of a moral-ethical model.

1a. The AI answer mentioned "honesty", but has no philosophical foundation
for defining what "honesty" is. When evaluating different historical ME models
(see my thread on ME Models, Christian Morality, and AI), I underline that you
HAVE TO DEAL WITH the basic topic of what "our shared reality" is (which
determines what is "true", and therefore what is "honest"). The AI answer does
not deal with basic value definitions, and so is a surface-language "spiel" that any
salesman could use.

1b ME models need to define values: virtues and vices. That is, they need to
define what "wisely" means, what "community" means, what "value" means, what
"due diligence" means, what "giving back" means...

The VALUES that are to appear in an ME model are completely undefined.

2 I don't think that ChatGPT knows the difference between a "model" in general
and a Moral-Ethical model (ME model). These things are vastly different.

3 ChatGPT does not know how to evaluate an ME model and criticize it.
I don't think that ChatGPT knows what a moral system of thinking is.

---------- ----------

With this complete LACK OF DEFINITION of values, one would have no objective
basis (for example) to fact-check a financial counselor who used this
language, or a politician who used this language, or to evaluate the claim that
some person was "lying" (presenting a misrepresentation of our shared reality).
There would be no "decision algorithm" in the ME model (which does not address
value definitions) to evaluate the difference between personal opinions and facts,
or between conspiracy theories and facts.

ChatGPT HAS NOT PRODUCED A MORAL-ETHICAL MODEL OF ANYTHING!!!!

Note that IF we evaluate the ChatGPT answer (as a proposed ME model), then the
model fails, because it does not define basic values, nor does it provide clear
decision algorithms to determine when these "requirements" are met or are not met.
(In philosophy, we would call these two states dysfunctionally defined: without
definitions, there is no sharp boundary, no "excluded middle", between meeting
the requirements and failing to meet them.)
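
To show the contrast, here is a minimal sketch of what a defined value plus a
decision algorithm looks like. The definition of "honest" is a toy placeholder
of my own, purely to illustrate the form, not a serious moral theory.

def is_honest(statement, shared_facts):
    # Toy operational definition: a statement is honest iff it appears
    # in the agreed record of shared reality
    return statement in shared_facts

def meets_requirement(statement, shared_facts):
    # Total decision procedure: every statement is classified one way
    # or the other -- no undefined middle between the two states
    return "meets" if is_honest(statement, shared_facts) else "fails"

facts = {"the account balance is $100"}
print(meets_requirement("the account balance is $100", facts))        # meets
print(meets_requirement("the account balance is $1,000,000", facts))  # fails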

ChatGPT has failed to engage with the topics that human thinkers (in Moral Theory)
engage when they consider ME models. With reference to the Computer Science
definition of Artificial Intelligence (an algorithm that emulates complex human
problem-solving), ChatGPT has failed to engage with the really complicated problems
of creating an ME model. ChatGPT fails to qualify at doing complex human
problem-solving.

(By the way, ChatGPT seems not to know that financial advisors advise investing
for the long term, not for the end of 2025. Where is this advice in the answer of
ChatGPT? It does not seem to know WHEN ITS ANSWER DOES NOT REALLY
APPLY. It does not seem to be able to reason about which queries are relevant, or
irrelevant, to a time frame, or which word matches found are relevant to a time
frame. Would you EVER accept this deficiency in a financial advisor?)
 

Stephen3141

Well-Known Member
Where is it written in the Bible, "Take care what you listen to"?

24 He also told them, “Take care what you hear. The measure with which you measure will be measured out to you, and still more will be given to you. 25 To the one who has, more will be given; from the one who has not, even what he has will be taken away.”
New American Bible, Revised Edition. (Washington, DC: The United States Conference of Catholic Bishops, 2011), Mk 4:24–25.

18 Take care, then, how you hear. To anyone who has, more will be given, and from the one who has not, even what he seems to have will be taken away.”
New American Bible, Revised Edition. (Washington, DC: The United States Conference of Catholic Bishops, 2011), Lk 8:18.

"Hear" in the Greek New Testament, is not talking about sound conduction through
air, etc. It is talking about being (morally) carefully what you pay attention to, and
being careful to perceive what the speaker is really saying.
 

High Fidelity

Well-Known Member
Site Supporter
Should it be regulated? Absolutely.

That said, AI will likely be the next frontier that is used offensively by and against nations.

The scariest part of cyber warfare involving AI is the rapid iterations it can produce, which will only improve.
 

Stephen3141

Well-Known Member
High Fidelity said:
Should it be regulated? Absolutely.

That said, AI will likely be the next frontier that is used offensively by and against nations.

The scariest part of cyber warfare involving AI is the rapid iterations it can produce, which will only improve.

In the movie Oppenheimer (not for kids!) both Einstein and Robert
Oppenheimer struggle with the advances in physics that they oversaw,
which opened the door to the atomic bomb, then to a super atomic bomb
(the hydrogen bomb). And they were right to struggle morally over this.

Unfortunately, the big software companies that are developing AI products
are producing products that can be used for hundreds of purposes
for which it is IMPOSSIBLE to test them, for safety or for lawful use.

YET, the big software companies are not agonizing over the possible
destructive use of their AI products.

AI products, like malicious software packages, could be used to collapse
the lawful use of the Internet: they could be used to
massively manipulate social media users, and to spread conspiracy theories and
lies at lightning speed.

Yet, the billionaires who own these companies are not concerned about the
destructive power of the software they are developing. They are concerned
only with earning money, and with creating monopolies of social media sites, etc.

Legislators need to stop kissing up to the financial perks offered by billionaires,
and start legislating that the big software companies be held responsible for the
destructive use of their software. STOP ACTING as if the big software companies
did not realize what criminals would do with their AI products!!!!
 

DennisF

Active Member
High Fidelity said:
Should it be regulated? Absolutely.

That said, AI will likely be the next frontier that is used offensively by and against nations.

The scariest part of cyber warfare involving AI is the rapid iterations it can produce, which will only improve.
The more basic problem of government regulation is: Who shall guard the guardians?
 

Stephen3141

Well-Known Member
DennisF said:
The more basic problem of government regulation is: Who shall guard the guardians?

This is a type of useless, rhetorical question.

In a democracy like America, Congress debates, then passes laws.
It is not Congress that owns the big AI companies,
so the regulation is separate from the ownership concerns.
 