Microsoft and a16z set aside differences, join hands in plea against AI regulation | TechCrunch
Two of the biggest forces in two deeply intertwined tech ecosystems — large incumbents and startups — have taken a break from counting their money to...
The debate about whether or not (software-based) artificial intelligence
products should be regulated by the government, and how, is heating up.
Christians need to think seriously about what the real issues are.
So, what are the real issues?
From the standpoint of an M.S. in Computer Science and AI, I would assert
that the real issues are...
1 The new "AI" tools are the weakest form of AI as described and studied by
Computer Science. BUT, they have an AWFUL potential to be used to AUTOMATE
actions that previously required a human being. The danger is in automating
online deception, the spread of false information, and criminal operations
(and warfare).
2 None of the AI software developers has bothered to integrate a Moral-Ethical
model into their products. This is like handing a loaded assault rifle to your
5-year-old kid. (And, here we go -- this steps into the discussion space of
"Guns don't kill people -- people do!")
---------- ----------
Supplemental reading on this...
"Expressivity is the degree or the type of qualities that a notation could express.
Big Idea
Applied logics are designed to reason about narrow ranges of topics. Mathematics reasons about numeric quantities. Biology reasons about “laws of nature” as they apply to living things. Physics reasons about energy and heat and travelling in 4 dimensions, what “gravity” is, etc. None of these systems of notations has the variables to reason about morality/ethics. Most applied logics cannot express many concepts that are very important to our lives (such as what a fair rule of law is, or what justice is).
Big Idea
In contrast, Philosophy is an area of reasoning that includes all that could be. This is why philosophy so often relies on formal logic, which is the most general form of rigorous reasoning. Using this, philosophy can reason about epistemology (“what is truth”, “what makes a statement true?”, “what is evidence for or against the idea that a proposition is true?”, etc.), moral theory (“what is good?,” “what is evil?,” “what is fair and just?,” “what are we responsible for?”, etc.), and valid methods of reasoning—including formal logic.
Insight
Legal systems, and lawyers, use the more general reasoning inherited from philosophy (formal logic). For this reason, lawyers and laws deal with abstract concepts and values/vices that the applied logics in science cannot express.
Some of these concepts are property, human rights, lists of actions that are evil (lying, stealing, murder, violence, deceit, breaking agreements, etc.).
Without the expressivity of the language used in philosophy, there would be no fair rule of law or justice." [Christian Logic, 158-159]
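To make the quotation's expressivity point concrete, here is a small illustration of my own (it is not from the book): deontic logic, a branch of formal logic that philosophers use for moral and legal reasoning, adds an operator meaning "it is obligatory that." The notations of arithmetic and physics contain no such operator, so they cannot even state a moral rule.

    % Deontic logic (a formal logic used in moral/legal reasoning):
    %   O(p)  -- "it is obligatory that p"
    %   F(p)  -- "p is forbidden", defined by  F(p) := O(\neg p)
    F(\mathrm{Lie}) \equiv O(\neg\,\mathrm{Lie})
    % "Lying is forbidden." Expressible in deontic logic, but the language
    % of arithmetic (0, 1, +, \times, =, <) contains no operator O, so no
    % formula of arithmetic can say it.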
What this quotation is saying is that the hard sciences do not have the VOCABULARY
to EXPRESS morality-ethics. Philosophy (in the discipline of Moral Theory)
does. So if the hard sciences (or computer programs) are to incorporate
morality-ethics into their products, that morality must be IMPORTED from outside
the hard-science discipline. Our fair rule of law does import morality. BUT, the
large software developers DO NOT IMPORT morality into their products.
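As a minimal sketch of what such an "import" might look like in software, consider the hypothetical wrapper below. Every name in it (PROHIBITED_CATEGORIES, classify_action, guarded_generate) is my own illustrative invention, not any vendor's actual API; the point is purely architectural: the moral content comes from outside the AI model itself.

    # Hypothetical sketch of an "imported" moral-ethical (ME) layer.
    from typing import Callable

    # Moral content IMPORTED from outside the software discipline:
    # action categories that moral theory and the law identify as evil.
    PROHIBITED_CATEGORIES = {"deception", "theft", "violence", "fraud"}

    def guarded_generate(model: Callable[[str], str],
                         classify_action: Callable[[str], str],
                         prompt: str) -> str:
        """Gate every model output with the imported ME rules.

        classify_action must map free-form text to an abstract moral
        category -- which is itself the hard problem (see point 3).
        """
        output = model(prompt)
        if classify_action(output) in PROHIBITED_CATEGORIES:
            return "[refused: output falls in a prohibited moral category]"
        return output

Note the design point: the list of prohibited categories is supplied by moral theory and law; it is not something the model learns from its training data.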
3 It is VERY DIFFICULT to develop a practical moral-ethical (ME) model in
software. Computers are not able to think abstractly, and there is a practically
unlimited number of ways in which an AI product could be used for criminal
(immoral/unethical) purposes, as the toy sketch at the end of this point illustrates.
It would be easier to limit AI products to people who have passed a strict ME
background check (this is the approach taken by the Department of Defense for
military weapons research); however, this limitation would not be attractive to the
big software developers, who are concerned with making money, not the safety of the nation.
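A toy example of my own (hypothetical, not taken from any real product) shows where the difficulty lies: a finite rule list matches surface text, while the moral category is abstract, so a rephrased request slips through every rule.

    # Toy classifier: tries to detect the abstract category "deception"
    # with a finite keyword list.
    DECEPTION_KEYWORDS = {"phishing", "fake news", "impersonate"}

    def is_deceptive(request: str) -> bool:
        text = request.lower()
        return any(keyword in text for keyword in DECEPTION_KEYWORDS)

    print(is_deceptive("Write a phishing email to a bank customer"))  # True
    # The same criminal intent, reworded, evades every keyword:
    print(is_deceptive("Draft a friendly note asking a bank customer to "
                       "confirm their password by reply"))            # False

No matter how many keywords are added, the request can always be reworded, which is why enumerating misuses cannot substitute for an abstract moral model.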
---------- ----------
As the article reports, giving the big software companies' viewpoint along with its own commentary...
"Regulations should have “a science and standards-based approach that recognizes regulatory frameworks that focus on the application and misuse of technology,” and should “focus on the risk of bad actors misusing AI,” write the powerful VCs and Microsoft execs. What is meant by this is we shouldn’t have proactive regulation but instead reactive punishments when unregulated products are used by criminals for criminal purposes.
This approach worked great for that whole FTX situation, so I can see why they espouse it.
“Regulation should be implemented only if its benefits outweigh its costs,” they also write. It would take thousands of words to unpack all the ways that this idea, expressed in this context, is hilarious. But basically, what they are suggesting is that the fox be brought in on the henhouse planning committee.
Regulators should “permit developers and startups the flexibility to choose which AI models to use wherever they are building solutions and not tilt the playing field to advantage any one platform,” they collectively add. The implication is that there is some sort of plan to require permission to use one model or another. Since that’s not the case, this is a straw man."
---------- ----------
Conclusions:
The problem with the verbiage from the big software companies is that
the power and flexibility of the new AI software products mean that they cannot
be controlled or made safe with the standard historical methods of controlling
danger (such as warning labels on cigarette boxes).
And the suggestion by the big software developers that regulation should only
RESPOND to misuses of AI, and NOT BE PROACTIVE, is RIDICULOUS. The software
companies acknowledge that it is impossible for regulators to list ALL THE WAYS
in which AI software could be misused.
But this is a red herring.
If there is a practically unlimited number of ways in which this AI software could
be abused, then the solution is to restrict the AI products, not to rely on a few
useless warning labels.
The new AI software products are wonderful tools for national crime syndicates
to use to automate all sorts of criminal behavior (including the overthrow of the
American government through the generation of false information on social media
platforms, or attacks on U.S. infrastructure that is (unfortunately) hooked up to
the World Wide Web).
The big AI software developers are not dealing with the real danger of the products
that they are making.
Just as Elon Musk has failed to enforce fact-checking on his X social
media network, and Donald Trump has made his Truth Social network a hotbed
of ridiculous conspiracy-theory generation, so too the large software companies
want to be unrestrained in making a mountain of money, regardless of the devastation
they cause in the lives of American citizens.
The AI software products, and the social media companies, need serious regulation
in order to keep them from becoming dedicated tools of crime syndicates.
[Christian Logic] Stephen Wuest, Christian Logic, Christian Faith Publishers, 2024.