U.S. Pushes for Less AI Regulation at Paris Summit
At the Paris AI Action Summit, safety concerns took a backseat to optimism.
FINALLY, the world is beginning to pay attention to the potential and the RISK
of AI tools.
"Although there were divisions between major nations—the U.S. and the U.K. did not sign a final statement endorsed by 60 nations calling for an “inclusive” and “open” AI sector—the focus of the two-day meeting was markedly different from the last such gathering. Last year, in Seoul, the emphasis was on defining red-lines for the AI industry. The concern: that the technology, although holding great promise, also had the potential for great harm.
But that was then. The final statement made no mention of significant AI risks nor attempts to mitigate them, while in a speech on Tuesday, U.S. Vice President J.D. Vance said: “I’m not here this morning to talk about AI safety, which was the title of the conference a couple of years ago. I’m here to talk about AI opportunity.”"
Note that AI technology SHOULD trigger the same discussion that was presented
in the movie "Oppenheimer", where Einstein and Oppenheimer agonize over the
power of the theories they had proven, and wonder what sort of world the
atomic bomb would lead to.
Note that there is no reason we cannot BOTH continue research in AI
algorithms, in a protected environment, AND work on controlling the power of
this software against misuse by rogue nations and transnational criminal
gangs. There is no logical reason to argue that these are XOR options, as the
little truth table below makes explicit.
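To spell out that last claim, here is a purely illustrative sketch (mine, not
from any source): two independent policy choices admit the "both" row, and
only mislabeling them as XOR rules it out.

```python
from itertools import product

# Two independent policy choices: continue AI research, control its misuse.
# XOR forbids doing both; the truth table shows "both" is a consistent option.
for research, control in product([False, True], repeat=2):
    both = research and control      # the combination the argument defends
    xor = research != control        # the false dilemma
    print(f"research={research!s:5}  control={control!s:5}  "
          f"BOTH={both!s:5}  XOR={xor}")

# The research=True, control=True row has BOTH=True and XOR=False:
# nothing in logic forces a choice between the two goals.
```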
---------- ----------
Some very basic observations need to be spoken again (and again)...
1 There is the assertion that the generative AI tools operate on the level
of a "competent graduate student". I would note that
a. a competent graduate student is able to show references for what he claims
are facts (which these tools regularly fail to do);
b. most competent graduate students in STEM HAVE NO IDEA WHAT MORAL-
ETHICAL (ME) MODELS are, as the hard sciences don't even have the vocabulary
to reason about ME right and ME wrong.
Many competent STEM graduate students, like some prominent politicians,
don't seem to recognize that ME right and ME wrong even exist, because they
cannot be described with mathematical models.
2 Look practically at how the "popular" model of knowledge works -- Wikipedia.
Writers of entries in this tool are rated (by Wiki algorithms) according to how
many entries they make, not according to their competence.
In many Wiki entries, the references are hopelessly recent, ignoring the older
literature on the topic. No competent graduate student would write entries
like this.
3 Graduate students are typically young, and work within a narrow discipline.
What they often miss is the bigger horizon of cross-discipline knowledge.
4 Increasingly, younger Americans are dysfunctional in their knowledge of
philosophical primitives and formal logic. To them (increasingly) a
proposition is TRUE or FALSE based on an Appeal to Authority, to someone
who ASSERTS a conclusion, not on a rigorous methodology of reasoning (see
the sketch just after this list).
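To make the contrast in point 4 concrete, a purely illustrative sketch:
"rigorous methodology" means testing an argument form over every truth
assignment, something an Appeal to Authority never does.

```python
from itertools import product

def valid(premises, conclusion):
    """An argument form is valid iff every truth assignment that makes
    all the premises true also makes the conclusion true."""
    for p, q in product([False, True], repeat=2):
        if all(f(p, q) for f in premises) and not conclusion(p, q):
            return False  # counterexample: premises hold, conclusion fails
    return True

implies = lambda a, b: (not a) or b

# Modus ponens: from (P -> Q) and P, conclude Q.  Rigorously valid.
print(valid([lambda p, q: implies(p, q), lambda p, q: p],
            lambda p, q: q))                      # True

# Appeal to Authority, caricatured: from "someone asserts P",
# conclude Q.  Nothing in the premise forces the conclusion.
print(valid([lambda p, q: p], lambda p, q: q))    # False
```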
---------- ----------
I would repeat AGAIN...
IF the generative AI tools can emulate complex human problem solving,
THEN they ought to be easily employable to do FACT-CHECKING on
all the major social media sites. Doing fact checking is not an
infringement on free speech. But it is a check on BullSpeak. And America
greatly needs this check.
Those who throw out fact checking on their social media sites, or try to
equate fact checking with a limitation of American citizens' rights, are the
billionaires who want to use social media sites as propaganda outlets.
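For concreteness, here is a minimal sketch of what such a fact-checking
pipeline might look like. Every function in it (extract_claims,
retrieve_evidence, judge_claim) is a hypothetical placeholder of my own
naming, not an existing API; the toy bodies only mark where real components
would go.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    claim: str
    supported: bool
    sources: list[str]  # the references a "competent graduate student" shows

def extract_claims(post_text: str) -> list[str]:
    # Hypothetical placeholder: a real system would use an NLP model here.
    return [s.strip() for s in post_text.split(".") if s.strip()]

def retrieve_evidence(claim: str) -> list[str]:
    # Hypothetical placeholder: a real system would query reference corpora.
    return []

def judge_claim(claim: str, evidence: list[str]) -> bool:
    # Hypothetical placeholder: a real system would ask a generative model
    # whether the retrieved evidence supports the claim.
    return bool(evidence)

def fact_check(post_text: str) -> list[Verdict]:
    verdicts = []
    for claim in extract_claims(post_text):
        evidence = retrieve_evidence(claim)
        verdicts.append(Verdict(claim, judge_claim(claim, evidence), evidence))
    return verdicts
```

Note the design choice: the pipeline attaches verdicts and citations to a
post rather than deleting it, which is why fact checking is a check on
BullSpeak and not a restriction on speech.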
---------- ----------
Another point about the current AI tools needs to be made...
*** The "machine learning" tools, are (in general) the weakest form of
AI algorithms. That means, the artificial neural nets, that have to be trained
on a specific body of material, in order to put out some answer.
These weak algorithms often use BRUTE FORCE computing, which is why the
billionaires who are developping them, also are interested in owning massive
computing farms, and independent sources of electricity generation.
IF they hook you on this brute force approach, they will create a computational
MONOPOLY in computing farms, and still the answers that the AI tools are fed
from these computational farms, will be unregulated.
Not only will the "computing farm" approach lock Americans into the weakest
form of AI algorithms (this is bad for the development of AI technology), it
will also create a financial monopoly for those who own the computing farms.
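To put a rough number on the BRUTE FORCE point: a common rule of thumb from
the scaling-law literature is that training a dense neural net costs about
6 FLOPs per parameter per training token. The model sizes below are
illustrative assumptions, not any vendor's figures, but the arithmetic shows
why the computing farms exist.

```python
# Back-of-envelope training cost, using the common ~6 * params * tokens
# FLOPs rule of thumb for dense neural nets. The sizes are illustrative.
def training_flops(params: float, tokens: float) -> float:
    return 6.0 * params * tokens

for name, params, tokens in [
    ("small net:  10M params,  1B tokens", 10e6, 1e9),
    ("large LLM: 100B params, 10T tokens", 100e9, 10e12),
]:
    flops = training_flops(params, tokens)
    # Time on a single accelerator sustaining 1 PFLOP/s:
    days = flops / 1e15 / 86_400
    print(f"{name} -> {flops:.1e} FLOPs, about {days:.3g} accelerator-days")
```

The second line works out to roughly 70,000 accelerator-days (about 190
accelerator-years), which is exactly the scale at which ownership of the
farms, and of the electricity behind them, becomes a choke point.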
American researchers need to be focusing on the much more powerful
categorical reasoning algorithms in AI, although they involve MUCH MORE
development work. This, the billionaires don't want to do.
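Reading "categorical reasoning" as symbolic, rule-based inference (my
interpretation; the rules and facts below are invented for illustration),
here is a minimal forward-chaining sketch. Unlike a trained neural net, it
needs no training corpus and no compute farm, and every conclusion traces
back to an explicit rule.

```python
# Minimal forward-chaining inference, one reading of "categorical
# reasoning" as rule-based symbolic AI. Rules and facts are invented.
rules = [
    ({"socrates_is_a_man"}, "socrates_is_mortal"),
    ({"socrates_is_mortal", "mortals_die"}, "socrates_dies"),
]
facts = {"socrates_is_a_man", "mortals_die"}

changed = True
while changed:  # apply rules until no new fact can be derived (fixed point)
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))
# Each derived fact is auditable: you can point at the exact rule and
# premises that produced it, which brute-force trained nets cannot offer.
```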