
An Example of an "AI Mud" Article

Stephen3141

Well-Known Member

As many of the core Intelligent Design authors have pointed out, examples in
nature that are claimed to demonstrate the intelligence of nature often have the
intelligence "front-loaded" into the example. They are not really examples of
nature being able to produce complex specified information.

This article is a confused ramble through ideas that are REALLY NOT CAREFULLY
THOUGHT OUT from an algorithmic point of view.

The article is about the increasing ability of human designers to create more
compressed versions of databases, and the assertion that this will allow "AI"
features on devices that have a limited ability to store data (instead of
requiring AI products to run on giant computer farms).
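
To be concrete about what this sort of "compression" amounts to, here is a minimal
sketch of my own (it is not taken from the article, and the sizes are arbitrary) of
post-training weight quantization with NumPy. The compression step only shrinks
storage; whatever the model "knows" was fixed earlier, when the full-precision
weights were produced by training on human-curated data.

import numpy as np

# Stand-in for weights that were already produced by training on human-vetted data.
rng = np.random.default_rng(0)
weights_fp32 = rng.normal(size=(256, 256)).astype(np.float32)

# Symmetric 8-bit quantization: keep one scale factor plus int8 values.
scale = np.abs(weights_fp32).max() / 127.0
weights_int8 = np.round(weights_fp32 / scale).astype(np.int8)

print("fp32 bytes:", weights_fp32.nbytes)   # 262144
print("int8 bytes:", weights_int8.nbytes)   # 65536 (about 4x smaller)
print("max reconstruction error:",
      float(np.abs(weights_int8.astype(np.float32) * scale - weights_fp32).max()))

Nothing in that step creates new problem-solving ability; it only repackages what
the earlier (human-directed) training already put there.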

There are a number of disconnects, or sleights of hand, in the article...

--- The amount of data scanned by an algorithm does not determine the
"intelligence" of the algorithm, although this is what the article suggests.

--- The data scanned by the "machine learning" AI algorithms must already
be vetted by a human expert. This intelligence is front-loaded. A human
expert must identify what data is RELEVANT to solving the problem. And
human experts must front-load what an ANSWER to the problem must look
like (this is the core of machine learning algorithms that use "truthed"
data to "learn"; see the toy sketch after this list).

--- There is the tacit problem of a human expert identifying the "authority"
of certain types of inputs. Without this front-loading of intelligence, an
algorithm could be searching Tucker Carlson's conspiracy theories in order to
find an "authoritative" answer to a query. Or the algorithm could be searching
a propaganda news source, such as Russia Today.

--- The point of the article is that algorithms could search highly compressed
databases in order to get (almost) the same answers as "fully trained AI
networks". Note that the intelligence in this sort of algorithm has been front-
loaded into the "fully trained AI networks". Some human being has already
vetted the database as relevant to some problem, and has already identified
the type of answer that is being sought (this is not really what intelligent
human researchers do: they are free to ask what sort of category of answer
may work -- they do not assume that they know this already).
------------ ----------
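
To make the "truthed data" point above concrete, here is a toy sketch of my own
(not something from the article; the feature names and numbers are made up) of a
1-nearest-neighbor classifier. Everything "intelligent" that it hands back -- which
features matter, and what the right answers look like -- was front-loaded by the
human who built the little table.

import numpy as np

# Human-chosen features (hours of study, hours of sleep) and human-supplied
# labels ("pass"/"fail"). The algorithm never questions either choice.
features = np.array([[8.0, 7.0], [1.0, 4.0], [6.0, 8.0], [2.0, 3.0]])
labels = np.array(["pass", "fail", "pass", "fail"])

def predict(query):
    # "Learning" here is just copying the label of the closest human-labeled row.
    distances = np.linalg.norm(features - np.asarray(query), axis=1)
    return labels[np.argmin(distances)]

print(predict([7.0, 6.5]))   # -> pass
print(predict([1.5, 5.0]))   # -> fail

Change the human-supplied labels and the "answers" change with them; the algorithm
contributes no judgment of its own.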

Basically, articles like this ignore the long history of Computer Science
in defining what AI is ("the emulation of complex, human problem-solving"),
bypass the ALGORITHM DESIGN for these emulations, and focus on
database compression (which has NOTHING to do with the design of really
intelligent algorithms that emulate complex human problem-solving).

Articles like this ignore the requirement that you would need thousands of these
compressed databases, for a machine learning algorithm to solve thousands of
interesting queries. And EACH of these databases must have human intelligence
front-loaded with "relevant" and "authoritative" examples of solutions.

This endless front-loading of intelligence has nothing to do with the Computer
Science definition of artificial intelligence. But articles like this are used to snow
consumers into thinking that algorithms that emulate complex human
problem-solving will be reachable if we can just create more efficiently
compressed databases.
 

mindlight

See in the dark
Site Supporter
Yes, draw a target around the results and you look right every time. They use such examples to demonstrate the intelligence of AI and make us wonder why we bother to think for ourselves at all. But AI could not survive a day outside in the real world: it would be run over by traffic, fall from high places, eat the wrong food, and offend the wrong people, even before the server farm blew up trying to duplicate the human brain.
 

Stephen3141

Well-Known Member

GM should have consulted credible Computer Science algorithm people
on the computationally intractable problems involved in Artificial Intelligence
algorithm design.

Human free will is not a simple deterministic system. Unless the U.S.
radically changes its approach to who can drive a car, eliminating
human drivers altogether (which would make AI driving modules possible), there will
probably NEVER be ANY AI software that can PREDICT what human drivers
are going to do on the road.

This stupid, uninformed charge by software companies to produce AI
software packages that can jump in and provide automated services
within an environment that still includes real human decisions is
an ignorant (and costly) MISTAKE.

Being gung-ho about technology DOES NOT SOLVE THE COMPUTATIONALLY
INTRACTABLE nature of many "AI" problems. Throwing billions of dollars,
ignorantly, at automated algorithms that supposedly will solve what Computer
Science has proven to be computationally intractable problems is just
ignorant. GM, and Elon Musk, and every other ignorant gung-ho actor
had better get in touch with serious algorithm design in Computer Science.
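
As a back-of-the-envelope illustration of the scale involved (my own toy numbers,
not a formal intractability proof), consider how fast exhaustive prediction blows
up: if each nearby driver can take one of k plausible maneuvers over the next few
seconds, the number of joint scenarios to check is k to the power n.

def joint_scenarios(drivers: int, maneuvers_per_driver: int = 6) -> int:
    # Exhaustive enumeration of joint behavior: k ** n combinations.
    return maneuvers_per_driver ** drivers

for n in (2, 5, 10, 20):
    print(f"{n:2d} drivers -> {joint_scenarios(n):,} scenarios")

# 20 drivers already gives about 3.6 * 10**15 joint scenarios -- far more than
# any on-board system could enumerate in real time.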
 

Jipsah

Blood Drinker
Yes, draw a target around the results and you look right every time. They use such examples to demonstrate the intelligence of AI and make us wonder why we bother to think for ourselves at all. But AI could not survive a day outside in the real world: it would be run over by traffic, fall from high places, eat the wrong food, and offend the wrong people, even before the server farm blew up trying to duplicate
Thank you!
 