
DNA to AI: How Evolution Shapes Smarter Algorithms - Neuroscience News
A new AI algorithm inspired by the genome’s ability to compress vast information offers insights into brain function and potential tech applications.

As many of the core Intelligent Design authors have pointed out, examples in nature that are claimed to demonstrate the intelligence of nature often have the intelligence "front-loaded" into the example. They are not really examples of nature producing complex specified information.
This article is a confused ramble through ideas that are really not carefully thought out from an algorithmic point of view.
The article is about the increasing ability of human designers to create more compressed versions of databases. The assertion is that this will allow "AI" features on devices with limited storage, instead of requiring AI products to run on giant computer farms.
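The article does not spell out an algorithm, so here is a minimal sketch of what "compressing a trained model" can look like, using ordinary 8-bit weight quantization; the weight matrix and the quantization scheme are my own assumptions, not the genome-inspired method the article describes. The point of the sketch is that compression only shrinks something that was already trained; it adds no problem-solving ability of its own.

```python
# Minimal sketch: post-training weight quantization (an assumed, generic
# scheme -- NOT the genome-inspired method from the article).
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for the weights of an already-trained network.
weights = rng.normal(size=(1000, 1000)).astype(np.float32)

# Quantize float32 weights to int8: roughly 4x less storage.
scale = np.abs(weights).max() / 127.0
quantized = np.round(weights / scale).astype(np.int8)

# Dequantize at query time to approximate the original weights.
restored = quantized.astype(np.float32) * scale

print("storage: %.1f MB -> %.1f MB" % (weights.nbytes / 1e6, quantized.nbytes / 1e6))
print("max approximation error: %.4f" % np.abs(weights - restored).max())
```

The compressed copy answers (almost) like the original precisely because everything it "knows" was put there earlier, by training on data that humans had already selected and labeled.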
There are a number of disconnects, or sleights of hand, in the article...
--- The amount of data scanned by an algorithm does not determine the
"intelligence" of the algorithm, yet this is what the article suggests.
--- The data scanned by "machine learning" AI algorithms must already
be vetted by a human expert. This intelligence is front-loaded. A human
expert must identify what data is relevant to solving the problem, and
human experts must front-load what an answer to the problem must look
like (this is the core of machine learning algorithms that "learn" from
"truthed", human-labeled data; see the sketch after this list).
--- There is also the tacit problem of a human expert identifying the "authority"
of certain types of inputs. Without this front-loading of intelligence, an
algorithm could end up searching Tucker Carlson's conspiracy theories for an
"authoritative" answer to a query, or searching a propaganda outlet such as
Russia Today.
--- The point of the article is that algorithms could search highly compressed
databases and get (almost) the same answers as "fully trained AI networks".
Note that the intelligence in this sort of algorithm has already been front-
loaded into those "fully trained AI networks". Some human being has already
vetted the database as relevant to some problem, and has already identified
the type of answer being sought (which is not really what intelligent
human researchers do: they are free to ask what category of answer
might work; they do not assume they already know).
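To make the "truthed data" point above concrete, here is a minimal sketch of supervised learning on a toy spam-filtering task; the task, the example texts, and the labels are my own inventions (the article gives no such example). Every place a human decision enters is marked FRONT-LOADED.

```python
# Minimal sketch of "truthed" (supervised) learning on an invented toy task.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# FRONT-LOADED: a human decided these examples are RELEVANT to the problem.
texts = ["win a free prize now", "meeting moved to 3pm",
         "claim your reward today", "lunch tomorrow?"]

# FRONT-LOADED: a human decided what a correct ANSWER looks like.
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

vectorizer = CountVectorizer()
features = vectorizer.fit_transform(texts)

# The "learning" step only interpolates between the human-supplied answers.
model = LogisticRegression().fit(features, labels)

print(model.predict(vectorizer.transform(["free reward now"])))  # should print [1], i.e. spam
```

Nothing in this loop decides what counts as spam; that judgment was made by whoever wrote the labels.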
------------ ----------
Basically, articles like this ignore the long history of Computer Science
in defining what AI is ("the emulation of complex, human problem-solving"),
bypass the algorithm design for these emulations, and focus on
database compression, which has nothing to do with designing really
intelligent algorithms that emulate complex human problem-solving.
Articles like this also ignore the fact that a machine learning algorithm would
need thousands of these compressed databases in order to answer thousands of
interesting queries, and each of those databases must have human intelligence
front-loaded into it, in the form of "relevant" and "authoritative" examples of solutions.
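What "relevant" and "authoritative" front-loading looks like in practice can be sketched very simply: before any algorithm runs, a human-curated allow-list has already decided which sources may be consulted at all. The source names, documents, and the retrieve helper below are hypothetical illustrations, not anything taken from the article.

```python
# Minimal sketch: "authority" is front-loaded as a human-curated allow-list.
# The source names and documents are invented for illustration.
TRUSTED_SOURCES = {"peer_reviewed_journal", "government_statistics"}  # human judgment, not learned

documents = [
    {"source": "peer_reviewed_journal", "text": "measured effect size of 0.3"},
    {"source": "conspiracy_blog",       "text": "the answer they are hiding from you"},
]

def retrieve(docs):
    """Return only documents whose source a human has already vetted."""
    return [d for d in docs if d["source"] in TRUSTED_SOURCES]

for doc in retrieve(documents):
    print(doc["source"], "->", doc["text"])
```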
This endless front-loading of intelligence has nothing to do with the Computer
Science definition of artificial intelligence. But articles like this are used to snow
consumers into thinking that algorithms which emulate complex human
problem-solving will be within reach if we can just build more efficient
compressed databases.