People use "AI" to describe almost anything, and those definitions rarely correlate with the Computer Science definition. The development of AI, and of AI algorithms, falls within the discipline of Computer Science.
According to Computer Science, Artificial Intelligence is "the emulation of
complex human problem-solving".
Note that emulation is not simulation. The neural nets that are so popular
were an attempted simulation of brain activity, and have turned out to be
so hopelessly simplistic, in contrast to the human brain, that they are very
weak at emulating complex human problem-solving.
"Complex" also needs some explaining. Complex involves high dimensionality,
problems that do not have clear rules to solve them, value judgments,
and creativity. Complex does not mean doing a massive number of calculations,
or calculations involving a few well-defined rules. In this sense, chess is not
a "complex" game. It has well-defined rules, and playing it well takes a lot
of brute processing power, and a lot of memory. But, it's not a particularly
"complex" problem.
Doing millions of mathematical calculations faster than a human being is
not a "complex" problem.
Automating a task that a human being formerly did is rarely an example
of artificial intelligence. It's just automating a task that a human being formerly
did. This is what human computer programmers do every day.
Note that, contrary to the expectation of many mathematical people, most of
the really difficult problems cannot be solved with a closed-form mathematical
formula. To roughly quote Einstein, "God gave all the easy problems to the physicists."
Really difficult problems often involve reasoning about concepts that cannot be
numerically quantified (such as values, virtues, and vices).
The big software companies HAVE NOT built moral-ethical models into their AI
software, because doing so would be very expensive, and would involve problems that have
not been algorithmically solved. AND, trying to do this would imply that these
companies are morally and ethically responsible for their products being used to
systematically destroy people's lives, or rob them blind. Emulating the
complex problem-solving behavior of human beings must also require moral-
ethical reasoning.
The definition of AI is not made statistically. Truth is not whatever opinion is
statistically the most common. Philosophy has always recognized this
(in the ad populum fallacy). And so, software that is a massive search engine
CANNOT claim to be emulating the best human thinking if all that it does is
spit out the Most Common Opinion.
If truth is not the Most Common Opinion on some matter, then you must solve the
problem of identifying who or what is the Authority to decide what is true, in any
area of knowledge. AND how many areas of knowledge there are. AND what
"logic" must be used to adjudicate this answer. Right now, the machine learning
algorithms are controlled by software written by some computer programmers.
HOW does anyone know that THEY are the Authority to decide what is true, in
most areas of knowledge?
The current AI tools can scan masses of text, but probably cannot put forward
a coherent philosophy of Epistemology, or a coherent algorithm to determine who
is an authority in any area of knowledge. The current AI tools seem to be able to
make up references to try to justify their conclusions. This is insane, and has
nothing to do with determining what is true.
The current AI tools, as far as I know, cannot reason about what "our shared reality" is.
The current AI tools, as far as I know, cannot reason about what the Bible projects
as our shared reality.
Without a very clear idea of what our shared reality is, you cannot define
what "bearing false witness" about our shared reality is. You cannot define
what lying is. (This is a common problem among those who live in some little
fake conspiracy-theory reality.)
The current AI tools cannot intelligently discuss what a God who is the best that
could be would be like. Or what a dysfunctional model of God would be.
The current AI tools could not coherently argue what the best society should be
like, or what a fair rule of law should be, or what justice is.
Note that the ancient philosophers could all argue coherently
about these topics, even if they did not always reach Christian conclusions.
Most of the AI algorithm designs cannot offer internal traceability for their
answers. Why anyone would believe those answers is beyond insane.
These are only the "beginning of woes" when considering that the current
AI tools do not really meet the stringent definition of "Artificial Intelligence".