You Don’t Need Words to Think
Brain studies show that language is not essential for the cognitive processes that underlie thought
Whether language is needed for human thought processes is an important question.
But I'm not sure that this article gets to the core of that question.
The author does not seem to differentiate between the surface encoding of human language, such as text, and the MEANING that our minds associate with those notations. While software with some knowledge of a human language (and its grammar) may seem to "know" human language, I doubt that it has the sophisticated sort of "thinking" ability, using this encoding, that educated human beings have.
In other words, while grammar can tell a machine how to organize words, grammar has nothing to do with the human ability to do complex problem solving using the interface of language. There is still a leap in power between being able to produce grammatically correct patterns of human-language words and being able to use problem-solving methods (or algorithms) to reason about a problem.
Children can (amazingly) learn the grammar of human speech, and so learn human language. But they can put out grammatically correct sentences BEFORE they know what the words MEAN.
Then children have to learn a deeper method of reasoning, as they reason about the meanings of multiple sentences. I doubt that most of the "AI" software applications you will see are able to reason about the meanings of multiple sentences in the amazingly complex ways in which human beings (some human beings) can reason about concepts.
---------- ----------
There are different levels of human reasoning. Or, you could say, there are different types of building blocks.
We have words, which bear meanings.
We have phrases that can associate characteristics with one or more things.
We have "rules" that can express relationships (such as causality).
We have "categories", that can express (somewhat) what is LIKE something else, or what is NOT LIKE something else.
We have basic syllogisms, such as the ancient Aristotelian patterns of statements, or the unlimited modern symbolic logic. (A small sketch of this level appears at the end of this section.)
But we have that really difficult-to-define ability (if we have developed it) that brings to our consciousness whether a concept/definition, rule base, or syllogism is actually RELEVANT to solving a specific problem, or whether it is just conspiracy-theory slogans that do not quite connect with our shared reality.
And, if we are not a politician, we have a growing ability to detect the characteristics of our shared reality, and to tell when some proposition or system of thought matches our shared reality, or when it is "bearing false witness" (aka lying) about our shared reality. Try building THAT into a modern AI application.
Then, civilized human beings have a functional moral-ethical model, which can instruct us in what we morally-ethically OUGHT to do, or OUGHT NOT to do. This is an added layer of logic that should be working when we face a decision with multiple outcomes. (Such as: do I buy a new jacked-up pickup truck, or buy some good foundational books, start a personal library, and read?) Kant would say that we have a duty to perfect ourselves. Trump thinks that getting richer and more powerful, and looking like a really flashy negotiator, is more important than upholding the fair rule of law and defending human rights.
Without a functional moral-ethical model, humans cannot make moral-ethical decisions about what we should value, what our goals should be, and how we should spend the resources that we have. Without a moral-ethical model we cannot, for example, value keeping to a fair rule of law, due process in a court, and respect for and protection of the concepts in the U.S. Constitution and Bill of Rights. Without a moral-ethical model, we cannot rightfully claim that we are not lawless.
Note that the hard sciences do not even have the vocabulary to express morality-ethics.
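To make the lower levels above concrete (facts, rules, a syllogism), here is a minimal sketch, in Python, of reasoning that operates on meanings rather than on word patterns. It is my own illustration, not anything from the article, and the fact format and predicate names ("is_a", "mortal") are invented for the example:

# A minimal illustrative sketch: one Aristotelian syllogism encoded as a rule
# applied to structured facts, rather than as a pattern of words.

facts = {("Socrates", "is_a", "man")}

# Rule: "All men are mortal" -- if (?x, is_a, man), then (?x, is_a, mortal).
rules = [
    (("is_a", "man"), ("is_a", "mortal")),
]

def forward_chain(facts, rules):
    """Keep applying rules to known facts until nothing new can be concluded."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for (premise_rel, premise_obj), (concl_rel, concl_obj) in rules:
            for (subject, rel, obj) in list(known):
                if rel == premise_rel and obj == premise_obj:
                    conclusion = (subject, concl_rel, concl_obj)
                    if conclusion not in known:
                        known.add(conclusion)
                        changed = True
    return known

print(forward_chain(facts, rules))
# -> {('Socrates', 'is_a', 'man'), ('Socrates', 'is_a', 'mortal')}

The point of the sketch is not the code, but the contrast: the program concludes that Socrates is mortal by applying a rule to structured facts, not by noticing that the sentence "Socrates is mortal" happens to be grammatically well formed.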
---------- ----------
Note that many of the points brought up here have never been substantially built into the "AI" software that is flooding the market.
According to the inventor/definer of the term Artificial Intelligence in Computer Science, AI is the emulation of complex human problem solving. And reasoning about morality-ethics involves some of the most complex problem solving that human beings do. By this definition, the modern AI software is really not living up to its label. Having no functional moral-ethical model, this new AI software is the perfect automated tool for criminal gangs to terrorize the Internet, since there are no moral-ethical constraints built into the products.
---------- ----------
Articles like this point out something pretty obvious -- that there are deeper and hidden methods of reasoning than surface language -- but they do not get at the other complex abstract concepts that human beings use to reason.