AI Has No Soul — and Never Will
COMMENTARY: There is an insurmountable difference between man and machine.
AI raises an endless stream of questions: Will it take our jobs? Cure diseases? Destabilize governments? Make us rich — or make us targets for hackers and terrorists?
Beneath all these loud debates sits a quieter and far more basic one: Can AI truly think? Perhaps it can “think” or “perceive” in some metaphorical sense, just as a spell-checker “thinks” you misspelled a word even when you didn’t. But does it possess real understanding — an interior awareness of itself and the world, like the awareness we find in ourselves? Or is it simply a well-designed circuit shuttling electrical signals about, generating the appearance of perception, reasoning, and self-expression, yet wholly unaware of anything it does?
Current large language models — LLMs such as ChatGPT — produce text by computing the probabilities of various next tokens (small units of text) given the preceding context, using neural-network parameters that encode statistical patterns learned from their training data. But predicting the next token from patterns among words is not the same thing as grasping the meaning of those words. Impressive as they are, LLMs plainly understand nothing, given how they operate.
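The next-token mechanism described above can be sketched with a toy word-level model. This is only an illustration of the statistical principle, not how an actual LLM is built: real models use neural networks over subword tokens and billions of parameters, whereas this sketch simply counts which word follows which in a tiny made-up corpus.

```python
from collections import Counter, defaultdict

# Tiny hypothetical corpus, standing in for the vast training data of an LLM.
corpus = "the cat sat on the mat and the cat slept".split()

# Count how often each word follows each preceding word (bigram statistics).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_token_probs(context_word):
    """Return P(next token | context word) from the counted statistics."""
    counts = following[context_word]
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

# After "the", the model assigns "cat" probability 2/3 and "mat" 1/3,
# because "cat" followed "the" twice in the corpus and "mat" once.
probs = next_token_probs("the")
```

The point of the sketch is the one the article makes: the program outputs whichever continuation the statistics favor, with no grasp of what a cat or a mat is.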
Continued at www.ncregister.com.