
Can AI possess intuition?

sjastro

Newbie
May 14, 2014
5,747
4,677
✟348,144.00
Faith
Christian
Marital Status
Single
Hmm .. I reckon I might have some of my original punch-cards which were sucked into an IBM 360/370 around that same timeframe somewhere amongst my cardboard box archives! From my dim recollections, I don't think it was assembler though ..
Seems very careless of the original programmers to lose it!?
The proof of the four colour theorem required the paper to be peer reviewed, but in 1976 mathematicians saw the computer program much like a calculator: a computational tool that was assumed to give the right results.
However, the proof of the four colour theorem underwent an overhaul starting in the early 2000s.

Certainly: the formalization of the Four Color Theorem (4CT) in the Coq proof assistant, completed by Georges Gonthier in the mid-2000s, is a landmark in the development of formal mathematics. It addressed longstanding concerns about the original 1976 computer-assisted proof by providing a fully machine-checked, formal verification of every step, including the logic, the algorithms, and the computation.

Let’s break this down thoroughly:



Background: The Problem with the 1976 Proof


The original proof by Appel and Haken (1976) showed that:

Every planar map can be colored with no more than four colors such that no two adjacent regions share a color.
Their approach:
  • Reduced the infinite problem to checking 1,936 "unavoidable" configurations.
  • Used a computer to verify that each configuration is reducible (i.e., can be handled with 4 colors).
  • This required:
    • A program to perform reductions.
    • A large computation to verify all cases.
The issue was:
  • No one could feasibly check the machine code by hand.
  • This challenged mathematical norms: if a proof requires blind trust in a program, is it still a proof?
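To make the theorem's statement concrete, here is a minimal, hypothetical Python sketch (not part of any of the proofs discussed; the map, names, and values are invented for illustration). It checks that a candidate coloring of a toy map uses at most four colors and never gives adjacent regions the same color.

```python
# Adjacency of five regions of a toy planar map, given as an edge list:
# each pair names two regions that share a border.
edges = [("A", "B"), ("A", "C"), ("B", "C"), ("B", "D"), ("C", "D"), ("D", "E")]

# A candidate coloring: each region is assigned one of the colors 1..4.
coloring = {"A": 1, "B": 2, "C": 3, "D": 1, "E": 2}

def is_proper_four_coloring(edges, coloring):
    """True iff at most 4 colors are used and no two adjacent
    regions (i.e. no edge) share the same color."""
    uses_at_most_four = len(set(coloring.values())) <= 4
    no_clash = all(coloring[u] != coloring[v] for u, v in edges)
    return uses_at_most_four and no_clash

print(is_proper_four_coloring(edges, coloring))  # True
```

Verifying one given coloring like this is trivial; the hard part of the theorem is showing that such a coloring exists for *every* planar map, which is what the configuration-checking below addresses.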


Enter the Coq Proof Assistant

Coq is a formal proof assistant. It lets mathematicians:
  • Define mathematical objects rigorously.
  • Write logical proofs in a structured, formal language.
  • Check the proofs mechanically, ensuring that every inference step is valid.
This eliminates ambiguity, oversight, or informal reasoning.

Georges Gonthier’s Formalization Project


Georges Gonthier, a computer scientist at Microsoft Research, led the effort to fully formalize the Four Color Theorem in Coq.

This work began in the early 2000s and culminated in a complete formal proof announced in 2005, with Gonthier's widely read account, "Formal Proof—The Four-Color Theorem," published in the Notices of the AMS in 2008.

Goals:

  • Avoid reliance on trust in programs or authors.
  • Convert the entire argument — human and machine — into verifiable, formal mathematics.

How it was done:


1. Formalization of Graph Theory

  • They used combinatorial representations of planar maps as graphs.
  • Defined what it means for a graph to be planar, colored, reducible, and so on.
  • Built a robust formal library of graph theory within Coq.

2. Formal Specification of the Algorithm

  • The core of the original proof involves checking reducibility for thousands of configurations.
  • These were implemented as pure functional programs inside Coq.
  • Every program and data structure used was defined in Coq’s logic, so the correctness of the algorithm itself could be proved.
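As an illustration of the "pure functional program" idea, here is a hypothetical Python sketch (Gonthier's actual code is Coq, and far more sophisticated). A side-effect-free function decides four-colorability by exhaustive search, so its output depends only on its input, which is what makes such a program a fit subject for formal proof.

```python
from itertools import product

def four_colorable(n, edges):
    """Decide by exhaustive search whether the graph on vertices
    0..n-1 with the given edge list admits a proper 4-coloring.
    Pure function: no state, no I/O; the answer is determined
    entirely by the arguments."""
    for coloring in product(range(4), repeat=n):
        if all(coloring[u] != coloring[v] for u, v in edges):
            return True
    return False

# K4 (4 mutually adjacent vertices) is 4-colorable; K5 is not,
# which is why K5 can never arise from a planar map.
k4 = [(u, v) for u in range(4) for v in range(u + 1, 4)]
k5 = [(u, v) for u in range(5) for v in range(u + 1, 5)]
print(four_colorable(4, k4))  # True
print(four_colorable(5, k5))  # False
```

Brute force like this only works for tiny graphs; the point of the reducibility machinery is to reason about all planar maps via a finite list of configurations rather than enumerating colorings of arbitrary maps.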

3. Formal Checking of the Computations

  • Gonthier’s team created a verified checker in Coq.
  • They encoded all 633 "configurations" (a simplified and optimized version of the original 1,936) as data in Coq.
  • Ran the reduction-checking program inside Coq — meaning the checking is part of the proof, not external code.

4. Automation and Scalability

  • Coq wasn't built for massive computations, so they had to optimize:
    • Used reflection, a Coq technique where computation can aid in proving.
    • Developed SSReflect ("Small Scale Reflection") to scale the proof and make it manageable.
    • SSReflect is now widely used for other large formalizations.
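Proof by reflection can be shown in miniature. The sketch below is Lean 4 rather than Coq/SSReflect (chosen purely for brevity; it is not from the 4CT development): the `decide` tactic asks the kernel to compute the truth of a decidable statement instead of requiring a hand-written derivation.

```lean
-- Miniature proof by reflection: the kernel evaluates the
-- decidable proposition `2 ^ 10 = 1024` and accepts the result
-- of that computation as a proof, with no manual derivation.
example : 2 ^ 10 = 1024 := by decide
```

At vastly larger scale, the same principle is what let the configuration-checking computation run inside the trusted kernel itself rather than in external, unverified code.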

Result


The result was a machine-verified formal proof of the Four Color Theorem, including:

  • The theoretical reductions.
  • The checking of configurations.
  • The correctness of the code.
  • The soundness of the logic.

This removed any need for human trust in unverified code — everything from logic to computation was covered within the formal system.



Significance

  1. Validation of controversial proofs: It showed how even a computer-assisted proof could be brought within the rigorous fold of traditional math — and even strengthened.
  2. Change in philosophy: More mathematicians are now open to formal verification, especially for complex or computation-heavy proofs.
  3. Legacy: Gonthier’s work led to improvements in Coq itself and helped spur similar formalization efforts in algebra, number theory, and beyond.

Summary


Aspect | Original 1976 Proof | Coq Formalization
Proof idea | Verified 1,936 configurations with a program | Same idea, but formalized with verified code and logic
Machine-code trust | Not verified independently | Fully verified within Coq
Reproducibility | Hard to check by hand | Fully machine-checkable
Philosophical acceptance | Controversial | Widely accepted in formal methods



Would you like me to show a simplified example of how Coq encodes graph coloring logic, or explain the SSReflect tactic language used in the project?
 

Jipsah

Blood Drinker
Aug 17, 2005
13,775
4,457
71
Franklin, Tennessee
✟283,715.00
Country
United States
Gender
Male
Faith
Anglican
Marital Status
Married
Politics
US-Others
I really do not understand why I need to define the word intuition to support my claim that AI cannot possess it.
In my made-up-just-now language, "intuition" means "cheeseburger and fries". Ergo an AI can have "intuition", but then it would eat it.
Anyone commenting on this thread knows what intuition means.
Of course. But do any two of them agree?
 

Jipsah

Blood Drinker
Aug 17, 2005
13,775
4,457
71
Franklin, Tennessee
✟283,715.00
Country
United States
Gender
Male
Faith
Anglican
Marital Status
Married
Politics
US-Others
Why single this thread out in order to make that point?

Agreements in this place are rarer than hen's teeth!
True dat. But the fact remains that if we're to discuss whether software can have "intuition" or not, we need to have some consensus about what the word means.
 

SelfSim

A non "-ist"
Jun 23, 2014
7,045
2,232
✟210,136.00
Faith
Humanist
Marital Status
Private
True dat. But the fact remains that if we're to discuss whether software can have "intuition" or not, we need to have some consensus about what the word means.
I think the point about the futility of basing the crux of the matter on an agreeable definition was hammered out about 4 or 5 pages ago in this thread. One can go on dancing around definitions forever.

My own conclusion was that the OP question posed in the title of this thread yielded nothing of practical value .. yet what AI does, meaning how it impacts our own collective perceptions of the universe around us, continues to march onwards.

This impact is of far more importance than us all being nice to each other by simply agreeing on a definition that pleases everyone.
 

partinobodycular

Well-Known Member
Jun 8, 2021
2,626
1,047
partinowherecular
✟136,482.00
Country
United States
Faith
Agnostic
Marital Status
Single
If LLM's don't technically 'know' anything, then in essence doesn't everything that an LLM says or does constitute an act of intuition?

In other words, it doesn't 'know' what the right answer is, it simply consults the whole of human knowledge to get a sense of what we think the answer is, and then regurgitates it back to us. It doesn't actually know what the right answer is, it simply has a sense of what the right answer is... and isn't that intuition?
 

SelfSim

A non "-ist"
Jun 23, 2014
7,045
2,232
✟210,136.00
Faith
Humanist
Marital Status
Private
If LLM's don't technically 'know' anything, then in essence doesn't everything that an LLM says or does constitute an act of intuition?

In other words, it doesn't 'know' what the right answer is, it simply consults the whole of human knowledge to get a sense of what we think the answer is, and then regurgitates it back to us. It doesn't actually know what the right answer is, it simply has a sense of what the right answer is... and isn't that intuition?
In general, what you say there would be easy to agree with, given our understanding of AI's deep learning methods, but I personally think there may be more to it in cases where no human can understand the explanations it gives for the pathway(s) chosen in arriving at its answers. Perhaps that area is where AI's 'intuition' emerges as being distinguishable from our own(?)

Also, in exploring the unknown (and frequently, the known), there is no 'right' (or 'true') answer .. all that would matter then is objective testing of AI's neural-network-based conclusions trialled on a controlled, specific case at hand .. and the application of sound (human) scientific thinking/decision-making(?)
 

Neogaia777

Old Soul
Site Supporter
Oct 10, 2011
24,714
5,556
46
Oregon
✟1,100,117.00
Country
United States
Gender
Male
Faith
Non-Denom
Marital Status
Celibate
Anything an A.I. does/says/thinks or quote/unquote "chooses", is all a calculation, arrived at by a computation, based on current information, etc. Which is the same exact thing that any and all beings always only do/does when it comes to anything, etc.
 

Neogaia777

Old Soul
Site Supporter
Oct 10, 2011
24,714
5,556
46
Oregon
✟1,100,117.00
Country
United States
Gender
Male
Faith
Non-Denom
Marital Status
Celibate
Anything an A.I. does/says/thinks or quote/unquote "chooses", is all a calculation, arrived at by a computation, based on current information, etc. Which is the same exact thing that any and all beings always only do/does when it comes to anything, etc.
Right now, the only thing an A.I. doesn't have is the stream of different sensory inputs that has been constantly affecting/effecting/infecting/polluting those calculations in a being ever since that being's birth.
 

SelfSim

A non "-ist"
Jun 23, 2014
7,045
2,232
✟210,136.00
Faith
Humanist
Marital Status
Private
Right now, the only thing an A.I. doesn't have is the stream of different sensory inputs that has been constantly affecting/effecting/infecting/polluting those calculations in a being ever since that being's birth.
i) AI's deep learning techniques provide it with direct access to our descriptions of what we perceive;
ii) The matter of AI's acquisition of an autonomous sense of vision is understood and is an active area of development (see here).
 