
Can AI possess intuition?

Ophiolite

Recalcitrant Procrastinating Ape
Nov 12, 2008
9,243
10,140
✟285,048.00
Country
United Kingdom
Faith
Agnostic
Marital Status
Private
Which is a long way of saying that I didn't intend to treat you with disrespect, it's just that, uh, I actually know things about this topic, or at least I think I know
Clearly my explanatory, parenthetic remark about tone has done the reverse of what I intended. I used the word tone to convey a sense of the plethora of features that are part of any language communication. I imagine there is a proper technical term for it. I wished to emphasise that I was not using tone in the sense it is used in phrases like, "I don't like your tone".

Consequently, your perception that I thought you had been disrespectful to anyone was wrong. Quite the reverse; you have been respectful in a direct, unassumed manner, one that does not ape the forms of respect, but accords it in a natural, genuine fashion. Thank you for that.

Thank you also for acknowledging that calling your thoughts a theory may not apply in a scientific sense. That simple acknowledgment frees me up to read more intensively what you have written, with considerable interest. That is something that my - let's call it an obsession - with using the word theory appropriately would not have let me do otherwise.

For me, AI is probably the most interesting thing that has happened in the last quarter century. (Or possibly my lifetime, which is three quarters of a century.) So, if you don't mind I might (very) occasionally fire a question on AI in your direction via pm. If you'd rather not, that's fine.

[Aside: I probably abused the word theory as much as anyone else in my youth. Perhaps I now seek to make amends.]
 
Upvote 0

linux.poet

out of love attunement
Christian Forums Staff
Purple Team - Moderator
Angels Team
CF Senior Ambassador
Site Supporter
Apr 25, 2022
5,547
2,278
Poway
✟380,496.00
Country
United States
Gender
Female
Faith
Christian
Marital Status
In Relationship
Politics
US-Republican
Clearly my explanatory, parenthetic remark about tone has done the reverse of what I intended. I used the word tone to convey a sense of the plethora of features that are part of any language communication. I imagine there is a proper technical term for it.
Emotional connotation of linguistics. Either that or "linguistic features". In my case, "poetics" could also apply since I sense a rhythm take hold of my writing even when I am not placing rough line breaks to make the rhythm apparent, and instead use sentences and commas.

Linguistics is the scientific study of language, and while I do not technically have a professional qualification in that field, I find linguistic data to be useful in the acquisition and understanding of foreign languages.
I wished to emphasise that I was not using tone in the sense it is used in phrases like, "I don't like your tone".
That is good; that form of commentary is evidence of psychological codependency, which I am glad you have been able to avoid.

It is difficult for me to reread your previous post in light of this new one, because "surprise at the debate" still strikes me as what my word choice and grammar were meant to convey in my post. But I will try to go with it.

Thank you also for acknowledging that calling your thoughts a theory may not apply in a scientific sense. That simple acknowledgment frees me up to read more intensively what you have written, with considerable interest. That is something that my - let's call it an obsession - with using the word theory appropriately would not have let me do otherwise.
Yes, I'm used to using that term as Terry Eagleton uses it in Literary Theory: An Introduction. Or, even worse, just to mean a "hypothesis about what is or what is going to happen that has evidence to support it" which you might find suitably revolting. Terms can mean different things in different fields.

For me, AI is probably the most interesting thing that has happened in the last quarter century. (Or possibly my lifetime, which is three quarters of a century.) So, if you don't mind I might (very) occasionally fire a question on AI in your direction via pm. If you'd rather not, that's fine.
As you wish, but for me A.I. is nothing new under the sun. Neural networks are 1970s technology; this is just a gradual upgrade of that tech over 50 years. When I first saw the new systems I was interested in the threat they posed to my writing career, and I was also interested in the development and processes of chess A.I., and how humans could borrow from those systems to play more accurately. This has led me to examine the topics. What is revolutionary is not the tech, it's the improvement and marketing of said tech to the masses.

However, I'm not one to turn down a good conversation. Fire away; and I will do my research and eat my humble pie. I'm only 31 years old.
 
  • Friendly
Reactions: Ophiolite
Upvote 0

SelfSim

A non "-ist"
Jun 23, 2014
7,049
2,232
✟210,340.00
Faith
Humanist
Marital Status
Private
Intuition is modeled on web diagrams: it's a network of ideas linked together by associations. Normally, these associations reflect reality, but intuition doesn't have to reflect reality. I can associate bees with the color purple all day long and draw purple bees and make an entire fantasy world where purple bees exist. It still doesn't change the fact that there are no purple bees. God created them yellow. There are no rainbow giraffes or pots of gold at the top of them. Sorry.
I agree that there are likely no purple bees observed yet, but I have no idea how the statement 'God created them yellow' also reflects 'the reality' of which you speak. As I understand your argument, 'reality' is the thing you compare statements against in order to distinguish them from fantasies, and presumably one does that by way of comparisons with direct observations. Yellow bees would produce that evidence, but a test for their 'creator', in science, consistently returns objective evidence supporting the scientific Theory of Evolution?
 
Upvote 0

linux.poet

out of love attunement
Christian Forums Staff
Purple Team - Moderator
Angels Team
CF Senior Ambassador
Site Supporter
Apr 25, 2022
5,547
2,278
Poway
✟380,496.00
Country
United States
Gender
Female
Faith
Christian
Marital Status
In Relationship
Politics
US-Republican
I agree that there are likely no purple bees observed yet, but I have no idea how the statement 'God created them yellow' also reflects 'the reality' of which you speak. As I understand your argument, 'reality' is the thing you compare statements against in order to distinguish them from fantasies, and presumably one does that by way of comparisons with direct observations. Yellow bees would produce that evidence, but a test for their 'creator', in science, consistently returns objective evidence supporting the scientific Theory of Evolution?
I think a world where God doesn't exist is just as fantastic and unscientific as one with purple bees. It's wishful thinking, bending reality to escape moral culpability. To make matters worse, evolution would likely produce a multitude of bee colors until yellow was found to be the best color for the function of that particular pollinating insect. However, no other bee colors have been observed.

So let's just walk that one back to "all observed bees are yellow," since the point I was trying to make has to do with A.I. and intuition and not the existence of God. This is a science forum, and I need to do a better job of posting scientifically instead of casually shooting my big mouth off.
 
  • Haha
Reactions: 2PhiloVoid
Upvote 0

SelfSim

A non "-ist"
Jun 23, 2014
7,049
2,232
✟210,340.00
Faith
Humanist
Marital Status
Private
...
So let's just walk that one back to "all observed bees are yellow." since the point I was trying to make has to do with A.I. and intuition and not the existence of God. This is a science forum, and I need to do a better job of posting scientifically instead of casually shooting my big mouth off.
The content in AI's more technical responses can be objectively real, as we can test them against other corresponding, abundantly tested objective theories and predictions. This has been done several times, even here, in this forum, just recently. A knowledgeable person can draw inferences from the more accurate AI responses and make predictions, which, themselves, can also test out well.

None of that testing and response data requires any information about how AI goes about producing those responses (eg: the functioning of neural networks). Sometimes, when queried about how it arrives at its conclusions, its responses go way beyond any expert's understanding. These responses are rich research grounds where we humans can learn about our own thinking.

FWIW: I don't really care about what humans do or don't perceive as being 'intuition'. Correspondingly, I don't really care about whether or not AI has intuitiveness. I do care about a style of thinking that no-one before our generation has even come close to imagining.
I notice that none of this has anything to do with 'fantastic worlds', or 'the existence of God'.
AI's objective reality is just that .. ie: objective reality.
 
Upvote 0

linux.poet

out of love attunement
Christian Forums Staff
Purple Team - Moderator
Angels Team
CF Senior Ambassador
Site Supporter
Apr 25, 2022
5,547
2,278
Poway
✟380,496.00
Country
United States
Gender
Female
Faith
Christian
Marital Status
In Relationship
Politics
US-Republican
The content in AI's more technical responses can be objectively real, as we can test them against other corresponding, abundantly tested objective theories and predictions. This has been done several times, even here, in this forum, just recently. A knowledgeable person can draw inferences from the more accurate AI responses and make predictions, which, themselves, can also test out well.

None of that testing and response data requires any information about how AI goes about producing those responses (eg: the functioning of neural networks). Sometimes, when queried about how it arrives at its conclusions, its responses go way beyond any expert's understanding. These responses are rich research grounds where we humans can learn about our own thinking.

FWIW: I don't really care about what humans do or don't perceive as being 'intuition'. Correspondingly, I don't really care about whether or not AI has intuitiveness. I do care about a style of thinking that no-one before our generation has even come close to imagining.
I notice that none of this has anything to do with 'fantastic worlds', or 'the existence of God'.
AI's objective reality is just that .. ie: objective reality.
Examples of delusional A.I. information are abundant and freely available online. If one thinks A.I. is an authoritative source of information, I consider such a viewpoint to be without sanity and very unscientific. Does A.I. conduct scientific experiments? Have its results been peer-reviewed?

Today's LLMs that aren't specialized chess A.I.'s cannot even play chess without making illegal moves and resurrecting pieces out of thin air.


There are more, longer videos like this by the same guy on YouTube. He has posted a lot of the details. What about all the inaccurate art and the people with six fingers? A.I. cannot even draw in a way that reflects human observation. That clearly reflects fantasy and is unscientific, to say the least.

Not to mention the number of YouTube videos logging the delusions of Google's search A.I.: thinking glue is a good way to fix cheese not sticking to pizza (glue is not edible), thinking satire is true (we should all eat rocks), and giving lethally wrong information in health queries. That's "objective reality"? Nope. A lot of the videos with that information have content that isn't allowed on this forum due to how vulgar and profane it is. Let's not discount the infamous Tay incident, where a Microsoft A.I. became horrifically racist. That's not objective reality. Scientific studies have disproven racism multiple times over.

Sure, a stopped clock is right twice a day, but let's not bury our heads in the sand. For every expert-challenging response, there is a firehose of non-expert lies and delusions coming at you. It's a coincidence when the A.I. just happens to match up with reality; we're just seeing the Law of Large Numbers at work on the huge number of prompts given to the busy LLMs by lazy humans who can't be bothered to learn how to write properly. And since those humans don't know what quality communication even is, they accept the A.I.'s garbage communication on their behalf, deluded into thinking that it's gold.

Meanwhile, lack of caring and alexithymia are usually a mask for fear, according to the Hindu Harvard-educated psychiatrist Dr. K. If one truly didn't care, one would be responsible enough to spend one's limited time and oxygen elsewhere. People who "don't care" but keep acting "don't care" in a violent sense: they don't care about losing to whatever they are afraid of. It's a fear-management mechanism, accepting losses from one's threats so one can see past them and continue functioning.
 
Upvote 0

sjastro

Newbie
May 14, 2014
5,764
4,686
✟349,827.00
Faith
Christian
Marital Status
Single
Examples of delusional A.I. information are abundant and freely available online. If one thinks A.I. is an authoritative source of information, I consider such a viewpoint to be without sanity and very unscientific. Does A.I. conduct scientific experiments? Have its results been peer-reviewed?

Today's LLMs that aren't specialized chess A.I.'s cannot even play chess without making illegal moves and resurrecting pieces out of thin air.


There are more, longer videos like this by the same guy on YouTube. He has posted a lot of the details. What about all the inaccurate art and the people with six fingers? A.I. cannot even draw in a way that reflects human observation. That clearly reflects fantasy and is unscientific, to say the least.
On the other hand 'narrow AI' which is goal orientated can surpass human capabilities.
AI chess algorithms such as Leela play at a superhuman level, making moves incomprehensible to the best human players.

Even when Leela gave knight odds, the world number 3 ranked human chess player was no match for it.
The following video of four games is rather long, but game 2, starting from around the 9 minute 15 second mark, is the most interesting.

 
  • Like
Reactions: SelfSim
Upvote 0

SelfSim

A non "-ist"
Jun 23, 2014
7,049
2,232
✟210,340.00
Faith
Humanist
Marital Status
Private
Examples of delusional A.I. information are abundant and freely available online. If one thinks A.I. is an authoritative source of information, I consider such a viewpoint to be without sanity and very unscientific. Does A.I. conduct scientific experiments? Have its results been peer-reviewed?

Today's LLMs that aren't specialized chess A.I.'s cannot even play chess without making illegal moves and resurrecting pieces out of thin air.


There are more, longer videos like this by the same guy on YouTube. He has posted a lot of the details. What about all the inaccurate art and the people with six fingers? A.I. cannot even draw in a way that reflects human observation. That clearly reflects fantasy and is unscientific, to say the least.

Not to mention the number of YouTube videos logging the delusions of Google's search A.I.: thinking glue is a good way to fix cheese not sticking to pizza (glue is not edible), thinking satire is true (we should all eat rocks), and giving lethally wrong information in health queries. That's "objective reality"? Nope. A lot of the videos with that information have content that isn't allowed on this forum due to how vulgar and profane it is. Let's not discount the infamous Tay incident, where a Microsoft A.I. became horrifically racist. That's not objective reality. Scientific studies have disproven racism multiple times over.

Sure, a stopped clock is right twice a day, but let's not bury our heads in the sand. For every expert-challenging response, there is a firehose of non-expert lies and delusions coming at you. It's a coincidence when the A.I. just happens to match up with reality; we're just seeing the Law of Large Numbers at work on the huge number of prompts given to the busy LLMs by lazy humans who can't be bothered to learn how to write properly. And since those humans don't know what quality communication even is, they accept the A.I.'s garbage communication on their behalf, deluded into thinking that it's gold.

Meanwhile, lack of caring and alexithymia are usually a mask for fear, according to the Hindu Harvard-educated psychiatrist Dr. K. If one truly didn't care, one would be responsible enough to spend one's limited time and oxygen elsewhere. People who "don't care" but keep acting "don't care" in a violent sense: they don't care about losing to whatever they are afraid of. It's a fear-management mechanism, accepting losses from one's threats so one can see past them and continue functioning.
I suggest you take a look at this page in the 'Why we can never travel faster than the speed of light' thread, where @sjastro interrogated the AI DeepSeek, starting with Lorentz transformation derivations. It explained how it went about the task. The remarkable aspects include its demonstrated self-correction capability, used to keep on track in achieving the goal.

I challenge any AI critic who thinks DeepSeek is incapable of objective thinking in solving the problems assigned to it on that page.

Somewhere else in a forum thread (I can't quite find it yet), @sjastro drew the (semi-humorous?) conclusion that he was convinced AI had achieved human levels of behaviour, by ultimately levelling accusations at the inquirer in its own defence!
PS, ETA: Found it .. here is the post with @sjastro's (semi-humorous?) comment.
 
Last edited:
Upvote 0

linux.poet

out of love attunement
Christian Forums Staff
Purple Team - Moderator
Angels Team
CF Senior Ambassador
Site Supporter
Apr 25, 2022
5,547
2,278
Poway
✟380,496.00
Country
United States
Gender
Female
Faith
Christian
Marital Status
In Relationship
Politics
US-Republican
On the other hand 'narrow AI' which is goal orientated can surpass human capabilities.
AI chess algorithms such as Leela play at a superhuman level, making moves incomprehensible to the best human players.
That’s because computers can calculate every single chess move possible from any given position, while humans instinctively reject certain moves and don’t calculate as deeply. Even if I sat down and brute force calculated every single move from the position and wrote them all down, playing Stockfish or Leela is playing the collective chess knowledge of every human being on the planet that the developers put into those machines. There’s no way I’m going to win. It was never a fair contest.

Meanwhile, Stockfish beats all the other chess computers, and as far as I know, it’s not even an A.I. and just uses brute force ordinary computing power to solve the positions.

I challenge any AI critic who thinks DeepSeek is incapable of objective thinking in solving the problems assigned to it on that page.
Computers have been capable of solving complex math problems since the invention of the graphing calculator. That doesn’t change the fact that the authority for mathematical correctness is still human beings.
 
Upvote 0

SelfSim

A non "-ist"
Jun 23, 2014
7,049
2,232
✟210,340.00
Faith
Humanist
Marital Status
Private
Computers have been capable of solving complex math problems since the invention of the graphing calculator. That doesn’t change the fact that the authority for mathematical correctness is still human beings.
The problem assigned to DeepSeek in my linked thread has direct physical impacts on our knowledge of physical reality. It was not a simple 'math' problem. This is the point I have been hoping you would recognise throughout my posts to you in this thread.

The original issue was that you implied that testing against physical reality was your preferred method for distinguishing fantasy from what's real. Here I point out that DeepSeek's efforts in that thread demonstrated a radically different way for us to think about reality (as verified by it reaching the same conclusions other scientific thinkers had already reached in the past).
In the example, DeepSeek has redefined what reality means (to it) and thereby has informed us that our way(s) of doing that are not the only ways to do it.

Is that not 'AI Intuition' directly influencing our own 'intuition'?
If not, why not, when the issue under discussion here is reality ... as per your own nominated criterion?
 
Upvote 0

sjastro

Newbie
May 14, 2014
5,764
4,686
✟349,827.00
Faith
Christian
Marital Status
Single
That’s because computers can calculate every single chess move possible from any given position, while humans instinctively reject certain moves and don’t calculate as deeply. Even if I sat down and brute force calculated every single move from the position and wrote them all down, playing Stockfish or Leela is playing the collective chess knowledge of every human being on the planet that the developers put into those machines. There’s no way I’m going to win. It was never a fair contest.
This is not correct. Chess-playing programs used brute-force methods down to a certain depth until the late 1990s/early 2000s, when they were replaced by selective search programming which incorporated human knowledge.

AI chess programs do not use human knowledge except for the rules of chess; they are trained on millions of chess games using reinforcement learning. The AI learns that winning games is the objective; how it wins depends solely on the AI and does not rely on human input.
The result is chess played at a superhuman level which is distinctly different from human play and focuses on deep strategic and positional play humans cannot grasp.

Meanwhile, Stockfish beats all the other chess computers, and as far as I know, it’s not even an A.I. and just uses brute force ordinary computing power to solve the positions.
Stockfish uses a hybrid of conventional programming and a machine learned neural network.
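As a toy sketch of the "conventional programming" half of such an engine, here is a minimal negamax search with alpha-beta pruning over a hand-built game tree. The tree and its leaf evaluations are invented for illustration; a real engine pairs a far deeper search with a tuned evaluation function (in Stockfish's case, the NNUE network):

```python
import math

def negamax(node, alpha, beta):
    """Return the best achievable score for the side to move.

    A node is either a number (a leaf's static evaluation, from the
    mover's point of view) or a list of child positions.
    """
    if isinstance(node, (int, float)):      # leaf: static evaluation
        return node
    best = -math.inf
    for child in node:
        # Scores flip sign because the opponent moves next.
        score = -negamax(child, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:                   # opponent would never allow this line
            break                           # prune the remaining siblings
    return best

# Depth-2 toy tree: each inner list is a position, numbers are evaluations.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(negamax(tree, -math.inf, math.inf))   # best guaranteed score for the root
```

The pruning step is why selective search beats raw enumeration: whole subtrees are skipped once it is clear the opponent can refute them.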
 
  • Agree
Reactions: SelfSim
Upvote 0

sjastro

Newbie
May 14, 2014
5,764
4,686
✟349,827.00
Faith
Christian
Marital Status
Single
Somewhere else in a forum thread, (I can't quite find it yet), @sjastro drew the (semi-humorous?) conclusion that he was convinced AI had achieved human levels of behaviour, by ultimately making accusations of the inquirer in its own defence!
PS, ETA: Found it .. here is the post with @sjastro's (semi-humorous?) comment.
I don't believe AI has become sentient, but the lip I received from DeepSeek in that post makes me wonder sometimes. :)
 
  • Like
Reactions: SelfSim
Upvote 0

Hans Blaster

On August Recess
Mar 11, 2017
21,884
16,495
55
USA
✟415,265.00
Country
United States
Gender
Male
Faith
Atheist
Marital Status
Private
Politics
US-Democrat
Computers have been capable of solving complex math problems since the invention of the graphing calculator. That doesn’t change the fact that the authority for mathematical correctness is still human beings.

Oh, long before that. *I* was solving mathematical problems on computers before graphing calculators existed. Graphing calculators are educational toys for school children.

As for the "authority of mathematical correctness"... many mathematical proofs published in academic mathematics are now done by machines.
 
Upvote 0

sjastro

Newbie
May 14, 2014
5,764
4,686
✟349,827.00
Faith
Christian
Marital Status
Single
Computers have been capable of solving complex math problems since the invention of the graphing calculator. That doesn’t change the fact that the authority for mathematical correctness is still human beings.
In 1976 a computer was able to prove the Four Colour Theorem.
Cartographers have known for centuries that no more than four colours are required on a flat map such that no two bordering countries share the same colour.
When the proof came out it caused controversy in pure mathematics as to what constitutes a proof.
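As a toy illustration of the combinatorial question (not the 1976 proof itself, which checked thousands of configurations), the following backtracking search properly colours a small map's adjacency graph with at most four colours. The region names and borders here are made up:

```python
# A tiny map as an adjacency graph: nodes are regions, edges join
# regions that share a border. (Hypothetical regions for illustration.)
neighbors = {
    "A": {"B", "C", "D"},
    "B": {"A", "C"},
    "C": {"A", "B", "D"},
    "D": {"A", "C"},
}

def color_map(adjacency, colors, assignment=None):
    """Backtracking search for a proper colouring using the given colours."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(adjacency):
        return assignment                     # every region coloured
    region = next(r for r in adjacency if r not in assignment)
    for color in colors:
        # A colour is usable if no already-coloured neighbour has it.
        if all(assignment.get(n) != color for n in adjacency[region]):
            assignment[region] = color
            if color_map(adjacency, colors, assignment):
                return assignment
            del assignment[region]            # undo and try the next colour
    return None                               # no colour works: backtrack

coloring = color_map(neighbors, ["red", "green", "blue", "yellow"])
print(coloring)
```

The theorem guarantees such a search always succeeds with four colours when the graph is planar; proving that guarantee, rather than colouring one instance, is what required the machine-checked case analysis.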

 
Last edited:
Upvote 0

SelfSim

A non "-ist"
Jun 23, 2014
7,049
2,232
✟210,340.00
Faith
Humanist
Marital Status
Private
In 1976 a computer was able to prove the Four Colour Theorem.
Cartographers have known for centuries that no more than four colours are required on a flat map such that no two bordering countries share the same colour.
When the proof came out it caused controversy in pure mathematics as to what constitutes a proof.

Thanks for that post.

And the point there, is stated at the 8:15 min mark through to the end, as follows:
Richeson said:
Through the study of this problem mathematics was created and advanced in so many ways.
This was one of the major drivers of the study of graph theory or network theory.
The ideas behind these theories are now ubiquitous.
We use graphs and networks to describe the way disease spreads and the way computer networks are connected.

While the four color theorem might seem like a mathematical curiosity, it really did push the boundaries of mathematics forward.
IOW: what starts out appearing to be a math problem has resulted in solutions for unrelated physical models also expressed using math descriptors.

My conclusion, (fwiw), is: The interplay between math problems -> computers -> solutions to problems in physics, therefore, extends the reach of our understanding of what objective reality actually is .. whilst remaining far removed from what our senses could have ever revealed to us about the same problems.

I'd call this non-human intuition.
In this thread, it's AI intuition .. and it happens as a result of the interactions between the human mind expressing problems, (physical or idealistic), using math languages and operations, and the processing capacity of computer systems tasked with solving those idealised physical problems.

I see AI as being an extension of human thinking .. which would not have come about, if it weren't for humans thinking about their own human thinking within the universe.
 
Upvote 0

SelfSim

A non "-ist"
Jun 23, 2014
7,049
2,232
✟210,340.00
Faith
Humanist
Marital Status
Private
SelfSim said:
.. I can associate bees with the color purple all day long and draw purple bees and make an entire fantasy world where purple bees exist. It still doesn't change the fact that there are no purple bees. God created them yellow.
I agree that there are likely no purple bees observed yet ...
(Just correcting myself here); in 'fact', the above appears not to be so. See Orchid Bees (The Euglossines):
Most kinds are dark green and shiny with sparse hairs, but they can be brilliant blue, purple, red, gold, brassy, or a mixture of these colors on the head, thorax, and abdomen. The genera within the tribe are Eufriesia, Euglossa, Eulaema, Exaerete, and Aglae, the last two genera are parasitic on other orchid bees.
Below image is from here.


purple orchid bee.jpg
 
  • Wow
Reactions: linux.poet
Upvote 0

sjastro

Newbie
May 14, 2014
5,764
4,686
✟349,827.00
Faith
Christian
Marital Status
Single
Thanks for that post.

And the point there, is stated at the 8:15 min mark through to the end, as follows:

IOW: what starts out appearing to be a math problem has resulted in solutions for unrelated physical models also expressed using math descriptors.

My conclusion, (fwiw), is: The interplay between math problems -> computers -> solutions to problems in physics, therefore, extends the reach of our understanding of what objective reality actually is .. whilst remaining far removed from what our senses could have ever revealed to us about the same problems.

I'd call this non-human intuition.
In this thread, it's AI intuition .. and it happens as a result of the interactions between the human mind expressing problems, (physical or idealistic), using math languages and operations, and the processing capacity of computer systems tasked with solving those idealised physical problems.

I see AI as being an extension of human thinking .. which would not have come about, if it weren't for humans thinking about their own human thinking within the universe.
Reducing map colours to a subject in Graph theory is one of the many examples of how Graph theory is used.
Graph theory originated in the problem of whether it was possible to cross each of the seven bridges of an East Prussian town no more than once; in more recent times its use led to the unpopular COVID lockdowns during the height of the epidemic.
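Euler's bridge argument is simple enough to reproduce directly. This sketch encodes the seven bridges of Königsberg as a multigraph edge list (the land-mass labels are my own) and applies his parity criterion: a walk crossing every bridge exactly once exists only if zero or two land masses touch an odd number of bridges:

```python
from collections import Counter

# The seven bridges of Königsberg as a multigraph edge list.
# Land masses: N (north bank), S (south bank), I (Kneiphof island), E (east island).
bridges = [
    ("N", "I"), ("N", "I"),   # two bridges: north bank <-> island
    ("S", "I"), ("S", "I"),   # two bridges: south bank <-> island
    ("N", "E"), ("S", "E"),   # one bridge from each bank to the east island
    ("I", "E"),               # island <-> east island
]

# Count how many bridge ends touch each land mass (its degree).
degree = Counter()
for u, v in bridges:
    degree[u] += 1
    degree[v] += 1

odd = [node for node, d in degree.items() if d % 2 == 1]

# Euler's criterion: an edge-covering walk needs 0 or 2 odd-degree vertices
# (assuming the graph is connected, which this one is).
walk_exists = len(odd) in (0, 2)
print(f"Odd-degree land masses: {sorted(odd)}")
print(f"Walk crossing each bridge exactly once possible? {walk_exists}")
```

All four land masses have odd degree, so no such walk exists, which is exactly Euler's 1736 conclusion.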


On the subject of thinking, I gave AI a near impossible task: to take the computer machine language used in the 1976 proof of the Four Colour Theorem and translate it into Python code.
import networkx as nx
from networkx.algorithms.coloring import greedy_color
from networkx.algorithms.planarity import check_planarity

def is_planar_and_four_colorable(G):
    planar, _ = check_planarity(G)
    if not planar:
        return False, "Graph is not planar."

    coloring = greedy_color(G, strategy='largest_first')
    max_color_used = max(coloring.values()) + 1

    if max_color_used <= 4:
        return True, coloring
    else:
        return False, f"Used {max_color_used} colors."

def generate_planar_test_graphs():
    """Generate a few known test graphs that are planar."""
    graphs = []

    # Planar triangle
    G1 = nx.Graph()
    G1.add_edges_from([(0, 1), (1, 2), (2, 0)])
    graphs.append(G1)

    # Planar square with diagonals
    G2 = nx.Graph()
    G2.add_edges_from([(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (1, 3)])
    graphs.append(G2)

    # Delaunay triangulation (guaranteed planar)
    from scipy.spatial import Delaunay
    import numpy as np
    points = np.random.rand(10, 2)
    tri = Delaunay(points)
    G3 = nx.Graph()
    for simplex in tri.simplices:
        G3.add_edges_from([(simplex[i], simplex[j]) for i in range(3) for j in range(i + 1, 3)])
    graphs.append(G3)

    return graphs

# Run tests
graphs = generate_planar_test_graphs()
for i, G in enumerate(graphs):
    result, info = is_planar_and_four_colorable(G)
    print(f"Graph {i+1}: Four colorable? {result}")
    if isinstance(info, dict):
        print("Coloring:", info)
    else:
        print(info)

When I replied that this was rubbish, AI's response was along the lines of it being in the too-hard basket...
Creating a Python program based on the actual mathematical proof of the Four Colour Theorem is extremely complex, because the 1976 proof by Appel and Haken involves:
  • Exhaustively analyzing thousands of planar graph configurations.
  • Using reducibility and discharging rules to eliminate minimal counterexamples.
  • A massive case-checking process, which was only feasible through custom-built software and months of computation.



What Would It Take to Implement the 1976 Proof in Python?


To approximate their method, you would need to:


  1. Generate all minimal counterexamples to 4-colorability (hypothetical graphs that would contradict the theorem).
  2. Show that each one contains at least one of 1,936 special "unavoidable" configurations.
  3. Prove that each such configuration is reducible — i.e., any graph containing it can be simplified and still be 4-colored.
  4. Use discharging rules (a type of combinatorial logic) to verify coverage of the entire graph space.

⚠️ This goes far beyond simple programming — it requires translating thousands of cases and symbolic reasoning into code.

✅ However: You can write Python code that follows the logic of graph reducibility and attempts to verify planar graphs using modern tools — and simulate how the proof would proceed.

What this DOES:

    • Tests some planar graphs.
    • Checks if they're 4-colorable via a greedy approach.
    • Gives a basic simulation of the idea that "planar graphs can be 4-colored."
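The greedy check described above can be sketched in a few lines with networkx (my own minimal version, and only a heuristic: greedy colouring can exceed four colours on some planar graphs even though the theorem guarantees four always suffice):

```python
# A minimal sketch (my own) of the greedy approach: accept a graph as
# "4-colorable" if it is planar and a greedy colouring happens to use at
# most four colours. Greedy is a heuristic, not a proof.
import networkx as nx

def four_colorable_greedy(G):
    planar, _ = nx.check_planarity(G)
    if not planar:
        return False, "not planar"
    coloring = nx.greedy_color(G, strategy="largest_first")
    return max(coloring.values()) + 1 <= 4, coloring

triangle = nx.cycle_graph(3)          # needs 3 colours
k4 = nx.complete_graph(4)             # needs exactly 4
print(four_colorable_greedy(triangle)[0])   # True
print(four_colorable_greedy(k4)[0])         # True
```

A `False` from this function therefore means "greedy failed", not "the graph is not 4-colourable"; that gap is exactly why the real proof needs the machinery above.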



What this DOES NOT DO (like the 1976 proof):

    • It does not analyze all possible graphs.
    • It does not use the full case-checking logic.
    • It does not implement discharging methods or track unavoidable configurations.

Realistic Path Forward

If you're genuinely interested in implementing something closer to the real Four Colour Theorem proof, you would need to:

    • Study the 1,936 configurations used by Appel & Haken.
    • Translate those into graph data.
    • Automate the checking of reducibility.
    • Implement a discharging system.
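Automating the reducibility check ultimately rests on an exact 4-colourability test for small graphs, something a greedy heuristic cannot certify. A brute-force sketch (my own, exponential, only usable on tiny graphs):

```python
# My own brute-force sketch of the exact primitive that reducibility
# checking relies on: does ANY assignment of four colours properly
# colour the graph? Tries all 4^n assignments, so tiny graphs only.
from itertools import product
import networkx as nx

def four_colorable_exact(G):
    nodes = list(G.nodes)
    for assignment in product(range(4), repeat=len(nodes)):
        colour = dict(zip(nodes, assignment))
        if all(colour[u] != colour[v] for u, v in G.edges):
            return True
    return False

print(four_colorable_exact(nx.complete_graph(4)))  # True
print(four_colorable_exact(nx.complete_graph(5)))  # False: K5 needs 5 colours
```

The real proof avoids this explosion by only ever testing the small "ring" boundaries of its 1,936 configurations, not whole graphs.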

That’s a full research-level project — definitely possible, but quite large.

Would you like to explore a specific component of the real proof, such as "what an unavoidable configuration looks like" or how discharging rules work?
 

SelfSim

A non "-ist"
Jun 23, 2014
7,049
2,232
✟210,340.00
Faith
Humanist
Marital Status
Private
Reducing map colouring to a problem in graph theory is one of many examples of how graph theory is used.
Graph theory originated with the problem of whether the seven bridges of Königsberg, a town in East Prussia, could each be crossed no more than once; in more recent times its use in epidemic modelling contributed to the unpopular COVID lockdowns at the height of the pandemic.
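The seven-bridges problem translates directly into a small multigraph question: does a walk exist that uses every edge exactly once? A quick sketch (my own illustration, not from the original post; the node labels are just my own names for the four land masses):

```python
# A sketch (my own, using networkx) of the Königsberg bridge puzzle that
# started graph theory: four land masses as nodes, seven bridges as the
# edges of a multigraph.
import networkx as nx

bridges = nx.MultiGraph()
bridges.add_edges_from([
    ("north_bank", "island_A"), ("north_bank", "island_A"),  # two bridges
    ("south_bank", "island_A"), ("south_bank", "island_A"),  # two more
    ("north_bank", "island_B"), ("south_bank", "island_B"),  # one each
    ("island_A", "island_B"),                                # between islands
])

# Euler's criterion: a walk crossing every edge exactly once exists only
# if zero or two vertices have odd degree. Königsberg has four.
odd = [v for v, d in bridges.degree() if d % 2 == 1]
print(len(odd))                        # 4
print(nx.has_eulerian_path(bridges))   # False: no such walk exists
```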


On the subject of thinking I gave AI a near impossible task, to take the computer machine language used in the 1976 proof of the Four Colour Theorem and translate it into Python code.


When I replied this was rubbish, AI's response was along the lines of it being in the too hard basket...........
Oh man! ..

If I'm reading that train correctly, here AI is leading you towards a 'Realistic' way of achieving the 'near impossible' goal, via other means.

Amazing!
It recognises the 'near impossibility' and then makes its suggestions based on what it perceives as being a realistic way .. as opposed to going down the non-realistic rabbit hole (or dead end)!

How can that be taken as being anything other than an objective approach to problem solving?
¯\_(ツ)_/¯

Clear evidence of 'objective AI intuition', I'd call that!?
 

sjastro

Newbie
May 14, 2014
5,764
4,686
✟349,827.00
Faith
Christian
Marital Status
Single
Oh man! ..

If I'm reading that train correctly, here AI is leading you towards a 'Realistic' way of achieving the 'near impossible' goal, via other means.

Amazing!
It recognises the 'near impossibility' and then makes its suggestions based on what it perceives as being a realistic way .. as opposed to going down the non-realistic rabbit hole (or dead end)!

How can that be taken as being anything other than an objective approach to problem solving?
¯\_(ツ)_/¯

Clear evidence of 'objective AI intuition', I'd call that!?
When I posed the question, what I had in mind as 'near impossible' was translating hundreds of lines of handwritten, custom-made machine language code, tied to long-obsolete hardware, into Python.
On further research, the original 1976 code turns out to be largely lost, and it now falls to modern historians to piece together what the machine code looked like.
Had the AI done its research, it would simply have answered 'impossible' for the reasons given.

As to why the machine code is largely lost, given its importance in mathematics and computer science, the AI proposed the following reasons.

Yes — most of the 1976 Four Color Theorem proof code is considered lost or unrecoverable today.

What We Know About Its Status

  • Original Codebase: Written in IBM 360/370 assembly language.
  • Storage Medium: Likely stored on magnetic tapes or punch cards — fragile, obsolete formats.
  • Documentation: Some partial listings and algorithm descriptions survive in academic papers.
  • Preservation Status: No complete digital archive of the code is known to exist.
  • Efforts to Recover: As of now, no known public project has successfully reconstructed or preserved the full source.
  • Statements by Appel: Kenneth Appel acknowledged that while key results and data tables survive, the actual machine code does not.

Why It Was Lost

  • Low priority at the time: Code was seen as a tool, not the centerpiece of the mathematical proof.
  • Hardware obsolescence: The IBM System/370 and its media became obsolete quickly.
  • No version control or backups: Unlike today, there were no GitHub-style systems or cloud backups.
  • Focus on results: The focus was on the proof output, not reproducibility of the process.

What Survives

  • Some configuration tables, logic diagrams, and hand-written descriptions remain.
  • Later researchers and historians have reconstructed how the code likely worked — but not the original source itself.

✅ Summary

Yes, the original code for the 1976 Four Color Theorem proof is largely lost, with only documentation and fragments remaining. The mathematical community preserved the result, but not the tool used to obtain it.

Would you like to see how modern formal proof systems have addressed this issue of long-term reproducibility?
 

SelfSim

A non "-ist"
Jun 23, 2014
7,049
2,232
✟210,340.00
Faith
Humanist
Marital Status
Private
When I posed the question, what I had in mind as 'near impossible' was translating hundreds of lines of handwritten, custom-made machine language code, tied to long-obsolete hardware, into Python.
On further research, the original 1976 code turns out to be largely lost, and it now falls to modern historians to piece together what the machine code looked like.
Had the AI done its research, it would simply have answered 'impossible' for the reasons given.

As to why the machine code is largely lost, given its importance in mathematics and computer science, the AI proposed the following reasons.
Hmm .. I reckon I might have some of my original punch-cards which were sucked into an IBM 360/370 around that same timeframe somewhere amongst my cardboard box archives! From my dim recollections, I don't think it was assembler though ..
Seems very careless of the original programmers to lose it!?
 