
Using AI to further debunk the claim that ancient Egyptians used technologies to drill granite far beyond the current level.

Hans Blaster

Raised by bees
Mar 11, 2017
21,954
16,542
55
USA
✟416,531.00
Country
United States
Gender
Male
Faith
Atheist
Marital Status
Private
Politics
US-Democrat
In a nutshell GPT-4o failed to understand the nature of my Python code.

Shocking.
(1) It recognized that in line 7 the individual values of z = 1 unit represent the pitch of the spiral, and that line 8 converts the pitch values into depth (or height), which it understood.
What threw it was line 9, where I rescaled zz by multiplying by -0.1, not only to get the spiral turning in the opposite direction but also to make the graphical representation of the pitch of the spiral equal 1 unit.
I guess you could have directly filled zz with "-0.1"...
The matplotlib package imported in line 4 automatically scales the axes and represents the pitch in the graph as 10 units, which is seen in GPT-4o's correction to my code.
GPT-4o saw z and zz in my code as representing two different pitches and therefore came to the conclusion that the pitch was not constant.
Silly AI, you only plotted [x,y,zz].
(2) GPT-4o thought I had made an error in line 6, which gives the spiral its non-cylindrical shape; this was deliberate, but GPT-4o 'corrected' it anyway.
Silly AI, it still isn't cylindrical. It's still made of line segments, just a lot more of them. (Apparently too many to notice in the outputted graphic.)
Perhaps I should have placed comments in my code to explain all of this, which was another of GPT-4o's criticisms, but it being AI and the world's greatest programmer, I thought this would not have been necessary. :)
# Coarse segmented spiral to visually fool AI.
 

SelfSim

A non "-ist"
Jun 23, 2014
7,049
2,232
✟217,840.00
Faith
Humanist
Marital Status
Private
In a nutshell GPT-4o failed to understand the nature of my Python code.

(1) It recognized that in line 7 the individual values of z = 1 unit represent the pitch of the spiral, and that line 8 converts the pitch values into depth (or height), which it understood.
What threw it was line 9, where I rescaled zz by multiplying by -0.1, not only to get the spiral turning in the opposite direction but also to make the graphical representation of the pitch of the spiral equal 1 unit.
The matplotlib package imported in line 4 automatically scales the axes and represents the pitch in the graph as 10 units, which is seen in GPT-4o's correction to my code.
GPT-4o saw z and zz in my code as representing two different pitches and therefore came to the conclusion that the pitch was not constant.

(2) GPT-4o thought I had made an error in line 6, which gives the spiral its non-cylindrical shape; this was deliberate, but GPT-4o 'corrected' it anyway.

Perhaps I should have placed comments in my code to explain all of this, which was another of GPT-4o's criticisms, but it being AI and the world's greatest programmer, I thought this would not have been necessary. :)
Thanks for the explanation there. (Python is generations distant from the native languages I grew up with).
I'm amazed that AI could associate (and 'correct') individual lines of code with the main query topic though .. very impressive.

I guess if I were pressed, I'd say the purpose of the exercise was to expose its depth of understanding of the objective definitions of 'pitch' and thence 'spiral', and to track how that understanding impacts its conclusions in an objective test scenario(?)

If so, then it's been a very cool test. I'm learning a lot about AI from this.
(My somewhat annoying Star Trek analogies also hopefully guide informed opinions, where those opinions now have demonstrable test evidence as you've developed herein).
 

sjastro

Newbie
May 14, 2014
5,777
4,700
✟350,583.00
Faith
Christian
Marital Status
Single
Shocking.

I guess you could have directly filled zz with "-0.1"...
This would produce the following plot.

Figure_4.png
Silly AI, you only plotted [x,y,zz].
If GPT-4o had replaced its command zz = z_step * np.arange(1, n_points + 1) with zz = -0.1*z_step * np.arange(1, n_points + 1), its graph would have been correctly scaled, with a clockwise-turning helix.
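For reference, a minimal sketch of a helix built the same way (this is not the original code; the radius, number of turns and z_step value are assumptions, chosen so that each turn advances 1 unit in depth after the -0.1 rescale):

import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # registers the 3D projection on older matplotlib

n_points = 1000      # assumed number of sample points
n_turns = 10         # assumed number of turns
z_step = 0.1         # assumed raw depth increment per point

theta = np.linspace(0.0, 2.0 * np.pi * n_turns, n_points)
x = np.cos(theta)
y = np.sin(theta)
# The corrected command: the -0.1 factor flips the helix's handedness (its turning
# direction) and rescales the plotted depth so each turn advances 1 unit.
zz = -0.1 * z_step * np.arange(1, n_points + 1)

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot(x, y, zz)    # only [x, y, zz] is plotted
plt.show()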
Silly AI, it still isn't cylindrical. It's still made of line segments, just a lot more of them. (Apparently too many to notice in the outputted graphic.)

# Coarse segmented spiral to visually fool AI.
Recall that GPT-4o's graph was originally meant to produce a 3D representation of the pitch of Petrie's No. 7 sample based on the information gathered from the image.
It modelled the core insert as a cylindrical shape, which brings up another point: GPT-4o found from the image that the Petrie sample is tapered.

Tapered.png

This is further evidence that the Petrie sample was produced without the use of high-technology diamond-tipped drills, as the tapered shape can be explained by copper tool wear caused by the corundum abrasive and the granite during the drilling process.

 

sjastro

Newbie
May 14, 2014
5,777
4,700
✟350,583.00
Faith
Christian
Marital Status
Single
Thanks for the explanation there. (Python is generations distant from the native languages I grew up with).
I'm amazed that AI could associate (and 'correct') individual lines of code with the main query topic though .. very impressive.
I gave GPT-4o the exercise of translating a BASIC program of mine for the Mandelbrot set into Python.

translation.png

What's remarkable is that GPT-4o knows very little about BASIC but recognized the programming structure, allowing it to translate the BASIC into Python.

translation_sets.png
I guess if I were pressed, I'd say the purpose of the exercise was to expose its depth of understanding of the objective definitions of 'pitch' and thence 'spiral', and to track how that understanding impacts its conclusions in an objective test scenario(?)

If so, then it's been a very cool test. I'm learning a lot about AI from this.
(My somewhat annoying Star Trek analogies also hopefully guide informed opinions, where those opinions now have demonstrable test evidence as you've developed herein).
For me the exercise showed that AI can be influenced by bias: it analysed my Python code as 'flawed' and concluded that the issue of pitch was a problem with my coding.
 

SelfSim

A non "-ist"
Jun 23, 2014
7,049
2,232
✟217,840.00
Faith
Humanist
Marital Status
Private
I gave GPT-4o the exercise of translating a BASIC program of mine for the Mandelbrot set into Python.

What's remarkable is that GPT-4o knows very little about BASIC but recognized the programming structure, allowing it to translate the BASIC into Python.
How quickly does it come up with these revised programs etc?
Is it like real time during the conversations with you, or is it a kind of batch type processing delay while it thinks about it?

For me the exercise showed that AI can be influenced by bias: it analysed my Python code as 'flawed' and concluded that the issue of pitch was a problem with my coding.
.. but it was wrong .. (because of the assumption that you were striving for a proper pitch)!
Fascinating!
 

sjastro

Newbie
May 14, 2014
5,777
4,700
✟350,583.00
Faith
Christian
Marital Status
Single
How quickly does it come up with these revised programs etc?
Is it like real time during the conversations with you, or is it a kind of batch type processing delay while it thinks about it?

.. but it was wrong .. (because of the assumption that you were striving for a proper pitch)!
Fascinating!
All I did was cut and paste my BASIC program into GPT-4o's dialog box, and it translated it into a Python version in a few seconds.
It kept to the logical structure of the BASIC program, but when I gave GPT-4o free rein to code the Mandelbrot set it came up with vastly superior Python code which shows the set in much greater detail.

Mand_Python.png

Mand_GPT_o4_plot.png
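For anyone curious, a minimal sketch of the kind of vectorized NumPy approach that gives this extra detail (this is not GPT-4o's actual code; the grid bounds, resolution and colour map are assumptions, and the 256-iteration limit is the one GPT-4o reportedly used):

import numpy as np
import matplotlib.pyplot as plt

n, max_iter = 800, 256                      # assumed grid size; 256-iteration limit
re = np.linspace(-2.5, 1.0, n)              # assumed view of the complex plane
im = np.linspace(-1.25, 1.25, n)
c = re[np.newaxis, :] + 1j * im[:, np.newaxis]

z = np.zeros_like(c)
counts = np.zeros(c.shape, dtype=int)       # iterations survived by each pixel
for _ in range(max_iter):
    mask = np.abs(z) <= 2                   # only advance points still bounded
    z[mask] = z[mask] ** 2 + c[mask]
    counts[mask] += 1

plt.imshow(counts, extent=(-2.5, 1.0, -1.25, 1.25), origin='lower', cmap='magma')
plt.show()

Working on the whole grid of c values at once, rather than pixel by pixel in nested loops, is where most of the speed and extra detail comes from.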
 

stevevw

inquisitive
Nov 4, 2013
16,023
1,746
Brisbane Qld Australia
✟321,764.00
Gender
Male
Faith
Christian
Marital Status
Private
The amazing thing about the Mandelbrot set is that it's a mathematical set and yet it reflects patterns we see in nature, such as fractals and the Golden ratio, even down to the micro world ad infinitum. Interestingly, we see similar patterns in these precision works, like the vases displaying the Golden ratio and Phi.
 

SelfSim

A non "-ist"
Jun 23, 2014
7,049
2,232
✟217,840.00
Faith
Humanist
Marital Status
Private
All I did was cut and paste my BASIC program into GPT-4o's dialog box, and it translated it into a Python version in a few seconds.
It kept to the logical structure of the BASIC program, but when I gave GPT-4o free rein to code the Mandelbrot set it came up with vastly superior Python code which shows the set in much greater detail.

Hmm .. interesting.

The principle of improving the resolution in a fractal image algorithm is fundamentally different from altering measured data in order to draw inferences on physical causality, eh(?)

Reading around the web, there are papers probing the fundamental gap between identifying 'signatures' of a given physical process and the more challenging task of understanding their possible causes (aka: mechanistic or human-introduced ones).

Posing questions, (hypotheses), such as what happens if we do X to a drilling/boring system (like wobbling the boring tool, or splitting the tip of it), are interventions based on a causal understanding of boring holes (eg: in granite). That is, in the sense of: 'that if X is executed, then the relevant process becomes modified and thence, so too, do the signatures .. and how?'.

I guess in this thread you're testing how GPT bridges this gap, and comparing it with the different understandings of that gap that Petrie and Dunn had.

What a productively useful exercise! Great thread!
 

sjastro

Newbie
May 14, 2014
5,777
4,700
✟350,583.00
Faith
Christian
Marital Status
Single
The amazing thing about the Mandelbrot set is that it's a mathematical set and yet it reflects patterns we see in nature, such as fractals and the Golden ratio, even down to the micro world ad infinitum. Interestingly, we see similar patterns in these precision works, like the vases displaying the Golden ratio and Phi.
There is nothing unique about the Mandelbrot set except that it is the simplest of the iterative functions that turn out to be fractals.
I randomly picked two functions for GPT-4o to plot and analyse to see whether they were fractals.

(1) z → z⁷ + c is a fractal.

Figure_6.png

(2) z → z⁷ + sin(z) + c is not a fractal.

Figure_8.png
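For anyone wanting to try this kind of test, a sketch of the same escape-time approach with the update rule passed in as a function, so different maps can be swapped in (this is not the code GPT-4o produced; the bounds, resolution and iteration limit are assumptions):

import numpy as np
import matplotlib.pyplot as plt

def escape_counts(step, lim=1.5, n=600, max_iter=100):
    # Iterate z -> step(z, c) from z = 0 over a grid of c values and count how
    # many iterations each point survives before |z| exceeds 2.
    re = np.linspace(-lim, lim, n)
    im = np.linspace(-lim, lim, n)
    c = re[np.newaxis, :] + 1j * im[:, np.newaxis]
    z = np.zeros_like(c)
    counts = np.zeros(c.shape, dtype=int)
    for _ in range(max_iter):
        mask = np.abs(z) <= 2
        z[mask] = step(z[mask], c[mask])
        counts[mask] += 1
    return counts

# The map z -> z**7 + c from example (1); other maps can be tried the same way.
plt.imshow(escape_counts(lambda z, c: z ** 7 + c),
           extent=(-1.5, 1.5, -1.5, 1.5), origin='lower', cmap='magma')
plt.show()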
 

sjastro

Newbie
May 14, 2014
5,777
4,700
✟350,583.00
Faith
Christian
Marital Status
Single
Hmm .. interesting.

The principle of improving the resolution in a fractal image algorithm is fundamentally different from altering measured data in order to draw inferences on physical causality, eh(?)
These are two totally different concepts.

Here is how a fractal image algorithm works.
There are functions of complex variables of the form f(z).
One particular function is f(z) = z² + c, where z and c are complex numbers.
Let's suppose we input z = 0 and let c vary; then the output is f(0) = c.
f(0) now becomes our new variable, which is input back into the function, which now takes the form f(f(0)).
This is known as an iteration, and it can be repeated by making f(f(0)) the new input, so the function takes the form f(f(f(0))).
The process can be repeated for any number of iterations.
At each iteration the modulus |f(0)|, |f(f(0))|, |f(f(f(0)))|, … can be calculated.

The condition for the Mandelbrot set is that, if we perform a large number of iterations, the modulus of each term |f(0)|, |f(f(0))|, |f(f(f(0)))|, … always remains less than or equal to 2, which depends on c.
If the condition is met, then c is an element of the Mandelbrot set.

In my BASIC program I set the limit to 500 iterations, while GPT-4o set it at 256 in Python.
Generally, the larger the number of iterations the better the resolution, although GPT-4o used fewer iterations and got much more detail, as it is a far better programmer than I am, which is not saying much. :)
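As a minimal sketch of the membership test just described (using the 500-iteration limit mentioned above; the sample values of c are only examples):

def in_mandelbrot(c, max_iter=500):
    # Iterate z -> z*z + c from z = 0 and report whether |z| stays within 2
    # for every iteration tested (the membership condition described above).
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False          # escaped, so c is not in the set
    return True                   # stayed bounded, so c is taken to be in the set

print(in_mandelbrot(0))           # True:  c = 0 never escapes
print(in_mandelbrot(1))           # False: c = 1 escapes after a few iterations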
Reading around the web, there are papers probing the fundamental gap between identifying 'signatures' of a given physical process and the more challenging task of understanding their possible causes (aka: mechanistic or human-introduced ones).

Posing questions, (hypotheses), such as what happens if we do X to a drilling/boring system (like wobbling the boring tool, or splitting the tip of it), are interventions based on a causal understanding of boring holes (eg: in granite). That is, in the sense of: 'that if X is executed, then the relevant process becomes modified and thence, so too, do the signatures .. and how?'.

I guess in this thread you're testing how GPT bridges this gap, and comparing it with the different understandings of that gap that Petrie and Dunn had.

What a productively useful exercise! Great thread!
What this thread has revealed is that AI has filled in the gaps in the information on Petrie's No. 7 sample.
We now know the pitch is highly variable and the core is tapered, not cylindrical.

This leads to two hypotheses:

(1) The core was drilled out using modern-day equipment such as diamond-tipped drills.
(2) The core was drilled out using a copper tube and loose abrasives in the form of a slurry using manual labour to supply the RPMs.

Point (1) is dismissed, as modern-day equipment produces cylindrical cores with very little variation in the pitch.
Point (2) explains the observations made on Petrie's sample, where pitch variation is caused by wobble, changes in RPM and the use of an abrasive slurry, while the tapered shape is caused by copper tool wear.

Most importantly, it refutes Dunn's idea of some super technologically advanced equipment the Egyptians used for drilling granite, as his hypothesis depends on the pitch being essentially constant.

Essentially AI simply verified point (2), but it was in the investigation of the scanning of 'pre-dynastic vases' that it made some interesting comments.
Firstly, it highlighted the pitfalls in scanning vases which have light and dark coloured regions; secondly, and more importantly, it queried how a scanned vase which is supposedly highly symmetrical has an unequal number of scanned points for the left- and right-hand lug handles.

This could support a comment you made that the vase was assumed to be highly symmetrical and the scanning was done to confirm that result, with scanned points which did not conform being omitted.
 
  • Like
Reactions: SelfSim

Hans Blaster

Raised by bees
Mar 11, 2017
21,954
16,542
55
USA
✟416,531.00
Country
United States
Gender
Male
Faith
Atheist
Marital Status
Private
Politics
US-Democrat
I gave GPT-4o the exercise of translating a BASIC program of mine for the Mandelbrot set into Python.

What's remarkable is that GPT-4o knows very little about BASIC but recognized the programming structure, allowing it to translate the BASIC into Python.


For me the exercise showed that AI can be influenced by bias: it analysed my Python code as 'flawed' and concluded that the issue of pitch was a problem with my coding.
The Mandelbrot set might be about the last thing I ever programmed in BASIC. (I use an older and more powerful language these days...)

As for fractals, it is mostly those that are fractally wrong.
 

stevevw

inquisitive
Nov 4, 2013
16,023
1,746
Brisbane Qld Australia
✟321,764.00
Gender
Male
Faith
Christian
Marital Status
Private
There is nothing unique about the Mandelbrot set except that it is the simplest of the iterative functions that turn out to be fractals.
I randomly picked two functions for GPT-4o to plot and analyse to see whether they were fractals.

(1) z → z⁷ + c is a fractal.


(2) z → z⁷ + sin(z) + c is not a fractal.

Except number 2 will not be within the Mandelbrot set.

You don't think it's unique in that we can find, ad infinitum, trees, coastlines, river systems, galaxies, flowers and snowflakes within the Mandelbrot set? Not to mention that we find the same patterns in 5,000-year-old vases, well before the Mandelbrot set was discovered.

I think that's pretty unique. I don't think you will find these in number 2 or in any set except the Mandelbrot set or its derivatives such as the Golden ratio and Phi.
 

sjastro

Newbie
May 14, 2014
5,777
4,700
✟350,583.00
Faith
Christian
Marital Status
Single
The Mandelbrot set might be about the last thing I ever programmed in BASIC. (I use and older and more powerful language these days...)

As for fractals it is mostly those that are fractally wrong.
I only programmed the Mandelbrot set in BASIC because:

(1) It was an intellectual exercise.
(2) The version of BASIC was downloadable and free.
 

sjastro

Newbie
May 14, 2014
5,777
4,700
✟350,583.00
Faith
Christian
Marital Status
Single
Except number 2 will not be within the Mandelbrot set.

You don't think it's unique in that we can find, ad infinitum, trees, coastlines, river systems, galaxies, flowers and snowflakes within the Mandelbrot set? Not to mention that we find the same patterns in 5,000-year-old vases, well before the Mandelbrot set was discovered.

I think that's pretty unique. I don't think you will find these in number 2 or in any set except the Mandelbrot set or its derivatives such as the Golden ratio and Phi.
Your post is riddled with errors.

(1) Example (1) is not within the Mandelbrot set either, as it is defined by a different iterative function, z → z⁷ + c, even though it too is a fractal.
By definition, the Mandelbrot set is generated by the iterative function z → z² + c, where z and c are complex numbers, and c is an element of the set if |z|, or mod(z), never becomes larger than a certain number (usually 2 in computer programs) no matter how many iterations are done.

(2) Natural fractals such as trees, coastlines, river systems, galaxies (?), flowers and snowflakes do not fall within the Mandelbrot set, nor within any type of mathematical fractal.
Mathematical fractals have perfect self-similarity at any scale, have infinite resolution and precision, and are 100% deterministic.

Natural fractals are affected by chaos caused by external factors such as gravity, temperature, pressure, wind etc., which introduces randomness and breaks down self-similarity at both large and small scales.

(3) The Golden ratio, or Phi, is not a derivative of the Mandelbrot set.
The Mandelbrot set belongs to the field of complex dynamics and fractals; the Golden ratio, or Phi, belongs to number theory and geometry.
 
  • Agree
Reactions: SelfSim

Hans Blaster

Raised by bees
Mar 11, 2017
21,954
16,542
55
USA
✟416,531.00
Country
United States
Gender
Male
Faith
Atheist
Marital Status
Private
Politics
US-Democrat
I only programmed the Mandelbrot set in BASIC because:

(1) It was an intellectual exercise.
(2) The version of BASIC was downloadable and free.
I was just reminiscing about the old days before I discovered the one true computer language.
 
  • Like
Reactions: SelfSim

SelfSim

A non "-ist"
Jun 23, 2014
7,049
2,232
✟217,840.00
Faith
Humanist
Marital Status
Private
.. Natural fractals are affected by chaos caused by external factors such as gravity, temperature, pressure, wind etc., which introduces randomness and breaks down self-similarity at both large and small scales.
In the case of biological instances exhibiting an apparent fractal pattern to the naked eye (eg: a fern leaf), the cause is replication, transcription and translation of DNA molecules. So it's not 'maths embedded in the fern leaf', as argued by people who don't understand biology (aka: numerologists).
 
  • Agree
Reactions: sjastro