Hmm .. interesting.
The principle of improving the resolution in a fractal image algorithm is fundamentally different from altering measured data in order to draw inferences on physical causality, eh(?)
These are two totally different concepts.
Here is how a fractal image algorithm works.
There are functions of complex variables of the form f(z).
One particular function is f(z) = z² +c where z and c are complex numbers.
Let's suppose we input z = 0 and let c vary; then the output is f(0) = c.
f(0) now becomes our new input, which is fed back into the function, which now takes the form f(f(0)).
This is known as an iteration, which can be repeated by now making f(f(0)) the new input, so the function takes the form f(f(f(0))).
The process can be repeated to any number of iterations.
At each iteration the modulus |f(0)|, |f(f(0))|, |f(f(f(0)))|, … can be calculated.
The condition for membership of the Mandelbrot set is that, over a large number of iterations, the modulus of every term |f(0)|, |f(f(0))|, |f(f(f(0)))|, … remains less than or equal to 2; whether this holds depends on c.
If the condition is met, then c is an element of the Mandelbrot set.
In my BASIC program I set the limit to 500 iterations, while GPT-4o set it to 256 in Python.
Generally, the larger the number of iterations, the better the resolution, although GPT-4o produced much more detail with fewer iterations, as it is a far better programmer than I am (which is not saying much).
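The membership test described above can be sketched in Python. This is a minimal illustration with a hypothetical `in_mandelbrot` function, not GPT-4o's actual program; the default 500-iteration cap just echoes the BASIC limit mentioned:

```python
def in_mandelbrot(c: complex, max_iter: int = 500, bound: float = 2.0) -> bool:
    """Return True if c appears to lie in the Mandelbrot set.

    Iterates f(z) = z**2 + c starting from z = 0 and checks that the
    modulus |z| never exceeds 2 within max_iter iterations.
    """
    z = 0
    for _ in range(max_iter):
        z = z * z + c          # one iteration: f(z) = z^2 + c
        if abs(z) > bound:     # modulus escaped past 2: c is not in the set
            return False
    return True                # stayed bounded for every iteration

# c = 0 stays at 0 forever, so it is in the set;
# c = 1 escapes quickly (0, 1, 2, 5, ...), so it is not.
print(in_mandelbrot(0))   # True
print(in_mandelbrot(1))   # False
```

An image is then made by running this test over a grid of c values in the complex plane and colouring each pixel by whether (or how fast) the iteration escapes; raising `max_iter` sharpens the boundary at the cost of more computation.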
Reading around the web, I see there are papers probing the fundamental gap between identifying 'signatures' of a given physical process, and the more challenging task of understanding their possible causes (whether mechanistic or human-introduced).
Posing questions (hypotheses) such as "what happens if we do X to a drilling/boring system" (like wobbling the boring tool, or splitting its tip) amounts to interventions based on a causal understanding of boring holes (e.g. in granite). That is, in the sense of: 'if X is executed, then the relevant process becomes modified and hence, so too, do the signatures .. and how?'.
I guess in this thread you're testing how GPT bridges this gap, and comparing it with the different understandings of that gap that Petrie and Dunn had.
What a productively useful exercise! Great thread!
What this thread has revealed is that AI has filled in gaps in the information on Petrie's No. 7 sample.
We now know the pitch is highly variable and the core is tapered, not cylindrical.
This leads to two hypotheses:
(1) The core was drilled out using modern day equipment such as diamond tipped drills.
(2) The core was drilled out using a copper tube and loose abrasives in the form of a slurry using manual labour to supply the RPMs.
Point (1) is dismissed, as modern-day equipment produces cylindrical cores with very little variation in the pitch.
Point (2) explains the observations made on Petrie's sample: the pitch variation is caused by wobble, changes in RPM and the use of an abrasive slurry, while the tapered shape is caused by wear of the copper tool.
Most importantly, it refutes Dunn's idea that the Egyptians used some super technologically advanced equipment for drilling granite, as his hypothesis depends on the pitch being essentially constant.
Essentially, the AI simply verified point (2), but it was in the investigation of the scanning of 'pre-dynastic vases' that it made some interesting comments.
Firstly, it highlighted the pitfalls of scanning vases that have light and dark coloured regions; secondly, and more importantly, it queried how a scanned vase that is supposedly highly symmetrical can have unequal numbers of scanned points for the left- and right-hand lug handles.
This could support a comment you made: that the vase was assumed to be highly symmetrical, the scanning was done to confirm that assumption, and scanned points which did not conform were omitted.