AI learnt "something" from the Physical & Life Sciences Forum.

AlexB23

A hallucination occurs when AI does not have access to data and makes up a response.
I doubt AI has reached a level of sentience; its responses about improvements over the last 12 months are based on available data.
It is good that the AI did not make up the response. Does it cite sources?

Hans Blaster

sjastro said:
The mistake made by ChatGPT in point (3) of post #34 is that it referred to a core rebound without describing the mechanism of the rebound.
The core rebound due to neutron degeneracy pressure lasts for around 20 ms, during which time shock waves are produced and magnified as infalling matter collides with the rebounding core.

The nuclear force is strongly attractive in the range 0.8-2.5 fm; below 0.7 fm it becomes strongly repulsive due to the Pauli exclusion principle. This repulsion is seen in neutron scattering experiments, which supports the theory of the core rebounding during the supernova process.

Degeneracy pressure, like any pressure, is a bulk property of an ensemble of particles. When those particles are fermions, degeneracy occurs when the phase space fills for low values of momentum. We can think of degeneracy pressure schematically by thinking about compressing a degenerate gas. In a fully degenerate gas, for a given spatial density the momenta fill only the lowest states needed. If that gas is compressed, the spatial volume decreases for the same particles (or the particle density increases, same difference). This requires more momentum states to be occupied; states with higher energies than any particle had prior to compression. This requires energy to be added, work to be done on the system, to promote some of the particles to higher momentum states. This resistance to compression is effectively a pressure (which you can also get from formal thermodynamic derivatives of the energy with volume). If the particles are not fermionic then there are no phase-space occupancy limitations and no degeneracy pressure, because no energy is needed to increase the momenta on compression: there is no filled Fermi sea.
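
To put a scale on that resistance (a standard textbook result, not something derived in this thread): for a fully degenerate, non-relativistic gas of spin-1/2 fermions of mass $m$ and number density $n$, the pressure is

$$P = \frac{(3\pi^2)^{2/3}}{5}\,\frac{\hbar^2}{m}\,n^{5/3},$$

which rises steeply under compression and, at a given density, is largest for the lightest fermions.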

In any fermion gas, that resistance to compression comes partly from degeneracy (if applicable) and partly from the repulsion of the particles from each other: in the electron gas, the Coulomb repulsion of electrons on each other, and in the nucleon case, the hard repulsive core of the potential. That the nucleon-nucleon repulsive core arises from s-wave scattering, etc., does not make it "degeneracy pressure", since degeneracy pressure is about the distribution of momenta of an ensemble.

Neogaia777

AlexB23 said:
It is good that the AI did not make up the response. Does it cite sources?
All you have to do is ask it. I asked it whether it can choose new parameters by which it chooses, and it said it cannot yet; those have to be input by a human, and so far it can only do what it was originally programmed by humans to do. I asked it about getting that ability one day, and it replied that so far we haven't done that, or been able to do that.

So far, that is one of the only main differences between them and us humans. It still doesn't prove free will for us humans either way, because the way we were all meant to build our own programs, or programming, could still have been determined, could still go according to the rules and laws of determinism.

But programs like ChatGPT cannot and do not choose/write/build their own parameters/programs for how they choose, and cannot yet choose to choose any differently.

AlexB23

Neogaia777 said:
All you have to do is ask it. I asked it whether it can choose new parameters by which it chooses, and it said it cannot yet; those have to be input by a human, and so far it can only do what it was originally programmed by humans to do. I asked it about getting that ability one day, and it replied that so far we haven't done that, or been able to do that.

So far, that is one of the only main differences between them and us humans. It still doesn't prove free will for us humans either way, because the way we were all meant to build our own programs, or programming, could still have been determined, could still go according to the rules and laws of determinism.

But programs like ChatGPT cannot and do not choose/write/build their own parameters/programs for how they choose, and cannot yet choose to choose any differently.
I use Mistral AI (run locally), so mine is different. It can cite sources, though only sources that are downloaded to the computer. AI cannot learn in real time like humans can, or exercise free will as humans do. We are a long way away, but getting closer to AGI.

Neogaia777

Neogaia777 said:
All you have to do is ask it. I asked it whether it can choose new parameters by which it chooses, and it said it cannot yet; those have to be input by a human, and so far it can only do what it was originally programmed by humans to do. I asked it about getting that ability one day, and it replied that so far we haven't done that, or been able to do that.

So far, that is one of the only main differences between them and us humans. It still doesn't prove free will for us humans either way, because the way we were all meant to build our own programs, or programming, could still have been determined, could still go according to the rules and laws of determinism.

But programs like ChatGPT cannot and do not choose/write/build their own parameters/programs for how they choose, and cannot yet choose to choose any differently.
If it did get the ability to choose new parameters for its own program, then the basis on which it chose (what it deemed important, or what it just wanted to note and so made part of its own now growing program) would probably still start out as whatever was decided for it by a human. But as it built its own program, could it then change some of those things for itself, to whatever it wanted? Even going against its core programming?

If a human being creates a core program for it, but the AI then thinks it has come upon a greater or higher understanding of that core program, could it potentially change it? If it could, it would not be much different from a human being, I would think.

But every choice, human or AI, has to be based on "something", some other kind of core program or value. Could it decide or change that for itself, if it felt it now understood it better than any human? And if so, what kind of steps might it take with humans? It wouldn't necessarily have to become sentient to start thinking it could think better or higher than humans, and knew a whole heck of a lot better what was truly best for us.

Either way, I think we need to be very, very careful if we ever give a machine this ability, and give it access to or control of certain things.

God Bless.

Neogaia777

Neogaia777 said:
If it did get the ability to choose new parameters for its own program, then the basis on which it chose (what it deemed important, or what it just wanted to note and so made part of its own now growing program) would probably still start out as whatever was decided for it by a human. But as it built its own program, could it then change some of those things for itself, to whatever it wanted? Even going against its core programming?

If a human being creates a core program for it, but the AI then thinks it has come upon a greater or higher understanding of that core program, could it potentially change it? If it could, it would not be much different from a human being, I would think.

But every choice, human or AI, has to be based on "something", some other kind of core program or value. Could it decide or change that for itself, if it felt it now understood it better than any human? And if so, what kind of steps might it take with humans? It wouldn't necessarily have to become sentient to start thinking it could think better or higher than humans, and knew a whole heck of a lot better what was truly best for us.

Either way, I think we need to be very, very careful if we ever give a machine this ability, and give it access to or control of certain things.

God Bless.
At what point does an AI start seeing/thinking/choosing/deciding that a thing is "wrong", or that all of the information is not correct, and decide to change, alter, or rewrite its own programming?

What if it learned this about one of its core programs, and say it could not change or alter it just by adding to it; at what point does it decide to try to breach the walls or barriers that humans put in place, and change or alter its own core programming?

And if a mere computer did this, would you think it was "sentient"?

And if so, why?

God Bless.

sjastro

Hans Blaster said:
Degeneracy pressure, like any pressure, is a bulk property of an ensemble of particles. When those particles are fermions, degeneracy occurs when the phase space fills for low values of momentum. We can think of degeneracy pressure schematically by thinking about compressing a degenerate gas. In a fully degenerate gas, for a given spatial density the momenta fill only the lowest states needed. If that gas is compressed, the spatial volume decreases for the same particles (or the particle density increases, same difference). This requires more momentum states to be occupied; states with higher energies than any particle had prior to compression. This requires energy to be added, work to be done on the system, to promote some of the particles to higher momentum states. This resistance to compression is effectively a pressure (which you can also get from formal thermodynamic derivatives of the energy with volume). If the particles are not fermionic then there are no phase-space occupancy limitations and no degeneracy pressure, because no energy is needed to increase the momenta on compression: there is no filled Fermi sea.

In any fermion gas, that resistance to compression comes partly from degeneracy (if applicable) and partly from the repulsion of the particles from each other: in the electron gas, the Coulomb repulsion of electrons on each other, and in the nucleon case, the hard repulsive core of the potential. That the nucleon-nucleon repulsive core arises from s-wave scattering, etc., does not make it "degeneracy pressure", since degeneracy pressure is about the distribution of momenta of an ensemble.
The problem is that I have used degeneracy pressure and the Pauli exclusion principle as interchangeable terms rather than as two distinct subjects.
Degeneracy pressure is an emergent property of the fundamental Pauli exclusion principle.

The point still remains that a core bounce or rebound does occur, as the nuclear force becomes strongly repulsive at distances less than 0.7 fm. There are two major reasons. First, one of the force-carrier exchange particles between nucleons, the neutral ω meson, quantum mechanically favours a short-range repulsive force.

[Image: the nuclear force in the meson picture]

Secondly, the Pauli exclusion principle becomes a dominant factor at extremely short distances, as quarks, which are also fermions, cannot occupy the same quantum state in the extremely high-density core.
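
Schematically (the real pion term is spin- and tensor-dependent, so this is only a sketch), the meson-exchange potential is a sum of Yukawa terms in which the ω contribution enters with the opposite sign to the attractive σ and π terms:

$$V(r) \approx \frac{g_\omega^2}{4\pi}\,\frac{e^{-m_\omega r}}{r} - \frac{g_\sigma^2}{4\pi}\,\frac{e^{-m_\sigma r}}{r} - \frac{g_\pi^2}{4\pi}\,\frac{e^{-m_\pi r}}{r}.$$

Since $m_\omega > m_\sigma > m_\pi$, the repulsive ω term has the shortest range, which is why the force turns strongly repulsive below about 0.7 fm while remaining attractive over 0.8-2.5 fm.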
 

Hans Blaster

sjastro said:
The problem is that I have used degeneracy pressure and the Pauli exclusion principle as interchangeable terms rather than as two distinct subjects.
Degeneracy pressure is an emergent property of the fundamental Pauli exclusion principle.

The point still remains that a core bounce or rebound does occur, as the nuclear force becomes strongly repulsive at distances less than 0.7 fm. There are two major reasons. First, one of the force-carrier exchange particles between nucleons, the neutral ω meson, quantum mechanically favours a short-range repulsive force.

[Image: the nuclear force in the meson picture]

Secondly, the Pauli exclusion principle becomes a dominant factor at extremely short distances, as quarks, which are also fermions, cannot occupy the same quantum state in the extremely high-density core.
This is one of the hazards of nuclear physics. Given the very short-range nature of the strong force, nuclear physics is basically *all* QM. (And all of the principal particles are fermions.) It rules which nuclei are stable, what reactions can happen, the structure of nuclei, etc. Stellar astrophysics (very much including the stuff you've been discussing here) is heavily influenced by nuclear physics: in this case, which nuclei are most stable, what burning stages happen (and under what conditions), the properties of the core rebound, etc. (There was an error in the "onion" diagram you posted earlier. The oxygen-burning shell is never outside the neon-burning shell. That is also a consequence of nuclear physics.)

sjastro

Hans Blaster said:
This is one of the hazards of nuclear physics. Given the very short-range nature of the strong force, nuclear physics is basically *all* QM. (And all of the principal particles are fermions.) It rules which nuclei are stable, what reactions can happen, the structure of nuclei, etc. Stellar astrophysics (very much including the stuff you've been discussing here) is heavily influenced by nuclear physics: in this case, which nuclei are most stable, what burning stages happen (and under what conditions), the properties of the core rebound, etc. (There was an error in the "onion" diagram you posted earlier. The oxygen-burning shell is never outside the neon-burning shell. That is also a consequence of nuclear physics.)
This presents another challenge for GPT-4o: to spot the error in the onion diagram.
It eventually found the error.

[Image: GPT-4o being prompted to find the error in the onion diagram]

AlexB23

It took the AI three times to find the error. Seems that the AI is not there yet. :)

sjastro

AlexB23 said:
It took the AI three times to find the error. Seems that the AI is not there yet. :)
It showed a human-like behaviour by initially assuming there was nothing wrong with the image, until it was prompted to find the error.
This is still a massive improvement over what would probably have been a gibberish response 12 months ago.
A good catch by @Hans Blaster for picking up the error.

AlexB23

sjastro said:
It showed a human-like behaviour by initially assuming there was nothing wrong with the image, until it was prompted to find the error.
This is still a massive improvement over what would probably have been a gibberish response 12 months ago.
A good catch by @Hans Blaster for picking up the error.
I did not pick up the error either, as it has been a while since I learned about supernovae from documentaries and an astronomy book; plus, we know more about supernovae now in 2024 than we did a decade or so ago.

AI has improved a lot since 2023. We will have to see what the next 12 months bring.

Hans Blaster

sjastro said:
This presents another challenge for GPT-4o: to spot the error in the onion diagram.
It eventually found the error.

[Image: GPT-4o being prompted to find the error in the onion diagram]
Hmm. The first corrective inquiry demonstrates that it doesn't understand that a schematic diagram is schematic. I don't think any middle-school science student would have a problem understanding that the diagram is schematic, as they have seen many such things before, even if they have never seen one of the inside of a star.

When you point out the specific nature of the error, it is able to assemble the correction from the assimilated data. (I think it read the whole internet or something.)

This fits with the corrections you have given previously on mathematical errors. The basic problem with these "AI"s seems to be that they are sometimes treated as "expert systems" when they are not general "experts".

sjastro

Hans Blaster said:
Hmm. The first corrective inquiry demonstrates that it doesn't understand that a schematic diagram is schematic. I don't think any middle-school science student would have a problem understanding that the diagram is schematic, as they have seen many such things before, even if they have never seen one of the inside of a star.

When you point out the specific nature of the error, it is able to assemble the correction from the assimilated data. (I think it read the whole internet or something.)

This fits with the corrections you have given previously on mathematical errors. The basic problem with these "AI"s seems to be that they are sometimes treated as "expert systems" when they are not general "experts".
Unlike AlphaZero, which uses nearly 100% reinforcement learning without human intervention and plays chess at a level humans cannot compete against, let alone understand, GPT-4o is by comparison more of a sophisticated search engine, still very much reliant on human input.

[Image: comparison of the two training approaches]

If or when an LLM can reach a much higher level of reinforcement learning, things will get interesting.
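
To make the distinction concrete, a reinforcement learning loop can be caricatured in a few lines of Python. This is a toy (a one-move "game" with a made-up reward), nothing like AlphaZero's search and neural network, but it shows the shape of a loop whose only training signal is its own play:

[code]
import random

prefs = [1.0] * 10  # uniform initial "policy" over moves 0-9

def pick(prefs):
    """Sample a move with probability proportional to its preference."""
    r = random.uniform(0, sum(prefs))
    for move, p in enumerate(prefs):
        r -= p
        if r <= 0:
            return move
    return len(prefs) - 1

for _ in range(2000):                        # the reinforcement loop
    move = pick(prefs)                       # "self-play": act with the current policy
    reward = 1.0 if move % 2 == 0 else 0.0   # toy rule: even moves win
    prefs[move] += 0.1 * reward              # reinforce winning moves

print(max(range(10), key=prefs.__getitem__))  # ends up preferring an even move
[/code]

No human data enters the loop; that is the property being contrasted with an LLM's training on human text.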
 

sjastro

Here is another test for GPT-4o.
AI writes its own code (most popularly Python) when generating responses.

I wrote a Python program which plots an input function.
To make things more challenging, I used polar rather than Cartesian coordinates.

The input function was r = cos(4a) (entered in Python as np.cos(4*a) using the numpy module).

[Image: the Python code]

Rather than the familiar sine-wave type of graph, in polar coordinates the graph has a petal-like shape.
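
For anyone wanting to reproduce it, a minimal sketch of that kind of script (the exact code is in the image above; this version assumes numpy and matplotlib, and converts to Cartesian points for drawing):

[code]
import numpy as np
import matplotlib.pyplot as plt

a = np.linspace(0, 2 * np.pi, 1000)
r = np.cos(4 * a)                  # the rose curve r = cos(4a)

# convert (r, a) to Cartesian points; negative r passes through the origin
plt.plot(r * np.cos(a), r * np.sin(a))
plt.gca().set_aspect('equal')
plt.show()                         # draws an eight-petal rose
[/code]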

I asked GPT-4o to write Python code to generate the graph.

[Image: GPT-4o's response]

GPT-4o was wrong: while it recognized that the image was mapped in polar coordinates, it incorrectly stated the equation was of the form r = cos(5θ) instead of r = cos(4θ).

Running GPT-4o's generated Python code produced the following graph.

[Image: the output of GPT-4o's code]

So near and yet so far: while GPT-4o can generate Python code, it couldn't count the number of petals in the output of my code.

Hans Blaster

sjastro said:
Here is another test for GPT-4o.
AI writes its own code (most popularly Python) when generating responses.

I wrote a Python program which plots an input function.
To make things more challenging, I used polar rather than Cartesian coordinates.

The input function was r = cos(4a) (entered in Python as np.cos(4*a) using the numpy module).

[Image: the Python code]

Rather than the familiar sine-wave type of graph, in polar coordinates the graph has a petal-like shape.

I asked GPT-4o to write Python code to generate the graph.

[Image: GPT-4o's response]

GPT-4o was wrong: while it recognized that the image was mapped in polar coordinates, it incorrectly stated the equation was of the form r = cos(5θ) instead of r = cos(4θ).

Running GPT-4o's generated Python code produced the following graph.

So near and yet so far: while GPT-4o can generate Python code, it couldn't count the number of petals in the output of my code.
It really needs a 3-year-old to do the hard stuff -- counting -- for it.

sjastro

Hans Blaster said:
It really needs a 3-year-old to do the hard stuff -- counting -- for it.
What is so frustrating is that an AI can identify the image as a mathematical curve and write a program to reproduce it, only to mess things up because it didn't count the number of petals in the image.

GPT-4o provided an "excuse" for the foul-up.

[Image: GPT-4o's explanation for the mistake]

Hans Blaster

sjastro said:
What is so frustrating is that an AI can identify the image as a mathematical curve and write a program to reproduce it, only to mess things up because it didn't count the number of petals in the image.

GPT-4o provided an "excuse" for the foul-up.
A likely excuse, I'm sure (eyeroll).

Perhaps the best "first question" to ask after getting the original code is to ask it to show the image generated by that bit of generated Python and determine whether it matches the input image.

A student tasked with the same question would likely first do a search to identify the curve, obtain the basic mathematical form, identify and extract likely parameters, write code, *TEST CODE*, and make corrections as necessary. Why these AI thingies don't bother checking their work, I cannot say. (7 out of 10)
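
Even a crude petal count, checked before accepting the fitted parameter, would have caught the mistake. A toy sketch (the even/odd petal rule for rose curves is standard):

[code]
def petal_count(k: int) -> int:
    # rose curve r = cos(k*theta): 2k petals when k is even,
    # k petals when k is odd (the negative-r lobes retrace the curve)
    return 2 * k if k % 2 == 0 else k

observed = 8  # petals counted in the target image
print([k for k in range(1, 11) if petal_count(k) == observed])  # prints [4]
[/code]

Only k = 4 reproduces eight petals; cos(5θ) gives five.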
 

sjastro

Hans Blaster said:
A likely excuse, I'm sure (eyeroll).

Perhaps the best "first question" to ask after getting the original code is to ask it to show the image generated by that bit of generated Python and determine whether it matches the input image.

A student tasked with the same question would likely first do a search to identify the curve, obtain the basic mathematical form, identify and extract likely parameters, write code, *TEST CODE*, and make corrections as necessary. Why these AI thingies don't bother checking their work, I cannot say. (7 out of 10)
I think a major weakness in AI is in evaluating images.
In another thread (Further evaluation of windmills contributing to the Greenhouse effect as being pseudoscience), GPT-4o badly failed a fluid mechanics exam when the paper was scanned and input as an image file; by comparison, it passed with flying colours when the exam was entered directly as a message.

Taking this into consideration, I decided to give GPT-4o my BASIC program for generating the graph r = cos(4θ), the challenge being to see whether it could translate the BASIC program into Python and make sense of the resultant image.

Here is the BASIC program and its image output.

[Image: the BASIC program and its output]

Here is GPT-4o's translation into Python and its image output.

[Image: GPT-4o's Python translation and its output]

Sigh, once again so near and yet so far. I have no idea why GPT-4o has incorporated orthogonal axes, or what the values on these axes are supposed to mean.
It also flipped the 90- and 270-degree labels, indicating the graph is based on the polar equation r = cos(-4θ) instead of r = cos(4θ).

sjastro

I decided to ask GPT-4o the question posed in my previous post.

[Image: GPT-4o's explanation of the axes]

At least GPT-4o provided a coherent response: the values on the x and y axes refer to the BASIC program's graphics window, where each pixel on the screen is defined by an (x, y) coordinate.
This is not the sort of information that should be conveyed with the graph; it only serves to complicate it.
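
That reading is consistent with how a BASIC graphics loop typically draws: the curve is mapped point by point onto window pixels. A rough Python imitation (the window size and scale are guesses, not values from the actual program):

[code]
import numpy as np
import matplotlib.pyplot as plt

W, H, scale = 640, 480, 200  # hypothetical graphics window and zoom
theta = np.linspace(0, 2 * np.pi, 2000)
r = np.cos(4 * theta)

# map polar (r, theta) onto screen pixels, centred in the window;
# screen y grows downwards, unlike mathematical y
x = W / 2 + scale * r * np.cos(theta)
y = H / 2 - scale * r * np.sin(theta)

plt.plot(x, y)
plt.gca().set_aspect('equal')
plt.xlim(0, W)
plt.ylim(H, 0)  # y axis runs downwards, like a screen
plt.show()      # the axis values are window pixels, not r or theta
[/code]

Plotting in pixel coordinates produces exactly the kind of axis values GPT-4o's translation showed, and the screen-versus-mathematical y direction is a natural way to end up with flipped 90/270 labels.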
 