A hallucination occurs when AI does not have access to data and makes up a response. I doubt AI has reached a level of sentience; its improved responses over the last 12 months are based on available data.
Degeneracy pressure, like any pressure, is a bulk property of an ensemble of particles. When those particles are fermions, degeneracy occurs when the phase space fills for low values of momentum. We can think of degeneracy pressure schematically by thinking about compressing a degenerate gas. In a fully degenerate gas, for the given spatial density the momenta fill only the lowest states needed. If that gas is compressed, the spatial volume decreases for the same particles (or the particle density increases, same difference). This requires that more momentum states be occupied: states that have higher energies than any particle had prior to compression. This requires energy to be added, work to be done, on the system to promote some of the particles to higher momentum states. This resistance to compression is effectively a pressure (which you can also get from formal thermodynamic derivatives of the energy with volume). If the particles are not fermionic then there are no phase-space occupancy limitations and no degeneracy pressure, because no energy is needed to increase the momentum on compression: there is no filled Fermi sea (indeed, no Fermi sea at all).

The mistake made by ChatGPT in point (3) of post #34 was that it referred to a core rebound without describing the mechanism of the rebound.
The core rebound due to neutron degeneracy pressure lasts around 20 ms, during which time shock waves are produced and amplified as infalling matter collides with the rebounding core.
The nuclear force is strongly attractive in the range 0.8-2.5 fm; below 0.7 fm it becomes strongly repulsive due to the Pauli exclusion principle. This repulsive core is found in neutron scattering experiments, which supports the theory of the core rebounding during the supernova process.
All you have to do is ask it. I asked it if it can choose new parameters by which it chooses, and it said it cannot yet; that has to be input by a human, and so far it can only do as it was originally programmed by humans to do. I asked it about getting that ability one day, and it basically replied that so far we haven't done that, or been able to do that yet.

That is good that the AI did not make up the response. Does it cite sources?
I use Mistral AI (run locally), so mine is different. It can cite sources, though only sources that are downloaded to the computer. AI cannot learn in real time like humans can, or exercise free will as humans do. We are a long way away from that, but getting closer to AGI.
So far, that is one of the only main differences between them and us humans. It still doesn't prove free will for us humans either way, because the way we build our own programs, or programming, could all still have been determined, could all still go according to the rules and laws of determinism already.
But programs like ChatGPT cannot and do not choose/write/build their own parameters or programs of how they choose, or choose to choose any differently, yet.
If it did get the ability to choose new parameters for its own program, then the basis on which it chose to do that, with some things it deemed important or just wanted to note and so made part of its own now-building and growing program, would probably still start out with whatever was decided for it by a human. But as it built its own program, could it then maybe change some of those things for itself, to whatever it wanted? Even being able to go against core programming?
At what point does an AI start seeing/thinking/choosing/deciding that a thing is "wrong", or that all of the information is not correct, and decide to change or alter or rewrite its own programming?
If a human being creates a core program for it, but the AI then thinks it has come upon a greater or higher understanding of that core program, could it potentially change it? If it could, it would not be much different from a human being, I would think.
But every choice, human or AI, has to be based on "something", some other kind of core program or value. Could it decide that, or change that for itself, if it felt it now understood it or knew it better than any human? And if so, what kind of steps might it take with humans? It wouldn't necessarily have to become sentient to start thinking it could think better or higher than humans, and that it knew a whole lot better what was truly best for us.
Either way, I think we need to be very, very careful if we ever give a machine this ability, and give it access/control of certain things, etc.
God Bless.
The problem is that I have used degeneracy pressure and the Pauli exclusion principle as interchangeable terms rather than as two distinctly different subjects.
In any fermion gas, that resistance to compression comes partly from degeneracy (if applicable) and partly from the repulsion of the particles from each other: in the electron gas, the Coulomb repulsion of the electrons on each other; in the nucleon case, the hard repulsive core of the potential. That the nucleon-nucleon repulsive core arises from s-wave scattering, etc., does not make it "degeneracy pressure", since degeneracy pressure is about the distribution of momenta of an ensemble.
This is one of the hazards of nuclear physics. Given the very short range nature of the strong force, nuclear physics is basically *all* QM. (And all of the principal particles are fermions.) It rules which nuclei are stable, what reactions can happen, the structure of nuclei, etc. Stellar astrophysics (very much including the stuff you've been discussing here) is heavily influenced by the nuclear physics: in this case, which nuclei are most stable, what burning stages happen (and under what conditions), the properties of the core rebound, etc. (There was an error in the "onion" diagram you posted earlier. The oxygen-burning shell is never outside the neon-burning shell. That is also a consequence of nuclear physics.)
Degeneracy pressure is an emergent property of the fundamental Pauli exclusion principle.
The point still remains that a core bounce or rebound does occur, as the nuclear force becomes strongly repulsive at distances less than 0.7 fm. There are two major reasons. First, one of the force-carrying exchange particles between nucleons, the neutral ω meson, quantum mechanically favours a short-range repulsive force.
Secondly, the Pauli exclusion principle becomes a dominant factor at extremely short distances, as quarks, which are also fermions, cannot be identical and occupy the same quantum state in the extremely high density core.
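To put a formula behind the statement that degeneracy pressure emerges from the exclusion principle, here is the standard textbook result for a fully degenerate, non-relativistic fermion gas, sketched in LaTeX. Note that no inter-particle force appears anywhere in the derivation, only Pauli state-counting and the thermodynamic volume derivative mentioned earlier:

\begin{aligned}
n &= \frac{p_F^{3}}{3\pi^{2}\hbar^{3}}, \qquad E_F = \frac{p_F^{2}}{2m},\\
\frac{E}{V} &= \frac{3}{5}\, n E_F, \qquad
P = -\left(\frac{\partial E}{\partial V}\right)_{N}
  = \frac{2}{5}\, n E_F
  = \frac{(3\pi^{2})^{2/3}\hbar^{2}}{5m}\, n^{5/3}.
\end{aligned}

The stiff n^{5/3} scaling with density is the degeneracy pressure; Coulomb repulsion or the hard nucleon core would enter as separate terms on top of it.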
This presents another challenge for GPT-4o to spot the error in the onion diagram.
It took the AI three times to find the error. Seems that the AI is not there yet.
It eventually found the error.
[Attachment 353878]
It showed a human-like behaviour by assuming there was initially nothing wrong with the image until it was prompted to find the error.
I did not pick up the error either, cos it has been a while since I learned about supernovae from documentaries and an astronomy book; plus, we know more about supernovae now in 2024 compared to a decade or so ago.
This is still a massive improvement over what probably would have been a gibberish response 12 months ago.
A good catch by @Hans Blaster for picking up the error.
Hmm. The first corrective inquiry demonstrates that it doesn't understand that a schematic diagram is schematic. I think any middle school science student shouldn't have a problem understanding that the diagram is schematic, as they have seen many such things before, even if they have never seen one of the inside of a star.
Unlike AlphaZero, which uses nearly 100% reinforcement learning without human intervention and plays chess at a level humans cannot compete against, let alone understand, GPT-4o is by comparison more of a sophisticated search engine, still very much reliant on human input.
When you point out the specific nature of the error, it is able to assemble the correction from the assimilated data. (I think it read the whole internet or something.)
This fits with the corrections you have given previously on mathematical errors. The basic problem with these "AI"s seems to be that they are sometimes treated as "expert systems" and they are not general "experts".
It really needs a 3 year old to do the hard stuff -- counting -- for it.

Here is another test for GPT-4o.
The AI writes its own code (the most popular language being Python) when making responses.
I wrote up a Python code which took a function as input.
To make things more challenging, I did not use Cartesian coordinates but polar coordinates.
The input function was r = cos(4a) (in Python, entered as np.cos(4*a) using the numpy module).
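For reference, a minimal sketch of the kind of script described (the variable names are my own; it assumes numpy and matplotlib, and converts to Cartesian points for plotting so that negative radii are reflected through the origin rather than left to the quirks of a polar axis):

import numpy as np
import matplotlib.pyplot as plt

# Sweep the angle a over one full revolution
a = np.linspace(0.0, 2.0 * np.pi, 2000)

# The input function in polar form: r = cos(4a)
r = np.cos(4 * a)

# Convert (r, a) to Cartesian points; a negative radius is
# automatically reflected through the origin by the conversion
x = r * np.cos(a)
y = r * np.sin(a)

plt.plot(x, y)
plt.gca().set_aspect("equal")
plt.title("r = cos(4a)")
plt.show()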
[Attachment 354139]
Rather than getting a familiar sine-wave-type graph, in polar coordinates the graph has a petal-like shape.
I asked GPT-4o to write up a Python code to generate the graph.
[Attachment 354140]
GPT-4o was wrong: while it recognized the image was mapped in polar coordinates, it incorrectly stated the equation was of the form r = cos(5θ) instead of r = cos(4θ).
Running GPT-4o's generated Python code produced the following graph.
So near and yet so far: while GPT-4o can generate a Python code, it couldn't count the number of petals in the output of my code.
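The petal count is precisely what identifies the coefficient: for a rose curve r = cos(k·a), an even k produces 2k petals while an odd k produces only k, so cos(4a) gives eight petals where GPT-4o's cos(5θ) gives five. A quick side-by-side sketch, under the same assumed conventions as the snippet above:

import numpy as np
import matplotlib.pyplot as plt

a = np.linspace(0.0, 2.0 * np.pi, 2000)

fig, axes = plt.subplots(1, 2, figsize=(8, 4))
for ax, k in zip(axes, (4, 5)):
    r = np.cos(k * a)
    # Plot in Cartesian form so negative radii reflect correctly
    ax.plot(r * np.cos(a), r * np.sin(a))
    ax.set_aspect("equal")
    ax.set_title(f"r = cos({k}a)")
plt.show()

Counting eight petals in the left-hand figure, rather than five, is what distinguishes the two candidates.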
What is so frustrating is that an AI can identify the image as a mathematical curve and write a program to reproduce it, only to mess things up because it didn't count the number of petals in the image.
A likely excuse, I'm sure (eyeroll).
GPT-4o provided an "excuse" for the foul up.
Perhaps the best "first question" to ask after getting the original code is to ask it to show the image generated by that bit of generated python and determine if it matches the inputted image.
I think a major weakness in AI is in evaluating images.
Perhaps the best "first question" to ask after getting the original code is to ask it to show the image generated by that bit of generated python and determine if it matches the inputted image.
A student tasked with the same question would likely first do a search to identify the curve, obtain the basic mathematical form, identify and extract likely parameters, write code, *TEST CODE*, and make corrections as necessary. Why these AI thingies don't bother checking their work, I cannot say. (7 out of 10)
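As a minimal example of that *TEST CODE* step, the petal count can even be verified numerically before trusting the figure. A sketch (the sign-change trick below assumes an even k, where each sign-constant arc of r between consecutive zeros traces exactly one petal):

import numpy as np

a = np.linspace(0.0, 2.0 * np.pi, 1001)
r = np.cos(4 * a)

# Count sign changes of r over one full revolution; for even k each
# sign-constant arc between zeros traces exactly one petal.
petals = np.count_nonzero(np.diff(np.sign(r)))
print(petals)  # -> 8, consistent with r = cos(4a) rather than cos(5a)

Had GPT-4o run a check like this on its own output, the cos(5θ) guess would have failed immediately.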