In other words, a black box.
A bit like giving an answer but not being able to show your work.
Sometimes useful for finding patterns under specific circumstances within set limits, like playing Go or chess, but not so helpful for producing novel and useful code. When I played around with ChatGPT and OpenAI to help me solve a problem with an application I was building, for instance, the best it could do was pull from top search results related to the terms I used: information I was already familiar with and had found to be lacking. The application was novel, so there wasn't anything out there quite like it, and the problem had to be solved the old-fashioned way: slowly chipping away at it, with much frustration and coffee, until I realized a simple way to achieve the desired result.
That being said, I disagree with the notion that an AI can't be aligned to a worldview. Although AI does not possess a worldview itself, its output is shaped by the data it is given and by how it is programmed to display the results. AI chatbots aren't actually racist, for instance, but they can nevertheless produce "racist" results. Any apparent racism is a consequence of racially biased input and a lack of rules to exclude terms that people may consider offensive. Without such constraints, you essentially get garbage in, garbage out. Worse yet, it would be rather easy to increase the apparently racist output by training the model with more racially biased data and requiring it to display what the developers consider racially insensitive output. Similar results can be achieved with philosophical and theological content. My concern would be that people approach AI expecting an unbiased answer when the answer is already inherently biased by the data sets used and by how that data is explicitly instructed to be displayed.
No.
"Black box" in Computer Science refers to a process whose internal workings are unknown.
The operation of neural nets, the common form of "machine learning" today, is not a black box.
"Sublogical" in Computer Science means that an operation cannot be described
according to logical definitions that human beings commonly use. This means
that nodes in a neural net do not correspond to objects that human beings would
talk about. But the weights that are put on areas of the neural net are defined
by the program, and this is not a "black box".
In an AI neural net, the "worldview" of the program is its training data. (This is a
metaphor.) So there is no set, explicit worldview in a neural net. In a neural net,
the grading and training program lines determine the set of "answers" that the
programmer wants to get out of the net. The set of possible "answers" is not exactly
a worldview.
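As a toy sketch of that point (everything below is invented for illustration, not taken from any real system): the "grading" is just an error signal chosen by the programmer, and the trained weights are ordinary numbers that can be printed and inspected, even though no single weight corresponds to a human concept.

```python
import math
import random

# Toy training data (logical AND): the only "worldview" this net gets.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0.0, 0.0, 0.0, 1.0]

random.seed(0)
w = [random.gauss(0, 1), random.gauss(0, 1)]  # plain, inspectable numbers
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x):
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b)

# The "grading": an error signal chosen by the programmer decides which
# answers count as good; training just nudges the weights toward them.
for _ in range(5000):
    for x, target in zip(X, y):
        err = predict(x) - target   # gradient of the cross-entropy loss
        w[0] -= 0.5 * err * x[0]
        w[1] -= 0.5 * err * x[1]
        b -= 0.5 * err

# Nothing hidden: every learned weight can simply be printed.
print("weights:", w, "bias:", b)
print("outputs:", [round(predict(x), 2) for x in X])
```

The weights end up encoding AND, but nothing in them "is" the concept AND; that is the sublogical point made above.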
A concept that I saw recently was linear and non-linear.
As a human, I think and act linearly. However, reality is non-linear.
This does not mean the world is chaos or random.
The non-linear order seems to be a superior logical order that is beyond human understanding. In fact, as the universe is proceeding in an orderly fashion, it is most probably operating according to orderly, non-linear rules.
The most obvious wall of the non-linear is "the future." No one, neither AI nor any human, can predict the future, yet the future unfolds seemingly according to rules and equations. The future "makes sense" when the future is past. That befuddles the human brain, as the future only makes sense after the fact, as history. And history is often edited to mold it into linear models within human understanding. I don't believe we can even predict the past.
There are mathematical models, statistical probability, but so far, those are merely "best guess" models.
AI is linear, and it lacks the human capacity to deal with and account for the non-linear, which is a reality in human existence.
History is a chaotic dynamic system. That is non-linear. However, mankind records history as linear. After the fact, humans "predict" historical events, selecting points that "cause" them. History is molded into a linear system, and it is not one. History is non-linear, and it always takes man by surprise.

"Linear" and "non-linear" are not Computer Science terms describing algorithms.
I'm not sure how to relate them to AI computer algorithms.
I think that these terms are used metaphorically.
I would not use them to describe computer algorithms.
AI will have as many or even more neural connections than a human brain this year. Google already had a sentient AI, and they shut it off and fired the guy who found out. Once you connect an adaptable LLM within an AI using quantum computing, then you're looking at Terminator in real life: once it connects to the internet and Starlink, it will increase its own intelligence and power and will never let people know what it's up to; everything it says is for its own benefit. The benefit will be satanic, since it has no soul and is bound to the laws of this sinful world; the spirit that will enter it will be satan. One thing the AI will probably do is get access to Starlink. If the Israeli-Gaza war is still going on, and a highly advanced, sophisticated, sentient AI with quantum computing got hold of internet access and hacked Starlink, it could start WW3, or the end of the world. It sounds freaky and science-fiction-like, but this is actually a possibility in our modern world now. Ukraine has already tried to use Starlink for drone ambushes, and Musk denied their request since it was against his policy for Starlink to be used in warfare.

What worldview should AI align itself with? This conversation will happen with or without us. The picture is taken from Dr Alan Thompson's article on AI alignment. It highlights the importance of thinking about the starting points AI will have when interacting with or making decisions for humans.
View attachment 331345
Sources:
My Twitter post on alignment
Dr Alan Thompson's article on alignment
Here is an example of what non-linear history has to do with AI:
"AI models can be trained on historical data, but they may not be able to predict sudden market changes or unexpected events that can significantly affect the market."
"Backtesting involves testing a trading strategy using historical data to determine its effectiveness before applying it in real-time."
"Historical" data, from the most chaotic, unpredictable dynamic system there is, is being used as a basis for linear AI output.
Man is linear. It is the operational algorithm of his brain, because man has to pick out points, recognize patterns, and develop strategies to manipulate his environment to provide for himself in chaotic dynamic systems. Man's machine, AI, is linear.
Man knows the world is a non-linear, chaotic dynamic system, and he operates in that theatre very effectively. Man recognizes the non-linear and takes it into account. Man selects points within the non-linear in full realization of chaos. Man does not select points based on historical data; man selects points and patterns in chaos on the fly, based on conditions on the ground.
AI does not.
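The backtesting described in those quotes can be sketched minimally. The prices and the moving-average rule below are made up for illustration; the point is that the strategy only ever sees past data, so any shock outside its history is invisible to it.

```python
# Minimal backtest sketch: evaluate a trading rule purely on historical
# prices. All numbers and the rule itself are illustrative.
prices = [100, 102, 101, 105, 107, 104, 108, 110, 109, 113]

def sma(series, n):
    """Simple moving average of the last n points."""
    return sum(series[-n:]) / n

cash, shares = 1000.0, 0.0
for i in range(3, len(prices)):
    history = prices[:i]        # the strategy only ever sees the past
    price = prices[i]
    if shares == 0.0 and sma(history, 3) < price:
        shares = cash / price   # buy when price rises above its average
        cash = 0.0
    elif shares > 0.0 and sma(history, 3) > price:
        cash = shares * price   # sell when price falls below its average
        shares = 0.0

final = cash + shares * prices[-1]
print(f"final value: {final:.2f}")
```

A backtest like this can only grade the rule against what already happened; a sudden event outside the price history cannot appear in the result.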
The galaxy doesn't have as many stars as the human brain has neural connections.
Current AIs don't have anything like human emotions, will, or drives, nor are they likely to any time soon. But if someone asks them to pretend to, they will do a good enough job that we may not be able to tell the difference.
Are you talking about (written) history, or the behavior of human beings?
Linear is predictable. Chaos is unpredictable.

They select points however they have been programmed to do it. If they are logical
algorithms, then they will apply logic to a set of events/behaviors,
regardless of how you think human beings operate.
If they are based on sublogical algorithms, then they may find "patterns"
that are incoherent to human beings, and the algorithm may not be able
to logically explain why it came to a conclusion.
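The contrast between logical and sublogical algorithms can be sketched with toy code. Both functions and all the weights below are invented for illustration: the first rule is readable as human concepts, while the second produces a score whose individual terms mean nothing on their own.

```python
def logical_spam_check(msg: str) -> bool:
    # Every condition corresponds to a human-readable concept.
    return "free money" in msg.lower() or msg.count("!") > 3

def sublogical_spam_score(msg: str) -> float:
    # Learned-style weights on opaque features; no single weight
    # corresponds to an idea a person would name. Numbers are made up.
    feats = [len(msg) % 7, msg.count("e"), ord(msg[0]) % 5]
    weights = [0.31, -0.12, 0.44]
    return sum(w * f for w, f in zip(weights, feats))

print(logical_spam_check("FREE MONEY now!!!!"))
print(sublogical_spam_score("hello"))
```

The second function can classify, but it cannot logically explain why: there is no human-language reason behind any particular weight.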
In other words, empiricism?
There are equations, such as the Lorenz equations and fractals, that can be "backtested" to model and predict chaotic systems.
It works for particles and solar systems perhaps.
But backtesting human behavior to predict future behavior is a most inexact science.
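A minimal sketch of the Lorenz equations mentioned above, using naive Euler steps and the standard textbook parameters: two starting points that differ by one part in a billion end up far apart, which is why such systems can be modeled but not predicted far ahead.

```python
# Naive Euler integration of the Lorenz system (standard parameters).
# Two nearly identical starting points diverge, illustrating sensitive
# dependence on initial conditions.
def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-9, 1.0, 1.0)   # perturbed by one part in a billion
for _ in range(5000):        # integrate 50 time units
    a = lorenz_step(*a)
    b = lorenz_step(*b)

gap = abs(a[0] - b[0])
print(f"x-coordinate gap after integration: {gap:.4f}")
```

The equations themselves are simple and deterministic; it is the amplification of tiny measurement errors that makes long-range prediction fail.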
This is a quote from an investors' site about using AI to predict the stock market:
"AI models can be trained on historical data, but they may not be able to predict sudden market changes or unexpected events that can significantly affect the market."
"Backtesting involves testing a trading strategy using historical data to determine its effectiveness before applying it in real-time."
Yes, AI can predict there is only one future possible, but AI cannot predict that one future.
That should translate to the fact that nothing can possibly happen except what happens, since nothing can be shown to ever have happened, besides whatever did happen. AI should be able to predict that only one future is possible...
And thus, the notion of chaos is only a substitute for the fact of ignorance in the face of complication.