So I get that the plan (which might get encoded directly into some program) might have problems if it attempts to make explicit references to something which may well be random .. so wouldn't the programmer just avoid doing that by creating a different set of rules which aren't dependent on random things (as you say there: the position of other vehicles)? Eg: Rule #1: always leave the immobile car 2 metres from any other two immobile (parked) vehicles(?) .. or something like that?
That is part of the decision making the AI has to do.
A driverless car is also expected to safely overtake, brake at a safe distance, and know how to park, all of which depend on the positions of other vehicles, which are random.
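A minimal sketch of the "fixed rule" idea may help here; the can_park function, the 2-metre clearance, and the car length are invented for illustration. The point it shows: the rule itself can be deterministic, but its input is still a gap measured from the (random) positions of other vehicles, so the decision still depends on them.

```python
# Hypothetical sketch of "Rule #1" from the post above: the rule is fixed,
# but its input comes from sensing other vehicles, whose positions are random.

MIN_CLEARANCE_M = 2.0  # the fixed 2-metre clearance rule (illustrative)

def can_park(candidate_gap_m: float) -> bool:
    """Apply the fixed rule to a sensed gap between two parked vehicles."""
    CAR_LENGTH_M = 4.5  # assumed car length, for illustration only
    return candidate_gap_m >= CAR_LENGTH_M + 2 * MIN_CLEARANCE_M

# The rule never changes, but candidate_gap_m is measured from randomly
# placed vehicles, so outcomes vary case by case.
print(can_park(9.0))  # True:  9.0 >= 4.5 + 4.0
print(can_park(7.0))  # False: gap too small
```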
So, there are inaccuracies creeping into sensor measurements, which may have more impact in the dynamic cases .. but there are other arrows in AI's quiver to overcome those inaccuracies, which we don't have (such as: pretty well infinite, perfect memory retention/retrieval, and pretty well instantaneous, perfect application of uninterrupted logical focus, etc.).
But when I got into industry, I was frustrated to find the code is only one challenge. The more difficult challenges are the placement and durability of the sensors, and how easily they're fooled or compromised by something like getting covered in mud.
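One such "arrow in AI's quiver", sketched minimally below: redundant sensors plus a median vote, so a single mud-fouled sensor doesn't corrupt the measurement. The sensor values and the fused_distance helper are invented for illustration.

```python
import statistics

def fused_distance(readings_m: list[float]) -> float:
    """Median of redundant range readings; robust to one wild outlier."""
    return statistics.median(readings_m)

# Three range sensors measure the same obstacle; one is covered in mud
# and returns garbage.
readings = [12.1, 11.9, 0.3]     # third sensor is fouled
print(fused_distance(readings))  # 11.9 -- the outlier is ignored
```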
Swings and roundabouts there .. and nothing comes to mind which would distinguish AI from human intelligence there .. (other than by way of us actually knowing how the two go about 'being intelligent', as the basis for saying the two are fundamentally different intelligences)?
I think what I'm getting at here is that the complex responses which we humans apparently employ in order to navigate an evidently complex universe are more or less intuitive to us, yet we are (mostly) still following a fairly simple set of rules: survival, laziness (energy conservation), avoidance of pain, bad smells, unpleasant images, etc.
I suppose then, if we are able to distinguish what those rules actually are in a given context (which involves a lot of hard work), those rules could be encoded into AI software(?)
The behaviours of that AI software would then display similar (but nonetheless slightly different) appearances to our own under those conditions(?) I don't think the individual responses would be predictable in advance, but the overall general behaviours would be very recognisable to us (intuitively so)?
Is that what we're talking about when it comes to AI 'intelligence' in this thread? (whilst trying to avoid the definitions bottleneck?)
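A minimal, hypothetical sketch of how such simple rules might be encoded; the weights, actions, and scores below are all invented for illustration. Individual choices vary with the numbers, but the overall behaviour looks recognisably "survival-driven", which is roughly the point being made above.

```python
# Encode a few simple drives as weighted scores and pick the action that
# best satisfies them. All weights and actions are illustrative.

RULES = {
    "survival": 1.0,        # strongly prefer safe outcomes
    "energy": 0.3,          # mildly prefer low-effort actions
    "pain_avoidance": 0.8,  # prefer actions expected not to hurt
}

def score(action: dict) -> float:
    """Weighted sum of how well an action satisfies each simple rule."""
    return sum(weight * action[rule] for rule, weight in RULES.items())

actions = [
    {"name": "run away",    "survival": 0.9, "energy": 0.1, "pain_avoidance": 0.8},
    {"name": "stand still", "survival": 0.4, "energy": 0.9, "pain_avoidance": 0.5},
]
best = max(actions, key=score)
print(best["name"])  # "run away": 1.57 vs 1.07
```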
Just firewall those machines against this:
Galatians 5:19 Now the works of the flesh are manifest, which are these; Adultery, fornication, uncleanness, lasciviousness,
20 Idolatry, witchcraft, hatred, variance, emulations, wrath, strife, seditions, heresies,
21 Envyings, murders, drunkenness, revellings, and such like:
You accuse me of obfuscation. I will simply note I am in good company. What I repeatedly ask you to do, and what you repeatedly refuse to do, is define your terms. It perplexes me that someone who claims to be a physicist won't do it. Regardless, in this video Feynman is asked whether machines are intelligent. Almost the first comment he makes is that, in order to answer, intelligence must be defined (see 0:28).
You deferred on that point. So, I took the liberty of doing it myself. Since, however, you declined to share, I likewise decline to share how I came to that conclusion ... though if you read my earlier replies, you'll see that actually I've already answered the question.
Good day, sir.
It may be impressive, but I'm disputing that the AI is intelligent. The Sistine Chapel is impressive, but the paintbrush isn't the artist.
This topic intrigues me because I lean towards a philosophy of science which acknowledges (incorporates?) the human mind of the observer. (I think this approach is more consistent than, say, one which relies on philosophical realism, aka: the universe exists independently of our minds.) One conclusion that can be formed with the mind-dependency mode is that what we see happening with human minds is a continual process of the mind exploring its own perceptions (and thinking capabilities).
I'd like to point out that traditional chess engines surpassed the benchmark set by AlphaZero, and the strongest engines currently are those that use a combination of traditional and neural-network techniques.
I read somewhere that traditional engines are using common Stockfish-trained neural nets, practically reducing them to Stockfish clones.
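For the curious, here is a minimal sketch of that hybrid idea, using a toy pile-taking game rather than chess: a traditional alpha-beta search whose leaf evaluator is pluggable, so a trained neural net (the idea behind Stockfish's NNUE) could stand in for a handcrafted function. Everything here is illustrative.

```python
# Toy alpha-beta (negamax) search with a pluggable leaf evaluator.

def alphabeta(state, depth, alpha, beta, evaluate, moves, apply_move):
    """Classic alpha-beta negamax; `evaluate` scores leaves for the side to move."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)
    for move in legal:
        score = -alphabeta(apply_move(state, move), depth - 1,
                           -beta, -alpha, evaluate, moves, apply_move)
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # cutoff: the "traditional" search part of the hybrid
    return alpha

# Toy game: take 1 or 2 from a pile; whoever takes the last item wins.
moves = lambda n: [m for m in (1, 2) if m <= n]
apply_move = lambda n, m: n - m
evaluate = lambda n: -1 if n == 0 else 0  # no moves left: side to move has lost

print(alphabeta(4, 10, -2, 2, evaluate, moves, apply_move))  # 1: a win with best play
```

In a hybrid engine, `evaluate` is where the neural network plugs in; the search machinery around it stays traditional.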
I never claimed to be a physicist; in fact, I have gone out of my way in other threads to point out that my background is applied mathematics.
One can only wonder if Feynman would have the same opinion if he were alive today.
I don't have to explain it in my own terms because it is based on what the experts claim, which is why I have referred you to the peer-reviewed paper in dealing with your questions.
(Confession: I only just read the paper.) They are your terms in the sense that you are defending them. The paper never defines the term "learn" (or the more specific term "reinforcement learning"), which is what I asked for. I explicitly stated that providing an accepted definition from the AI field is fine.
Instead of a handcrafted evaluation function and move ordering heuristics, AlphaZero uses a deep neural network (p, v) = f_θ(s) with parameters θ. This neural network f_θ(s) takes the board position s as an input and outputs a vector of move probabilities p with components p_a = Pr(a|s) for each action a, and a scalar value v estimating the expected outcome z of the game from position s, v ≈ E[z|s]. AlphaZero learns these move probabilities and value estimates entirely from self-play; these are then used to guide its search in future games. ...
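A minimal sketch of the (p, v) = f_θ(s) structure the excerpt describes, written in PyTorch. The layer sizes are placeholders (the real network is a deep residual CNN over board planes, not a two-layer net); the 4672 is the paper's chess move encoding (8 × 8 × 73).

```python
import torch
import torch.nn as nn

class PolicyValueNet(nn.Module):
    """Toy stand-in for f_θ(s): maps a position s to (p, v)."""
    def __init__(self, board_features: int = 64, n_actions: int = 4672):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(board_features, 256), nn.ReLU())
        self.policy_head = nn.Linear(256, n_actions)  # p: move probabilities
        self.value_head = nn.Linear(256, 1)           # v: expected outcome

    def forward(self, s: torch.Tensor):
        h = self.trunk(s)
        p = torch.softmax(self.policy_head(h), dim=-1)  # p_a = Pr(a|s)
        v = torch.tanh(self.value_head(h))              # v ≈ E[z|s], in [-1, 1]
        return p, v

net = PolicyValueNet()
p, v = net(torch.randn(1, 64))  # one encoded position s
```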
In the case of AlphaZero, I think those terms are defined by the algorithms they're using?
(Don't ask me to explain the principles behind them though ..)
J_B_ said: They are your terms in the sense that you are defending them. The paper never defines the term "learn" (or the more specific term "reinforcement learning"), which is what I asked for. I explicitly stated that providing an accepted definition from the AI field is fine.
I think the people who wrote the paper know what the term means, though .. and they should be given the benefit of any doubts one may have, particularly where one doesn't understand their definition and where AlphaZero is clearly able to demonstrate its stuff!? .. I'm expecting that means they have a familiarity with the published work and don't need to say "go read the paper yourself".
J_B_ said: They should be able to explain it themselves.
Meh .. I think that might just be an ego-based opinion there(?) .. I mean .. especially, apparently, given that most of us don't yet understand the algorithms without expending a bucket of effort/time to do so ..(?)
J_B_ said: The paper references work by Silver published in Nature as the source of "reinforcement learning". I don't have access to Nature to see what that paper says.
Bummer, eh? .. (I'm familiar with that particular feeling).
I think the people who wrote the paper know what the term means though ...
Since the paper is not to your satisfaction because of vague terms such as "learn", you want me to fill in the gaps?
I don't pretend to be an expert on the subject.
The level at which this paper is being discussed in this thread only requires a basic understanding of English.
If you're going to defend it, say the meaning is clear, and accuse me of obfuscation for thinking it's not clear, then yes.
That's the first time I recall seeing you say that. You needn't bother giving me an answer then.
It states categorically there was no human involvement in the self-training process.
I guess that's technically correct, but there must be hundreds/thousands(?) of person-years of human research gone into trial/error statistics, game theory, and search-algorithm experience, etc., that's been poured into distilling the present 'winning formula'/algorithm choices?
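A minimal sketch of what "no human involvement in the self-training process" means mechanically, using a trivial pile-taking game as a stand-in for chess; all names and constants below are invented for illustration. The program plays itself and updates its own evaluations from the outcomes, with no human games or labels anywhere in the loop (though, per the point above, the loop itself is still human-designed).

```python
import random

values = {}  # pile size -> learned value for the side to move

def choose(pile, explore=0.2):
    moves = [m for m in (1, 2) if m <= pile]
    if random.random() < explore:
        return random.choice(moves)          # occasional exploration
    # greedy: move to the position that is worst for the opponent
    return min(moves, key=lambda m: values.get(pile - m, 0.0))

for _ in range(5000):                        # self-play games; no human input
    pile, history = 7, []
    while pile > 0:
        history.append(pile)
        pile -= choose(pile)
    outcome = 1.0                            # the side that just moved won
    for state in reversed(history):
        old = values.get(state, 0.0)
        values[state] = old + 0.1 * (outcome - old)  # nudge toward the result
        outcome = -outcome                   # alternate perspective each ply

print({s: round(v, 2) for s, v in sorted(values.items())})
# piles that are multiples of 3 should drift toward -1 (lost for the side to move)
```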