.. (only the female ones, though).
Just using brute force, AI can look 15 moves ahead in any relationship. We're doomed to fall for their charms.
You expect me to ask this out on a date?
Shoulda asked her out on a date during the game!!
Once you try AI, you'll be burning for some machine learning.
You expect me to ask this out on a date?
I prefer my dates to be of the flesh and blood variety.
The problem is that "teach" and "learn" in normal use have traditionally implied some receptive understanding consciousness.
Since you are the one who has issues with "teach" or "learn", it's up to you to demonstrate the link is consistent with your definitions.
You can start by addressing this quote in the link.
The problem is that "teach" and "learn" in normal use have traditionally implied some receptive understanding consciousness.
Tech people looove to take words from normal use and apply them metaphorically to their creations. Rarely do all the implications of the traditional meaning transfer over to the new application.
I think the mismatch is happening here. @J_B_ wants a demonstration of the traditional notion of learning, which AlphaWhatever can't provide.
Well now you're downplaying the part of learning that the AI actually did. The idea of a machine programmed with no strategy but just the rules and a directive to win, and then finding, assimilating, and deploying strategies superior to any humans had ever come up with. That is pretty astonishing and belongs in some kind of different category from ordinary human tool making.
Computers don't defeat chess masters. People defeat chess masters. grin.
Well now you're downplaying the part of learning that the AI actually did.
The idea of a machine programmed with no strategy but just the rules and a directive to win, and then finding, assimilating, and deploying strategies superior to any humans had ever come up with. That is pretty astonishing and belongs in some kind of different category from ordinary human tool making.
Can I even say the machine "does" anything? Or is that word too anthropomorphic?
I will note the common anthropomorphic tendencies people have. That's the phenomenon at work here when referring to the computer "finding, assimilating, and deploying strategies". You're attributing human activities to a machine that is mindlessly executing lines of code (adaptive or otherwise).
Can I even say the machine "does" anything? Or is that word too anthropomorphic?
If I build a bunch of robots and give them the capacity to do right or wrong and I have bestowed upon them personalities in the "image" of myself, I should be ultimately responsible for any harm they commit since I could have made them differently.
Would I be reading too much into this if I detected some commentary on theodicy?
No worries. I don't hold God responsible for anything at all, personally.
I won't try to dissuade you. Feel free to hold God responsible for whatever you please.
Yes, I think so. A creator is culpable for the behavior of their creation.
Maybe we at least agree the creators of AI are held responsible for what AI does?
The question is, what do we mean by feeling? IOW, can we define it clearly enough in this context, and without equivocation, to say what it would take for a machine to have feelings?
Sure it would. It's just a different kind of 'feeling' than humans, or other creatures, have. You're being rather anthropocentric here.
One argument is that if you give an AI too much leeway (cognitive flexibility) in terms of avoiding harm to itself, it might find neutralising the threat to be an effective response (hence Asimov's Laws of Robotics).
Are you suggesting that if someone starts chucking rocks at a drone, onboard (or networked) AI should have an option to retaliate?
I think it depends on the 'depth' of the AI - for example, AlphaGo came up with a move, called 'miraculous' and 'sublime' by the world's top players (some hyperbole, perhaps), that no one had envisaged, that no human player would have played, and that was not part of any explicit coding or strategy from its creators. AIUI there is an AI poker system that, without explicit programming or strategy for it, learns to exploit its opponents' weaknesses and will even play poor moves in the short term in order to exploit this 'understanding'.
Further, let me ask you this: you have not denied that computational speed is the key to chess AI's success, which is a tacit agreement. As such, I, the slow wit, have identified the weakness that would allow me to defeat the chess AI quick wit. Would the chess AI quick wit be able to do the same (identify my weakness) if a human didn't add that feature to its programming? I think not. As such, the chess AI quick wit would not object to my suggested rule change, and would have no reaction whatsoever to the fact that it now loses every game. It would simply chug along, doing its computations. I simply can't label that "intelligence".
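For what it's worth, the "brute force look-ahead" the thread keeps invoking can be sketched in a few lines. Below is a toy minimax search over a deliberately tiny game I've picked for illustration (a pile of stones; each turn a player removes 1 or 2; whoever takes the last stone wins). It is only meant to show what "mindlessly searching every line of play" looks like as code; it is not how AlphaGo or modern chess engines actually work (those combine tree search with learned evaluation), and the game and function names are my own invention, not anything from the thread.

```python
# Toy brute-force game-tree search ("looking ahead" by exhaustion).
# Game (assumed for illustration): a pile of stones; on each turn a player
# removes 1 or 2 stones; the player who takes the last stone wins.
from functools import lru_cache

@lru_cache(maxsize=None)
def can_force_win(pile: int) -> bool:
    """True if the player to move can force a win, found by exhaustive search.

    The machine has no "strategy", only the rules and a directive to win:
    it tries every legal move and recurses on the opponent's position.
    """
    if pile == 0:
        # No move available: the previous player took the last stone and won.
        return False
    # A position is winning if some move leaves the opponent a losing position.
    return any(not can_force_win(pile - take) for take in (1, 2) if take <= pile)

def best_move(pile: int):
    """Return a winning move if one exists, else any legal move (or None)."""
    for take in (1, 2):
        if take <= pile and not can_force_win(pile - take):
            return take
    return 1 if pile >= 1 else None
```

The point of the sketch, either way one reads the debate: the search "finds" good moves without any notion of why they are good, which is exactly the gap between computation and understanding the thread is arguing over. (For this particular game the search rediscovers the known pattern that piles divisible by 3 are lost for the player to move.)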