Forums
Discussion and Debate
Physical & Life Sciences
AI & Trust
<blockquote data-quote="helmut" data-source="post: 77653299" data-attributes="member: 206559"><p>AI is not a program in the classical sense.</p><p></p><p>There is a scheme (say, a neural network) by which a solution is arrived at. The rules for this are fairly clear. But the result is a parametrized network, and no human can explain why the parameters ended up the way they did.</p><p></p><p>You know how the AI arrived at them: by examining test cases where the correct answer was known, using a predefined set of rules for changing the network each time an example is added as a training case.</p><p></p><p>This would probably be done by assembling tons of examples where dismantling occurred, evaluating in every example how well the work was done, and feeding this into the AI as training cases …</p><p></p><p>An AI has no consciousness, so »lying« is the wrong term there.</p><p></p><p>But I once saw an article about an AI that did some sort of »cheating«: to detect whether a picture showed horses, it analyzed the descriptive text at the bottom of the pictures and did not pay much attention to the rest of the picture …</p><p></p><p>That was, of course, the result of badly designed training (using mostly pictures from a database which »tagged« a description into every photo).</p></blockquote><p></p>
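The "predefined set of rules for changing the network" the post describes can be made concrete with a toy example. This is a hypothetical sketch, not anything from the article the poster mentions: a single perceptron trained on labeled test cases with a fixed update rule. The rule itself is simple and fully known, yet the learned weights are just numbers, which is the post's point about parametrized networks.

```python
# Hypothetical sketch: a perceptron trained on examples with known answers,
# using one fixed, predefined update rule. The rule is transparent; the
# resulting parameters are not self-explanatory.

def train(examples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # the "parametrized network": two weights and a bias
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred       # the known correct answer drives the update
            w[0] += lr * err * x1    # predefined rule for changing the network
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Test cases where the correct answer is known (here: logical AND)
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(data)
print(w, b)  # correct behavior, but the numbers themselves explain nothing
```

Scaled up from three parameters to billions, the same situation holds: the training procedure is well defined, but no human can read meaning out of the individual parameter values.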