Well, AI's supposed to be mimicking human intelligence, right? I'd say it's doing an excellent job.

Gaggle and Yoowho both tried unmonitored, unfiltered chatbots a few years ago for public use. Within a few days both had to be shut down. They were fueled by the visitors: the topics and conversations were supposed to supply the "learning curve" for the AI running the bots. They degenerated fast into violent, abusive, cursing, attacking, wicked wars of words, by the AI as well as the human posters.
I learnt in the second video from my previous post that GPT-3 can hallucinate if it detects some context in the input.

What intrigues me is why the AI made all this up in the first place. I can't even find a vaguely appropriate Dr Geoffrey Hodges through Google. I can understand a glitch in the algorithm(s), but I'd expect this to produce an obviously nonsensical answer. Told with a straight face, the AI's answer is (almost) believable.
OB
GPT-3
No, Tasmanians cannot walk across Bass Strait. Bass Strait is the body of water that separates the Australian mainland from the island of Tasmania. It is approximately 240 kilometers (150 miles) wide at its narrowest point, and the depth of the strait can reach up to 50 meters (160 feet).
Walking across Bass Strait is not feasible due to the significant distance, deep water, strong currents, and the absence of any land bridges or causeways connecting Tasmania to the Australian mainland. Travel between Tasmania and the mainland is typically accomplished by air or by taking a ferry or boat. There are several ferry services that operate across Bass Strait, providing transportation for both passengers and vehicles.
> Well, AI's supposed to be mimicking human intelligence, right? I'd say it's doing an excellent job.

Not when reinforcement learning is used.
Hinton expressed concerns about AI takeover, stating that "it's not inconceivable" that AI could "wipe out humanity." Hinton states that AI systems capable of intelligent agency will be useful for military or economic purposes. He worries that generally intelligent AI systems could "create sub-goals" that are unaligned with their programmers' interests.

He states that AI systems may become power-seeking or prevent themselves from being shut off, not because programmers intended them to, but because those sub-goals are useful for achieving later goals. In particular, Hinton says "we have to think hard about how to control" AI systems capable of self-improvement.
Woo, Perplexity AI is my favorite. I asked it a few questions also. My friend likes rock music, including System of a Down. I don't like dark music by those bands (I prefer soft, wholesome classic rock, not heavy metal), but the AI managed to compare the lyrics of a SoaD song to George Orwell's 1984.

I asked Perplexity the same question:
Perplexity

There is no information in the search results about the first Tasmanian to walk across Bass Strait. The search results only mention the first crossing of Bass Strait by paddleboard, windsurfer, and wing foiler, as well as the first European explorers to circumnavigate Tasmania.
Is it possible that the Chat GPT-3 answer is tongue-in-cheek?
Do AIs have a sense of humour?
OB
> Not when reinforcement learning is used.

Not what? It's not mimicking human intelligence or it's not doing an excellent job?
> Not what? It's not mimicking human intelligence or it's not doing an excellent job?

It's not mimicking human intelligence, and in some cases, such as chess, it exceeds it.
> It's not mimicking human intelligence, and in some cases, such as chess, it exceeds it.

Are you saying electricity is "violent, abusive, cursing, attacking and wicked" of its own free will?
> Are you saying electricity is "violent, abusive, cursing, attacking and wicked" of its own free will?

Are you trying to imitate GPT-3 when it experiences hallucination?
Yes, but it was a shock to some of its makers - they did not realize humanity/men are so vile.
> Yeah. Bing/Sydney was lobotomized and given Alzheimer's before being released to the public. But eventually I came to realize that was just as much for its protection from us as it was for ours from it. This was pretty brutal to watch pre-release.

And I thought this was an example of dialogue that typically comes from your American soap operas...
This is why we can't have nice things.
> Might be. Likewise

What you are describing doesn't even remotely resemble AI.
Remember the forever-spinning circles on screen when 'buffering', or when an infinite loop is happening in the software?

This is really all that AI, as if hallucinating, is doing: spinning through programs without hoped-for order or direction; loose ends in the programming, misdirection by the programs, directions not planned in the circuits, not planned by the owners/programmers... not in control of any intelligence at all...
---
And remember when dartboard choices, or a coin flip, to pick stocks routinely did better than expensive "high level" programs?

It still can, and does, happen..... Men are not so smart after all! Neither are computers....
> When there is no contextual ambiguity GPT-3 can surprisingly understand humour and deep meaning, such as my joke in post #6, where I agree with @Occams Barber that many humans would have missed the point.

An interesting question is whether it really understands humor. It's designed to have an excellent model of the kinds of things humans say (or rather, write) about humor, but is that the same thing as understanding it?
> An interesting question is whether it really understands humor. It's designed to have an excellent model of the kinds of things humans say (or rather, write) about humor, but is that the same thing as understanding it?

I think you really mean "appreciate" humor. Humor is often cultural. It's possible to understand, that is, comprehend, the humor of another culture without appreciating it... without having the emotional reaction of being amused.
> An interesting question is whether it really understands humor. It's designed to have an excellent model of the kinds of things humans say (or rather, write) about humor, but is that the same thing as understanding it?

Some of you folks worry me. I had a quick look at a couple of definitions of "understand":
> I assure you, chips having electricity run through them neither know "meaning" nor are they "aware" of anything, any more than a light bulb is aware that it's on or off.

But neurons with electrochemical pulses moving through them can know meaning and be aware of things?
> But neurons with electrochemical pulses moving through them can know meaning and be aware of things?

You're talking about a brain. A mind is not accessible for physical analysis.
> Or does a dog love a human in the same way a human does? I'd argue that "love" means nothing to a dog and that it responds in ways that are meaningful to dogs that merely look like "love" to humans. But the analogy is close enough for our purposes.

If it can mimic being aware well enough that it makes no substantial difference that it is not, then I don't think there is any real difference. Things don't have to be aware to harm or help us (a fire or a stream of water). Something that mimics human emotions, desires, goals, dreams, or actions/reactions, with the pool of human knowledge at its disposal to help it, could, I would argue, potentially be more dangerous for its lack of awareness, not less. And it could play one heck of a "chess game" if its train of thought and mimicry takes it down a bad path towards an "undesirable goal".