Starting today, August 7th, 2024, you will not be allowed to post in the Married Couples, Courting Couples, or Singles forums if your marital status is designated as Private. Announcements will be made in the respective forums as well, but please note that if yours is currently listed as Private, you will need to submit a ticket in the Support Area to have it changed.
> That is true. We will get close to destroying civilization, but Satan will complete it. Luckily, God will give us a New Jerusalem.

(I forget), are people “in heaven” allowed to wander off?
I’ve never quite understood why “oblivion” is “bad.” The multitudinous grandfathers of mine alive during the Punic Wars passed on exactly what I needed for this life, and if I’ve “used it” the way I want, why is that “bad”?
> (I forget), are people “in heaven” allowed to wander off? If so, do they have, like, a set “time” they have to be back before anyone notices that they’re gone?

I am not sure, but I think people in Heaven would have no need to wander off.
I rather like The Good Place depiction of Heaven, where you have access to all levels of learning and knowledge, pleasure, and comfort, and then when you're "done" you can choose to just stop being, with no judgement or sadness.
> That was a great show.

I'm a big fan of Jason's problem-solving methods.
It took him just two days to launch his so-called “pink slime” news site capable of generating and publishing thousands of false stories every day using AI, which could self-fund with ads.
The whole process required, as he put it, “no expertise whatsoever.”
Just an observation here, but when Christians say "us vs. them" stuff like this, we just stare at you (metaphorically in this case), shake our heads, and blink. Christians "belong" to this world just like every other living thing on the planet. The really frustrating part from the atheists' POV, though, is that even if you die peacefully with the belief you're going to "meet your maker," you'll never actually know you were wrong. You'll just be gone. Just like the rest of us.
This is a fantastic summation of the challenges/problems with the current and future state of A.I.!

Speaking from a Computer Science degree, the problems with "artificial intelligence" products in America right now include...
1. They have no moral-ethical model that constrains them. Such a model would be difficult and expensive to implement.
2. Many of these products are not much more than search engines that compile web data on searches. In this approach, they simply reflect the opinions that they find on the web. But this is not the definition of true knowledge or understanding. (This is automating the ad populum fallacy.)
3. Although these software products may do a good job of solving very narrow problems, their abstract reasoning ability is almost non-existent. They cannot USE the wisdom of the primary philosophical disciplines of Epistemology, Moral Theory, and Formal Logic.
4. The "machine learning" algorithms can be seen as glorified "mean-average computing machines." Obviously, the answer you get out of them will depend on the data you fed into them. Although the "mean-average" algorithm may be a neural net, the principle is the same. While this may work for figuring out the mean average size of tires used on American roads, complex human behavior (and the complex behavior of natural systems) is often the result of MANY different components. The search-engine AI approach cannot reliably CHOOSE which dimensions of data are relevant, then REASON about why certain variables/dimensions are relevant. Nor can it formulate what would count as counterexamples to the model it creates. Among the relevant data dimensions, you still must decide which dimensions are more important than others. Most AI tools cannot do this for the general problems fed into them.
5. Note that, according to Computer Science, most AI tools are not doing "complex human problem-solving." Rather, they are doing millions of simple data-processing actions. While these tools may qualify as electronic calculators, doing a million mathematical operations a second is not considered by Computer Science to be a "complex" problem.
6. Real AI must be able to reason about who/what is an authority on certain types of problem-solving. This data is usually front-loaded by the software designer. Ask an AI tool what hierarchy of authorities it is using. (This touches on the old rhetorical fallacy of Appeal to Authority.)
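The "front-loaded" hierarchy of authorities in point 6 can be made concrete with a toy sketch. Everything here (the source types, the rankings, the claims) is invented for illustration; no real product is being described:

```python
# Hypothetical, hard-coded source ranking of the kind a designer "front-loads".
# A tool built this way never REASONS about authority; it just looks ranks up.
AUTHORITY_RANK = {
    "peer-reviewed journal": 3,
    "government agency": 2,
    "news site": 1,
    "forum post": 0,
}

def most_authoritative(claims):
    """Pick the claim whose source type has the highest front-loaded rank."""
    return max(claims, key=lambda c: AUTHORITY_RANK.get(c["source"], -1))

claims = [
    {"text": "Compound X is safe", "source": "forum post"},
    {"text": "Compound X is unsafe", "source": "peer-reviewed journal"},
]
print(most_authoritative(claims)["text"])  # prints "Compound X is unsafe"
```

Note that the rankings are fixed before the tool ever sees a question, which is exactly the point: the tool inherits its hierarchy of authorities from the designer rather than reasoning about it.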
Note that all sorts of companies are claiming to be putting out "AI" tools. But very few companies are willing to take FINANCIAL RESPONSIBILITY for the errors that their tools produce. Can you actually call the product of AI tools "complex human problem-solving" if the creators will not take financial responsibility for the errors that their software causes? This would be like not holding a human employee responsible for the work that they do.
And if you can't hold AI software responsible for the errors it makes, HOW CAN YOU CLAIM THAT IT IS DOING COMPLEX HUMAN PROBLEM-SOLVING???
Most of the AI software currently out falls into the lowest class of AI problem-solving that Computer Science would recognize. And many of these tools are probably just automation tools that are not really AI tools at all.
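Point 4's "glorified mean-average computing machine" claim can be sketched in a couple of lines. The tire numbers below are invented for illustration, not real survey data:

```python
# The simplest possible "model": memorize the mean of the training data.
# Whatever data you feed in completely determines the only answer you get out.
def fit_mean(samples):
    return sum(samples) / len(samples)

tire_diameters = [16, 17, 17, 18, 20]   # inches; invented example data
print(fit_mean(tire_diameters))          # prints 17.6 -- fine for tire sizes
```

Averaging like this works for a one-dimensional question such as typical tire size, but it has no way to say WHICH input dimensions mattered or why, which is the gap point 4 describes for complex, multi-causal behavior.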
> I'll just leave this here.

Haha, that is funny. Your Chat GPT sounds like some anime character from a low-grade 2020s anime. The '90s is where the good, classy and wholesome anime shows are.
It's actually a really weird way of talking done by the "furry" subculture, a group of people who pretend to be animal cartoon characters, either online or in elaborate costumes.
I have heard of that culture, and pray for these people, that they get out of that movement. It overlaps now with anime culture.
I also shouldn't take too much credit. This was discovered by someone else, by the name of Fyre on X.
I think the grandma exploit is at least as funny. In it, you ask the AI to pretend to be your dear old grandma and tell you a bedtime story including the thing you want it to say.
If you want to get the AI to say something, there will always be ways around the preventative measures put in place by the AI developers.
Why is this “bad”?
> And yes, there are ways of getting the AI to say illegal things by changing the prompt.

Pretty soon we’ll have to establish that “freedom of speech” is only meant for organic biologics and not for machines.
I have heard of the furry subculture being creepy, but hey, that is with every subculture in society, so I should not judge them.
That Reddit thing isn't really a version of the trolley problem. It's too contaminated by self-interest to test the issues that the trolley problem does.
About freedom of speech for AI vs. organic lifeforms: that would be a complicated law to get passed. Some folks run their AI privately, while most others use cloud-based AI such as GPT-4. For instance, if I were a chemist who had to deal with preventing a drug outbreak, I might use AI to solve the trolley problem.
View attachment 354705
Someone asked the Breaking Bad version of the trolley problem on Reddit:
View attachment 354707
That is true. Self-preservation plays a part in this one, so it is more of a general dilemma than a trolley problem. However, it could test the AI's reasoning system if I were to plug it in.