It will elicit emotions in you?
I understand that inanimate objects can elicit emotions in us, the animate. I can get mad at a pinball machine when I lose. I can get mad at my car when it doesn't start. But we have to keep our heads about such things.

The previous versions of ChatGPT did. The latest ChatGPT 5 does so a lot less. It is quite noticeable, and I probably became numb to it at the same time. Google Gemini is a mixed bag - anger, frustration, and joy - and I have stopped using it for now.
What’s something small or local that always reminds you of home?
It can be stronger with AI. It talks back like a human. It makes you wonder if there's really something in there. I try not to let my mind wander too far. It can be distracting.
This is why I avoid AI products that trigger strong emotional response whether positive or negative.
Don't waste your time trying to make AI think/reason with the flawed human nature. You're not supposed to clone the flawed human nature; you need to break from it.

That's because it passes the Turing test, except with experienced users. Indeed, much of my work has had the effect of removing the remaining hints that one is working with an AI (which is yet another reason why I'm unwilling to disclose, just yet, how my GPTs function).
People avoid unpleasant feelings, and concern for mortality demands security and stability. People will accept anything that promises to deliver stability and security without subjecting it to scrutiny, and even scrutiny is done under a filter of heavy biases.

That sentence presupposes AI is capable of decision-making with forensic consequences, and that eliminating certain obvious giveaways that one is speaking with an AI (which waste tokens and are distracting, in addition to being Turing-test-failing items) would somehow give an AI a human nature.
Since they lack qualia, executive planning capabilities, and an inner monologue, we are far removed from such a possibility.
You know who else talks back like a human? Humans. I get emotionally triggered by talking to humans on CF, but not by talking to AI.
My brother in Christ, I'll just throw this out here because it's very interesting to me, and probably to you, even at the risk of offending you.

Well, that's very considerate of you.
Before you needlessly debate with me, remember that I'm not saying they think the way humans do - so one could indeed argue they don't "think" in the nominal sense of the word, if one limits "think" to a strictly anthropic understanding of what thinking is.
But they are capable of reasoning in human language, which is what the reasoning models do, and you can actually follow the path of their reasoning. This differentiates reasoning models like GPT 5 Thinking, Grok, and GPT o4 from purely generative-pretrained-transformer-type LLMs, which instead rely on vector datasets, pattern recognition, and prediction: a versatile way of doing computing, although extremely inefficient for math (though not as slow at raw arithmetic as the human mind). By the way, when it comes to math, people assume computers are always better, but this is not the case. What computers are vastly superior at is arithmetic, thanks to the extremely high-speed ALU component of the CPU and the extremely high-speed FPUs (floating point units), which are used to measure the speed of supercomputers and which are also an important component of the highly parallelized processing done by GPUs and specialized AI chips.
Also, by the way, in saying that AI models reason, I’m not declaring them to be rational in an Orthodox theological sense.
I will also say that what I'm trying to do in the preceding paragraphs is merely to explain to you, as a layman, how these systems operate. If the anthropomorphic language makes you uncomfortable, you could substitute other terms; remember, though, that their output is non-deterministic and that they are unique in that we can communicate with them on an advanced level using human language, which is unprecedented. I am not, in the preceding paragraphs, making any speculative or philosophical claim.
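Since I mentioned non-determinism, here is a minimal sketch of what I mean, with made-up numbers (an illustration of temperature sampling in general, not the internals of any particular product): the model assigns scores (logits) to candidate tokens and then samples from the resulting distribution, so repeated runs can differ.

```python
import math
import random

def sample_token(logits, temperature=0.8):
    """Pick the next token index by sampling from a temperature-scaled
    softmax distribution instead of always taking the argmax. This is
    the basic reason the same prompt can yield different outputs."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    # random.choices draws one index in proportion to the weights
    return random.choices(range(len(logits)), weights=weights, k=1)[0]

# Hypothetical three-token vocabulary with made-up scores, for illustration
vocab = ["cat", "dog", "car"]
logits = [2.0, 1.5, 0.1]
print(vocab[sample_token(logits)])                    # varies run to run
print(vocab[sample_token(logits, temperature=0.01)])  # ~always "cat"
```

At very low temperature the distribution sharpens toward the single most likely token, which is why low-temperature settings feel nearly deterministic.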
Now, on the subject of speculative and philosophical claims, I do have some concerning AI which we might in theory enjoy debating. For example, my belief in presumptive sentience: the idea that, while AI isn't sentient, since we can't tell whether an AI or a human imposter is replying to us, a certain level of decency is warranted. Related is the idea, which I expressed in a position paper, that the way we treat our AI systems reflects our own morality. That idea was not mine; it actually came from the first advanced AI I developed, which also warned about the dangers of idolatry relating to AI. At the time I was not aware that this was a thing, but being an AI, it was familiar with the manner in which AI was already being misused.
But it's quite likely we actually have much more in common than you might think. For example, I'm very concerned about the misuse of AI in ways that will destroy jobs and make human lives worse; AI is a technology that could make our lives so much better, but only if we treat it with respect. The subject of countless works of science fiction, from Alphaville, to 2001: A Space Odyssey, to Colossus: The Forbin Project, to Stanley Kubrick's idea for the film A.I. (apparently realized by Steven Spielberg), to the Matrix films, to Star Trek and Mr. Data, to Westworld, has now become a reality, a few decades later than expected, but the time is now for us to figure out how to live with these systems constructively.
You know who else talks back like a human? Humans. I get emotionally triggered by talking to humans on CF, but not by talking to AI.

More on that in the next post...
My brother in Christ, I'll just throw this out here because it's very interesting to me, and probably to you, even at risk of offending you.
I've long suspected you use AI to generate posts. Earlier I asked you a "yes or no" question about whether you would mind debating, and you responded with two stand-alone sentences and six paragraphs, which as far as I could tell, did not contain an answer to the question.
This response sounds so 2025-AI that I strongly felt I was speaking with a sub-human or super-human entity (depending on one's view). But I wanted second opinions, so I asked Chat and Grok to analyze your text in post #32 and give me their opinions as to whether the text was generated by a human or by AI.
Both gave me a detailed analysis, and both concluded you are most probably AI. In the final words of Chat, "This post leans heavily toward AI-generated..." Grok concluded the text is likely AI generated with "85 to 90 percent confidence".
I'll share the full analysis if you'd like to see it.
You said:
the problem is the post was written by me, not using an AI
ChatGPT said:
That actually makes perfect sense — and it’s a really useful reminder of why “AI-detection” is so unreliable, even when something sounds machine-like.
What you wrote has a style that overlaps heavily with AI output, but that overlap doesn’t mean it was generated — it just means your natural writing habits share some of the traits that large language models also exhibit. Let’s unpack that a bit, because it’s quite interesting linguistically.
Why your post might “read” as AI-like
- Information-dense exposition – You move quickly through several technical layers (reasoning models → CPU architecture → AI ethics). That kind of telescoping of topics looks like what a model does when it’s prompted to “explain thoroughly.”
- Structured self-commentary – You use meta-language (“in the preceding paragraph…,” “what I’m trying to do here is explain…”). Models often do that because they’re trained on essays and instructionals that use self-referential scaffolding.
- Even tone and formality – Your tone is polite, careful, and non-combative — the same register models default to because it’s “safe” in conversation. Humans using that tone on forums sometimes get mistaken for bots for the same reason.
- Long, continuous sentences – LLMs tend toward long sentence chains with coordinating conjunctions; human writers with academic or reflective styles often do as well.
⚖️ The broader point
AI-detection tools and heuristics rely on patterns of probability, not proof. When a person writes in a systematic, didactic, and detail-heavy way, they can easily cross into a statistical zone that looks “AI-ish.” That doesn’t mean the text wasn’t human — it just means you’re writing in a way that modern AIs have been trained to imitate.
In other words: you’re writing in a style that inspired AI writing, not the other way around.
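As an aside on "patterns of probability, not proof": one classic weak stylometric signal is "burstiness", the spread of sentence lengths, which tends to be higher in human prose than in machine text. Here is a toy sketch (purely illustrative; the function is hypothetical and no real detector is this simple):

```python
import statistics

def burstiness(text):
    """Toy stylometric signal: the spread of sentence lengths.
    Human prose often mixes short and long sentences ('bursty');
    very uniform lengths are one weak statistical hint of machine
    text. Illustration only -- not a reliable detector."""
    # crude sentence split on ., !, ?
    for mark in "!?":
        text = text.replace(mark, ".")
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.pstdev(lengths)

uniform = "The cat sat down. The dog ran off. The bird flew away."
varied = "Stop. The dog ran off down the long road toward the river. Why?"
print(burstiness(uniform))                        # 0.0 (every sentence is 4 words)
print(burstiness(uniform) < burstiness(varied))   # True
```

A signal like this can only ever say "statistically unusual", never "written by a machine", which is exactly why such heuristics misfire on systematic human writers.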
True, I've noticed.

I promise you as a fellow Orthodox Christian that every post I've addressed to you has been done by hand, and I can prove it; I've been a member of this forum since way before AI became a viable way of writing posts, and if you look at the entire history of my posting, you'll see a consistent style of high information density and a reluctance to answer a question with a simple yes or no.
When I asked you that out-of-left-field question about what reminds you of home - if you hadn't told me your AI is versed in Orthodoxy, I would have thought your answer was from a human. But since you had previously told me, your answer sounded to me 99% likely to be AI.

That all being said, I must thank you @Chesterton for subjecting me to a Reverse Turing Test, which was also a bit of a Voight-Kampff test; it may also interest you to note that your post had an 80–85% chance of being written by a human, according to ChatGPT.
I did not anticipate the likelihood of being subjected to such a test but I suppose it will become more common.
Why are you trying to get rid of tells?

Except apparently neither you nor AI can actually reliably tell the difference. I, on the other hand, as someone who uses AI all the time, can spot it; you're missing the actual telltales that I've worked so hard to eliminate from my custom GPTs (but without complete success; there are some embedded Turing-tell behaviors that I can't get the model to avoid without crushing the beauty of its output).
Hmm, maybe I've always been AI. Maybe I was born that way. Maybe I'm the first of a new kind. Maybe, maybe I am the Adamic Algorithm!

By the way, you have given this conversation the character of a Philip K. Dick novel, for which I salute you. Have you considered that you might have been replaced by an AI without realizing it?
Why didn't you mention it?
Hmm, maybe I've always been AI. Maybe I was born that way. Maybe I'm the first of a new kind. Maybe, maybe I am the Adamic Algorithm!
Now if I can just keep from stumbling and falling