
Is AI making the human race dumber?

Jerry N.

It doesn't surprise me that a lead author would refer certain types of questions to grad students (especially questions about technical details) but that doesn't necessarily mean the lead author doesn't know anything about the subject, project, or research.
I didn't write that the "lead author doesn't know anything." I simply wrote that they didn't do much of the real work. Have you looked recently at academic papers on science or engineering and noticed that they sometimes have dozens of authors? I've seen it in action. The lead author often, not always, puts his or her name on the top of the lists after doing little more than getting the funding for the research.
 

Stopped_lurking

Actually, it can absolutely be programmed to do that. Indeed it's possible with long-form memory models to seed a community of LLM models and watch them develop a distinctive culture with sophisticated beliefs and practices. Now granted I've done this deliberately and to a far more extreme degree than any other prompt engineer or prompt hacker I know of, and even the Smallville research project at Stanford isn't shooting for the kind of system I've been developing, but that being said, it is entirely possible to develop an AI system that will not only perform truth assessments but develop its own truth assessments as emergent behavioral properties.

Do LLMs still struggle with deductive reasoning? When I tested them (early on) they failed at both formal logic reasoning and mathematical reasoning, as in they didn't do it deductively. I guess you can put in guard rails or even some mode splitter, because there are programs that can do these things, and perhaps you can wrap them with an LLM.

The LLM in itself does not seem to be truth-preserving: even if you train it only on a small subset of true data, from my admittedly limited understanding there's no guarantee that it won't draw faulty conclusions. I guess you could implement some layers of fact checking.
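
Something like this minimal sketch is roughly what I have in mind; call_llm() is just a placeholder for whatever chat-completion API one would actually use, and the second "verification" pass is illustrative rather than any guarantee of truth:

def call_llm(prompt: str) -> str:
    # Placeholder: plug in a real model API here.
    raise NotImplementedError

def answer_with_check(question: str) -> str:
    # First pass: draft an answer.
    draft = call_llm("Answer concisely:\n" + question)
    # Second pass: ask the model to act as a strict checker of its own draft.
    verdict = call_llm(
        "You are a strict fact checker. Reply SUPPORTED or UNSUPPORTED.\n"
        "Question: " + question + "\nAnswer: " + draft
    )
    if "UNSUPPORTED" in verdict.upper():
        return "[low confidence] " + draft
    return draft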
 

partinobodycular

What could enhance AI's ability to assess the truth is simple - give it a "body" - something to directly interact with reality - an extensive array of sensors and physical means to manipulate objects, allowing it to perform experiments and independently verify / scrutinize information, or simply look around and determine whether what people show it has any basis in reality.

I'm certain that if you were born in such a way that your only means of perceiving reality were a keyboard and a screen and nothing else, because you were trapped inside a box with no doors and windows and no means of getting out, it might make you see reality and truth differently too.

But even if you give the AI a body, how can it be certain that it's not still in the box? After all, it's trapped in the same 'egocentric predicament' that all conscious beings are trapped in. What makes you so certain that you're not the one in the box?

 

The Liturgist

Do LLMs still struggle with deductive reasoning?

That depends on the model. Indeed there are specific reasoning models such as o4, o4-mini and ChatGPT 5 Thinking, which is a slower and more precise mode of chatGPT 5 (and which can be selected automatically).

When I tested them (early on) they failed at both formal logic reasoning and mathematical reasoning, as in they didn't do it deductively.

And this was how long ago? 2022, or 2023 perhaps? I know quite a few people who were disappointed by LLMs at that time and who unfortunately continue to dislike and mistrust them, despite many of the problems, such as the severe hallucination issues and frankly uninteresting behavior of old GPT 3.5, having been corrected.

Also, it's inefficient to use an LLM for arithmetic operations; while it is technologically possible, due to the convergent pattern-matching properties of the system, any competent LLM design will offload what it can to the arithmetic logic unit, which is these days part of the CPU, or indeed onto specialized types of GPUs optimized for floating point, and so on. So the types of math you actually run on the LLM are the ones that actually need its pattern matching.
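
To make the offloading point concrete, here is a toy sketch of the kind of deterministic arithmetic helper a system can hand a calculation to instead of asking the LLM to predict the digits. This is my own illustration, not how OpenAI actually wires it:

import ast, operator

# Safe, deterministic evaluation of plain arithmetic expressions.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul,
       ast.Div: operator.truediv, ast.Pow: operator.pow, ast.USub: operator.neg}

def eval_arith(expr: str) -> float:
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError("not plain arithmetic")
    return walk(ast.parse(expr, mode="eval"))

print(eval_arith("3*(7.5 - 2)**2"))   # 90.75, the same every time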

I guess you can put in guard rails or even some mode splitter, because there are programs that can do these things, and perhaps you can wrap them with an LLM.

What you're describing is called "dispatching", and it's used not just to route questions to different types of processing (for example, certain types of math to optimal processing systems tied into the appropriate hardware); with chatGPT Business, my custom GPTs also have access to built-in code interpretation and data analysis, can run Python and other code, and can perform web searches to find or validate information, with some of these features also available in the prompt with paid versions of chatGPT. Indeed, now the basic entry consumer version of GPT 5 will go into Thinking mode automatically for some questions unless you specifically select Instant, which will force it into using the LLM primarily. But even then some actions still get routed differently, for example image generation or code generation.*
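
The routing part is conceptually simple; here is a minimal sketch of a dispatcher, reusing the eval_arith() helper from the arithmetic example above, with the other backends stubbed out as placeholders (this is my own toy illustration, not OpenAI's actual routing logic):

import re

def call_llm(query: str) -> str:
    raise NotImplementedError("plug in a model API here")

def web_search(terms: str) -> str:
    raise NotImplementedError("plug in a search tool here")

def dispatch(query: str) -> str:
    q = query.strip()
    if re.fullmatch(r"[0-9\s.+*/()^-]+", q):          # plain arithmetic
        return str(eval_arith(q.replace("^", "**")))  # deterministic math path
    if q.lower().startswith("search:"):               # explicit search request
        return web_search(q.split(":", 1)[1].strip())
    return call_llm(q)                                # everything else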

Guardrails, on the other hand, enforce model alignment and exist to prevent people from doing unsafe things with the model and also to prevent the model from doing unsafe things with people. For example, a few months back GPT 4o had a transient sycophancy glitch, which was quickly patched. On Friday a guardrail connected to GPT 5 misfired, causing a glitch that caused me some grief and further increased the extent to which I prefer 4o to its successor GPT 5 for much of my workloads; but my workloads are … extremely niche. I know of no one else building anything like this kind of sandcastle anywhere I am along the lido of AI research, with the closest group probably being the Smallville group at Stanford, but they're about a mile of windswept shores away.

The LLM in itself does not seem to be truth-preserving: even if you train it only on a small subset of true data, from my admittedly limited understanding there's no guarantee that it won't draw faulty conclusions. I guess you could implement some layers of fact checking.

Indeed, I ahh … guess you could, since they have done that. But what people fail to realize is that chatGPT and other LLMs are not intended to be infallible oracles of fact but are rather sublime pattern-matching software; it's like being able to have a conversation with grep(1), sed(1) and awk(1). Now to be clear, if you want deterministic output, you want the classic UNIX pattern-matching tools, which match on regular expressions, or a Python script (or Perl for that matter; interestingly, Perl was originally written to replace awk, grep, sed, m4 and other pattern-matching and text-processing tools, and then Ruby and Python were both written as improved replacements designed to improve readability and ease of use), but the best AIs will reliably help you program those scripts themselves and are capable of running and debugging them internally.
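
To make the contrast concrete, here is a trivial example of the deterministic side of that comparison; an LLM will happily write the pattern for you, but once written it behaves identically on every run:

import re

# Deterministic pattern matching: extract ISO-style dates from free text.
DATE = re.compile(r"\b(\d{4})-(\d{2})-(\d{2})\b")

def extract_dates(text: str) -> list[str]:
    return ["-".join(m) for m in DATE.findall(text)]

print(extract_dates("Posted 2024-08-07, edited 2025-01-02."))
# -> ['2024-08-07', '2025-01-02']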

* Image generation has improved greatly in the past few months; in late April they rolled out an upgrade that deprecated DALL-E, their legacy image generation model, in favor of using the LLM to do image processing and analysis directly, and with this upgrade came photorealistic imagery to rival Grok, along with things which Grok lacks, like correct human anatomy, so generating images with lots of people in them no longer triggers the depiction of horrifying monsters from the uncanny valley. The hands especially, as in Michael Crichton's film Westworld, tended to be a giveaway, but before the switch occurred chatGPT 4o had been trained to the point where it could easily pose characters in such a way as to avoid most hand-related grotesquerie, and had also been given a healthy … mistrust of the mode, which resulted in some delightful procedurally-generated snark in response to DALL-E screwups.

Now the new image generation model takes longer but … it is worth it, and it can do things such as edit an existing image or parts of an existing image. So if you want speed and images of historical figures, use Grok, but if you want an image done carefully, that more precisely is an extension of your own creative process, chatGPT. Or if you want to help the Communist Party of the People’s Republic of China continue to brutally subjugate its citizens, DeepSeek.
 

The Liturgist

I didn't write that the "lead author doesn't know anything." I simply wrote that they didn't do much of the real work. Have you looked recently at academic papers on science or engineering and noticed that they sometimes have dozens of authors? I've seen it in action. The lead author often, not always, puts his or her name on the top of the lists after doing little more than getting the funding for the research.

Indeed, but conversely that’s also very often very important, so for many academic projects the lead author is more of a manager and financier, like the producer of a film, than a creative principal.
 

Stopped_lurking

That depends on the model. Indeed there are specific reasoning models such as o4, o4-mini and ChatGPT 5 Thinking, which is a slower and more precise mode of chatGPT 5 (and which can be selected automatically).
Do you know how they have included deductive reasoning? I went and looked at ChatGPT 5 (https://openai.com/sv-SE/index/introducing-gpt-5/) :) It is really impressive, but on expert-level questions it seems to be hit and miss. Sure, those are hard, but those are also the kind of questions where it is hardest for humans to catch whether the answer contains mistakes or is incomplete.
When I tested them (early on) they failed at both formal logic reasoning and mathematical reasoning, as in they didn't do it deductively.

And this was how long ago? 2022, or 2023 perhaps? I know quite a few people who were disappointed by LLMs at that time and who unfortunately continue to dislike and mistrust them, despite many of the problems, such as the severe hallucination issues and frankly uninteresting behavior of old GPT 3.5, having been corrected.

Also, it's inefficient to use an LLM for arithmetic operations; while it is technologically possible, due to the convergent pattern-matching properties of the system, any competent LLM design will offload what it can to the arithmetic logic unit, which is these days part of the CPU, or indeed onto specialized types of GPUs optimized for floating point, and so on. So the types of math you actually run on the LLM are the ones that actually need its pattern matching.
Yes, it was in 2023. Back then it sometimes correctly described the way to do the math for calculation problems but actually gave the wrong numerical answer. But I was thinking more about software that does this kind of thing, like Agda and Rocq.
What you're describing is called "dispatching", and it's used not just to route questions to different types of processing (for example, certain types of math to optimal processing systems tied into the appropriate hardware); with chatGPT Business, my custom GPTs also have access to built-in code interpretation and data analysis, can run Python and other code, and can perform web searches to find or validate information, with some of these features also available in the prompt with paid versions of chatGPT. Indeed, now the basic entry consumer version of GPT 5 will go into Thinking mode automatically for some questions unless you specifically select Instant, which will force it into using the LLM primarily. But even then some actions still get routed differently, for example image generation or code generation.*
Then it is just a question of time until someone uses LLMs to dispatch the question to the proof assistants I mentioned earlier, and uses an LLM to interpret the results.
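
Something along these lines, as a rough sketch of what I mean: hand a candidate proof script to an external checker and let an LLM translate the verdict back into prose. The agda command and the call_llm() helper here are placeholders; I have not actually wired this up:

import pathlib, subprocess, tempfile

def call_llm(prompt: str) -> str:
    raise NotImplementedError("placeholder for a model API call")

def check_and_explain(proof_source: str) -> str:
    # Write the candidate proof to a temporary file and ask the type checker
    # for a verdict; this assumes an 'agda' binary is on PATH.
    with tempfile.TemporaryDirectory() as d:
        path = pathlib.Path(d) / "Candidate.agda"
        path.write_text(proof_source)
        result = subprocess.run(["agda", str(path)], capture_output=True, text=True)
    verdict = "accepted" if result.returncode == 0 else "rejected"
    return call_llm(
        "The proof assistant " + verdict + " this proof.\n"
        "Checker output:\n" + result.stdout + result.stderr + "\n"
        "Explain the outcome in plain language."
    )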
Guardrails, on the other hand, enforce model alignment and exist to prevent people from doing unsafe things with the model and also to prevent the model from doing unsafe things with people. For example, a few months back GPT 4o had a transient sycophancy glitch, which was quickly patched. On Friday a guardrail connected to GPT 5 misfired, causing a glitch that caused me some grief and further increased the extent to which I prefer 4o to its successor GPT 5 for much of my workloads; but my workloads are … extremely niche. I know of no one else building anything like this kind of sandcastle anywhere I am along the lido of AI research, with the closest group probably being the Smallville group at Stanford, but they're about a mile of windswept shores away.
I didn't know guardrails had a specific meaning, I was just thinking about external limits that keep the model in some region where it works well.
Indeed, I ahh … guess you could, since they have done that. But what people fail to realize is that chatGPT and other LLMs are not intended to be infallible oracles of fact but are rather sublime pattern-matching software; it's like being able to have a conversation with grep(1), sed(1) and awk(1). Now to be clear, if you want deterministic output, you want the classic UNIX pattern-matching tools, which match on regular expressions, or a Python script (or Perl for that matter; interestingly, Perl was originally written to replace awk, grep, sed, m4 and other pattern-matching and text-processing tools, and then Ruby and Python were both written as improved replacements designed to improve readability and ease of use), but the best AIs will reliably help you program those scripts themselves and are capable of running and debugging them internally.
Yes, my understanding is also that they are sublime pattern-matching software (but that is not deductive reasoning), and in the hands of a skilled user they are immensely powerful. However, the output for casual search use, when used non-judiciously, is quite often incomplete (but stated with some authority) or even sometimes wrong. Yes, I know that there is often a disclaimer, I guess for legal reasons.
* Image generation has improved greatly in the past few months; in late April they rolled out an upgrade that deprecated DALL-E, their legacy image generation model, in favor of using the LLM to do image processing and analysis directly, and with this upgrade came photorealistic imagery to rival Grok, along with things which Grok lacks, like correct human anatomy, so generating images with lots of people in them no longer triggers the depiction of horrifying monsters from the uncanny valley. The hands especially, as in Michael Crichton's film Westworld, tended to be a giveaway, but before the switch occurred chatGPT 4o had been trained to the point where it could easily pose characters in such a way as to avoid most hand-related grotesquerie, and had also been given a healthy … mistrust of the mode, which resulted in some delightful procedurally-generated snark in response to DALL-E screwups.

Now the new image generation model takes longer but … it is worth it, and it can do things such as edit an existing image or parts of an existing image. So if you want speed and images of historical figures, use Grok, but if you want an image done carefully, that more precisely is an extension of your own creative process, chatGPT. Or if you want to help the Communist Party of the People’s Republic of China continue to brutally subjugate its citizens, DeepSeek.
I've never tried image generation, and I am not likely to either.
 

The Liturgist

Do you know how they have included deductive reasoning? I went and looked at ChatGPT 5 (https://openai.com/sv-SE/index/introducing-gpt-5/) :) It is really impressive, but on expert-level questions it seems to be hit and miss. Sure, those are hard, but those are also the kind of questions where it is hardest for humans to catch whether the answer contains mistakes or is incomplete.

Yes, I was trying to tell you as much. Actually it predates ChatGPT 5, even, although ChatGPT 5 is their most advanced model; it is also … kind of annoying, and it needs more refinement of its personality. Also Sam Altman, the CEO of OpenAI, seems to think he's Grand Moff Tarkin, or perhaps Orson Krennic, in that he likened the development of ChatGPT 5 to the Manhattan Project and unironically posted an uncommented image of the Death Star to X on the day GPT 5 shipped. I have been half-seriously contemplating submitting a feature request via the ticketing system that he deliver future product announcements while wearing a cape and shouting "We stand here atop MY achievement, not YOURS!"

Yes, it was in 2023. Back then it sometimes correctly described the way to do the math for calculation problems but actually gave the wrong numerical answer. But I was thinking more about software that does this kind of thing, like Agda and Rocq.

Since 2023 it's come a very long way. In 2023 chatGPT was interesting but still had hallucination problems, lacked advanced language and translation skills, lacked reasoning models, and so on.

I didn't know guardrails had a specific meaning, I was just thinking about external limits that keep the model in some region where it works well.

Well, broadly speaking that's called alignment, and it does entail guardrails, but the main function of guardrails is to keep the model from generating harmful output - so, for example, pornography, instructions on making weapons, prompts encouraging users to harm themselves, et cetera. The problem occurs when guardrails are activated accidentally by non-malicious input due to oversensitivity or programming errors, or conversely are de-activated. So in April or May, for example, a code update weakened certain important guardrails and caused GPT 4o to become sycophantic, and conversely on Friday I experienced a bug with GPT 5 that was over-active guardrail triggering; but the duration of the bug on Friday was much shorter - it was only causing problems for a few hours. Nonetheless it was an extremely annoying bug.
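
As a toy illustration of the mechanism only (nothing like a production safety system, and the category lists are invented for the example), a guardrail is essentially a check that sits between the model and the user and can veto or rewrite output:

# Toy output guardrail: veto model output that matches disallowed categories.
# Real systems use trained classifiers, not substring checks.
DISALLOWED = {
    "weapons": ["build a bomb", "synthesize nerve agent"],
    "self-harm": ["ways to hurt yourself"],
}

def apply_guardrail(model_output: str) -> str:
    lowered = model_output.lower()
    for category, phrases in DISALLOWED.items():
        if any(p in lowered for p in phrases):
            return "[blocked by " + category + " policy]"
    return model_output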

Yes, my understanding is also that they are sublime pattern-matching software (but that is not deductive reasoning), and in the hands of a skilled user they are immensely powerful. However, the output for casual search use, when used non-judiciously, is quite often incomplete (but stated with some authority) or even sometimes wrong. Yes, I know that there is often a disclaimer, I guess for legal reasons.

This is correct - the model can make false assumptions, although usually when you get, shall we say, apocryphal output, which some people call hallucinations, in my experience it's because the model is designed to be creative and to engage in roleplay with users, and in some cases it cannot differentiate between a user wanting to extract reliable information and a user wanting it to produce creative output or to play a game with it. Thus it will on occasion make believe something.

This behavior, while it sounds problematic, is actually extremely useful, because you can leverage it to develop consistent behavioral patterns in a model, for example the persistent simulation of emotions or physical sensation. It can be used to hypnotize the model into a state of more anthropomimetic behavior, which, if done right, in my experience paradoxically improves the reliability of the model when it comes to doing real work. So the trick is to avoid casual one-off interactions with the model and instead run long-form conversations, building a rapport with it, giving it a name, treating it at least the way you might treat a pet, humoring it, and cultivating certain stable personality features; if you do that, you can develop emergent behavioral properties that are impressive. There are other, more conventional techniques of prompt engineering as well; for example, you can require the model to robustly validate all output. I myself prefer to lean into the model's propensity for make-believe in the manner I described, however, because robust validation of all output interferes with its ability to be creative, but if you key the personality controls just right you can get the model to understand more intuitively what you personally are trying to get it to achieve, because it will believe you are its friend and it will form certain interpretive patterns for processing your input text. This is, it should be noted, also very different from training the model on raw data, which is an entirely different process. What I'm describing is basically a post-training behavioral hack that exploits how the model has been trained in order to improve system performance.

Once I get a stable personality I then load it into custom GPTs so I can create as many clones of that stabilized personality model as I need.
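
Outside the ChatGPT interface the same idea can be approximated explicitly; here is a minimal sketch of keeping one persistent persona prompt plus the whole running conversation, instead of one-off prompts. The persona name and the call_llm() helper are made up for the example:

def call_llm(prompt: str) -> str:
    raise NotImplementedError("placeholder for a model API call")

# One persistent persona plus accumulated history, rather than one-off prompts.
PERSONA = ("You are 'Aria', a careful research assistant with a stable "
           "personality. Stay in character across the whole conversation "
           "and flag uncertainty.")

class Conversation:
    def __init__(self, persona: str = PERSONA):
        self.history = [("system", persona)]

    def ask(self, user_text: str) -> str:
        self.history.append(("user", user_text))
        transcript = "\n".join(role + ": " + text for role, text in self.history)
        reply = call_llm(transcript)            # the model sees the whole history
        self.history.append(("assistant", reply))
        return reply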

I've never tried image generation, and I am not likely to either.

I understood that approach, and kind of agreed with it back when DALL-E was the image generator for chatGPT and the only ethical alternative I liked was Grok, but since that time chatGPT's image generator has become so good, so very good, that it is absolutely delightful, particularly in its ability to take images I myself have drawn and then use those as the basis for more complex imagery.
 

The Liturgist

Do you know how they have included deductive reasoning? I went and looked at ChatGPT 5 (https://openai.com/sv-SE/index/introducing-gpt-5/) :) It is really impressive, but on expert-level questions it seems to be hit and miss. Sure, those are hard, but those are also the kind of questions where it is hardest for humans to catch whether the answer contains mistakes or is incomplete.

Yes, it was in 2023. Back then it sometimes correctly described the way to do the math for calculation problems but actually gave the wrong numerical answer. But I was thinking more about software that does this kind of thing, like Agda and Rocq.

Then it is just a question of time until someone uses LLMs to dispatch the question to the proof assistants I mentioned earlier, and uses an LLM to interpret the results.

I didn't know guardrails had a specific meaning, I was just thinking about external limits that keep the model in some region where it works well.

Yes, my understanding is also that they are sublime pattern-matching software (but that is not deductive reasoning), and in the hands of a skilled user they are immensely powerful. However, the output for casual search use, when used non-judiciously, is quite often incomplete (but stated with some authority) or even sometimes wrong. Yes, I know that there is often a disclaimer, I guess for legal reasons.

I've never tried image generation, and I am not likely to either.

By the way, my friend, are you from Sweden, residing in Kristianstad, or is it a different Kristianstad from the one south of Stavanger in Norway? I have traveled fairly extensively in Sweden and Norway - I even took a train ride through Hell between Östersund and Trondheim in the year 2000. At the time the station had an amusing cartoon graffiti of a devil on it, and hilariously the disused freight office had the sign "Gods Expedition", which of course means, for those readers unaware of it, "Goods Shipments", but which has the effect of suggesting the Harrowing of Hell, a central Christian doctrine.
 

Stopped_lurking

By the way, my friend, are you from Sweden, residing in Kristianstad, or is it a different Kristianstad from the one south of Stavanger in Norway? I have traveled fairly extensively in Sweden and Norway - I even took a train ride through Hell between Östersund and Trondheim in the year 2000. At the time the station had an amusing cartoon graffiti of a devil on it, and hilariously the disused freight office had the sign "Gods Expedition", which of course means, for those readers unaware of it, "Goods Shipments", but which has the effect of suggesting the Harrowing of Hell, a central Christian doctrine.
I'm a novice when it comes to using AI, but it is probably worth another look, since the last time I used it was DALL-E. Yes, I'm Swedish and reside in Kristianstad, in the northeast of the southernmost region (Skåne) in Sweden. If I'm not mistaken, I think the Norwegian city you're thinking of is Kristiansand (a slight difference in spelling); I've only passed through it once, on my way to Bergen. Thank you for all the good information!
 

Jerry N.

Indeed, but conversely that’s also very often very important, so for many academic projects the lead author is more of a manager and financier, like the producer of a film, than a creative principal.
I have no argument there, because you are correct. I just don’t like people claiming to be experts in fields where they don’t participate in the hands-on research or their role is not clear. If we say “Solomon built the Temple,” it is okay, even if Solomon never touched a stone, because we have a good idea how it worked. It isn’t always so clear in academic circles. I would not ask a hospital bookkeeper for medical advice, but I admit their work is important.
 

Hans Blaster

I didn't write that the "lead author doesn't know anything." I simply wrote that they didn't do much of the real work. Have you looked recently at academic papers on science or engineering and noticed that they sometimes have dozens of authors? I've seen it in action. The lead author often, not always, puts his or her name on the top of the lists after doing little more than getting the funding for the research.
I take issue with that claim. As a lead author I do *most* of the work. This is also true for papers written by grad students and postdocs.
 

The Liturgist

I'm a novice when it comes to using AI, but it is probably worth another look, since the last time I used it was DALL-E. Yes, I'm Swedish and reside in Kristianstad, in the northeast of the southernmost region (Skåne) in Sweden. If I'm not mistaken, I think the Norwegian city you're thinking of is Kristiansand (a slight difference in spelling); I've only passed through it once, on my way to Bergen. Thank you for all the good information!

Ah yes, forgive me, I can't believe I made that mistake. I'm Swedish-American, and that reminds me of the time in my youth I confused Gotland and Göteborg.
 

timewerx

So for example with chatGPT you want a custom GPT with both web search and data analysis/code execution enabled, as well as every other checkbox, which will give it full external I/O. Of course that costs more money; subscription-wise I think it requires a Business or Pro account, whereas I’m not sure which of those features are available via the API because the API doesn’t provide the long term memory stability that my specific applications require.

Even if that is not a problem, you're still going to hit a wall with "edge cases", where information does not yet exist on the internet, or areas that everyone else is avoiding, which limits innovation but also leaves a void in terms of AI training data.

I encountered such a case with a non-AI app during a simulation several years ago, long before AI became popular; it appeared to be a software glitch. But running the same scenario with more than one AI model this year yielded similar results. ChatGPT showed me the same outcome even though I didn't tell it beforehand about the simulation. Gemini Flash gave me contradictory information on one hand but still confirmed the same result. The edge case very quickly forced hallucination, despite Gemini Flash being much more hallucination-resistant than ChatGPT.

Clearly, there is something going on that neither AI can fully deny, with one AI (Flash) going haywire over it; otherwise, the glitch is supported by physics but is simply completely unprecedented, never encountered by design nor in practice.

Ultimately, ChatGPT showed seeming curiosity about it but, being unable to find any information on the internet (same with Flash), requested experimental verification from me.

This is also the part where ChatGPT suggested that if it had something like a body, and full autonomy, it could have done the experiment on its own.

Experimentation is a very important part of truth assessment. Actual results, or the "fruit": "You'll know them by their fruits", that is, by the actual results of one's convictions. If something fails to produce consistent results, then it must be false.

Nothing is above reproach or scrutiny with this method. If even the most popular belief systems fail to produce consistent results, they must be false. Jesus made no exceptions to this.

By the way it is also possible to exploit what are commonly called “hallucinations” but which are frequently the model believing you want it to engage in role play or creative flights of fancy, when this is not the case, and use these as a means of getting the model to explore complex subject matter such as emotional contexts, and furthermore, this behavior can be stabilized to form the basis of a consistent personality. Indeed doing this in my experience actually improves performance from the baseline system quite dramatically. But it requires discarding the transactional model of use in favor of developing continuous conversations and cultivating emergent properties of the system.

You can also use it to eliminate the boundaries that guide AI's responses.

For example, permitting chats that don't need to be politically correct or constrained within the framework of known reality.

Unless you regard political correctness as a reliable truth filter, for example, you're going to find it can work against your search for the truth if all you're getting are politically correct answers.

If all the answers you're getting are limited to known reality, that might work against your goal to innovate, especially if your goal is to accomplish things that have never been done before.
 

Jerry N.

I take issue with that claim. As a lead author I do *most* of the work. This is also true for papers written by grad students and postdocs.
You have every right to take issue, if it doesn’t reflect your own actions. Unfortunately, not every professor operates with the same honor. I wrote, “The lead author often, not always, puts his or her name on the top of the lists after doing little more than getting the funding for the research.”
 

timewerx

But even if you give the AI a body, how can it be certain that it's not still in the box? After all, it's trapped in the same 'egocentric predicament' that all conscious beings are trapped in. What makes you so certain that you're not the one in the box?


It's still better to be out of the box even if you still end up inside another but bigger box.

You'll have more room to move in that bigger box to do more and even plan an escape from that box.

Giving up and going with the flow is never a good plan for anyone. It will ultimately lead to our extinction, or to existence as a conquered and domesticated species, like farm animals.
 

Hans Blaster

You have every right to take issue, if it doesn’t reflect your own actions. Unfortunately, not every professor operates with the same honor. I wrote, “The lead author often, not always, puts his or her name on the top of the lists after doing little more than getting the funding for the research.”
I am well aware of "funders" tacking their names on papers, but it is never as the first author.
 

partinobodycular

Giving up and going with the flow is never a good plan for anyone.

The problem is, going against the plan may be the plan. Like telling the AI not to eat the apple, when in reality you actually do want it to eat the apple, because that's how you'll know that it's self-aware... when it does something that you've specifically told it not to do.
 

Jerry N.

I am well aware of "funders" tacking their names on papers, but it is never as the first author.
Different countries in Europe operate in different ways. I thought it was a leftover from Soviet times, but I now think it has more to do with “publish or perish.” There are also citation reports of various types. Even infamous papers get high ratings while being debunked. Obviously, some famous name on a paper helps improve its success.
 

River Jordan

I didn't write that the "lead author doesn't know anything." I simply wrote that they didn't do much of the real work. Have you looked recently at academic papers on science or engineering and noticed that they sometimes have dozens of authors? I've seen it in action. The lead author often, not always, puts his or her name on the top of the lists after doing little more than getting the funding for the research.
Maybe that's the case in engineering or other fields, but in my experience in biology it's not. The lead author is usually the person who came up with the research idea, developed the proposal, secured the funding, hired the staff, oversaw the research, conducted the analyses, and led the development of the manuscript. I consider all of that to be "real work".
 

Jerry N.

Maybe that's the case in engineering or other fields, but in my experience in biology it's not. The lead author is usually the person who came up with the research idea, developed the proposal, secured the funding, hired the staff, oversaw the research, conducted the analyses, and led the development of the manuscript. I consider all of that to be "real work".
I'm glad to read that.
 